# What single step does with relationship

We had a journal club about the single step GBLUP method for genomic evaluation a few weeks ago. In this post, we’ll make a few graphs of how the single step method models relatedness between individuals.

Imagine you want to use genomic selection in a breeding program that already has a bunch of historical pedigree and trait information. You could use some so-called multistep evaluation that uses one model for the classical pedigree + trait quantitative genetics and one model for the genotype + trait genomic evaluation, and then mix the predictions from them together. Or you could use the single-step method, which combines pedigree, genotypes and traits into one model. It does this by combining the relationship estimates from pedigree and genotypes. That matrix can then go into your mixed model.

We’ll illustrate this with a tiny simulated population: five generations of 100 individuals per generation, where ten random pairings produce the next generation, with families of ten individuals. (The R code is on Github and uses AlphaSimR for simulation and AGHmatrix for matrices). Here is a heatmap of the pedigree-based additive relationship matrix for the population:

What do we see? In the lower left corner are the founders, and not knowing anything about their heritage, the matrix has them down as unrelated. The squares of high relatedness along the diagonal are the families in each generation. As we go upwards and to the right, relationship is building up.
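
The pedigree-based relationship matrix itself is easy to build with the so-called tabular method. The post’s actual code uses AGHmatrix in R; here is a minimal Python/numpy sketch with a made-up four-individual pedigree:

```python
import numpy as np

def additive_relationship(sire, dam):
    """Tabular method for the pedigree-based additive relationship
    matrix A. sire[i]/dam[i] are the parent indices of individual i
    (-1 for unknown founders); parents must come before offspring."""
    n = len(sire)
    A = np.zeros((n, n))
    for i in range(n):
        s, d = sire[i], dam[i]
        # diagonal: 1 + inbreeding = 1 + 0.5 * a(sire, dam)
        A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)
        for j in range(i):
            a = 0.0
            if s >= 0:
                a += 0.5 * A[j, s]
            if d >= 0:
                a += 0.5 * A[j, d]
            A[i, j] = A[j, i] = a
    return A

# toy pedigree: two founders, their offspring, and a
# parent-offspring mating producing an inbred individual
sire = [-1, -1, 0, 0]
dam  = [-1, -1, 1, 2]
A = additive_relationship(sire, dam)
```

The diagonal entries are 1 plus the inbreeding coefficient, which is why the inbred individual gets a diagonal value of 1.25 while the founders sit at exactly 1.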

Now, imagine the last generation of the population also has been genotyped with a SNP chip. Here is a heatmap of their genomic relationship matrix:

Genomic relationship is more detailed. We can still discern the ten families within the last generation, but no longer are all the siblings equally related to each other and to their ancestors. The genotyping helps track segregation within families, pointing out to us when relatives are more or less related than the average that we get from the pedigree.
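
The post doesn’t spell out which genomic relationship matrix it uses, but a common choice is VanRaden’s first method: centre the genotypes by twice the allele frequency and scale by the sum of heterozygosities. A sketch with simulated genotypes (made-up data, not the post’s simulation):

```python
import numpy as np

def vanraden_G(M):
    """VanRaden's first genomic relationship matrix.
    M: individuals x markers genotype matrix coded 0/1/2."""
    p = M.mean(axis=0) / 2.0      # estimated allele frequencies
    Z = M - 2.0 * p               # centre by twice the frequency
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(20, 500))  # 20 individuals, 500 SNPs
G = vanraden_G(M)
```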

Enter the single-step relationship matrix. The idea is to put the genomic relationships for the genotyped individuals into the big pedigree-based relationship matrix, and then adjust the rest of the matrix to propagate the extra information we now have from the genotyped individuals to their ungenotyped relatives. Here is the resulting heatmap:

You can find the matrix equations in Legarra, Aguilar & Misztal (2009). The matrix, called H, is broken down into four partitions called H11, H12, H21, and H22. H22 is the part that pertains to the genotyped animals, and it’s equal to the genomic relationship matrix G (after some rescaling). The others are transformations of G and the corresponding parts of the additive relationship matrix, spreading G onto A.

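
As a sketch of those equations (my own numpy translation, not code from the paper; it skips the blending and rescaling of G that is used in practice):

```python
import numpy as np

def single_step_H(A, G, genotyped):
    """Single-step relationship matrix H in the style of
    Legarra et al. (2009). A: full pedigree relationship matrix;
    G: genomic relationship matrix of the genotyped subset;
    genotyped: indices of the genotyped individuals in A."""
    n = A.shape[0]
    g = np.asarray(genotyped)
    u = np.setdiff1d(np.arange(n), g)   # ungenotyped indices
    A11 = A[np.ix_(u, u)]
    A12 = A[np.ix_(u, g)]
    A22 = A[np.ix_(g, g)]
    A22inv = np.linalg.inv(A22)
    H = np.empty_like(A)
    # spread the deviation (G - A22) onto the ungenotyped relatives
    H[np.ix_(u, u)] = A11 + A12 @ A22inv @ (G - A22) @ A22inv @ A12.T
    H[np.ix_(u, g)] = A12 @ A22inv @ G
    H[np.ix_(g, u)] = H[np.ix_(u, g)].T
    H[np.ix_(g, g)] = G
    return H

# sanity check: if G equals A22, H reduces to plain A
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.5, 0.5, 1.0]])
H = single_step_H(A, A[2:, 2:], [2])
```

The sanity check makes the logic visible: when the genomic information agrees exactly with the pedigree expectation, nothing propagates and H equals A.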

To show what is going on, here is the difference between the additive relationship matrix and the single-step relationship matrix, with lines delineating the genotyped animals and breaking the matrix into the four partitions:

What do we see? In the top right corner, we have a lot of difference, where the genomic relationship matrix has been plugged in. Then, fading as we go from top to bottom and from right to left, we see the influence of the genomic relationship on relatives, diminishing the further we get from the genotyped individuals.

Literature

Legarra, A., Aguilar, I., & Misztal, I. (2009). A relationship matrix including full pedigree and genomic information. Journal of Dairy Science, 92(9), 4656-4663.

# Excerpts about genomics in animal breeding

Here are some good quotes I’ve come across while working on something.

Artificial selection on the phenotypes of domesticated species has been practiced consciously or unconsciously for millennia, with dramatic results. Recently, advances in molecular genetic engineering have promised to revolutionize agricultural practices. There are, however, several reasons why molecular genetics can never replace traditional methods of agricultural improvement, but instead they should be integrated to obtain the maximum improvement in economic value of domesticated populations.

Lande R & Thompson R (1990) Efficiency of marker-assisted selection in the improvement of quantitative traits. Genetics.

Smith and Smith suggested that the way to proceed is to map QTL to low resolution using standard mapping methods and then to increase the resolution of the map in these regions in order to locate more closely linked markers. In fact, future developments should make this approach unnecessary and make possible high resolution maps of the whole genome, even, perhaps, to the level of the DNA sequence. In addition to easing the application of selection on loci with appreciable individual effects, we argue further that the level of genomic information available will have an impact on infinitesimal models. Relationship information derived from marker information will replace the standard relationship matrix; thus, the average relationship coefficients that this represents will be replaced by actual relationships. Ultimately, we can envisage that current models combining few selected QTL with selection on polygenic or infinitesimal effects will be replaced with a unified model in which different regions of the genome are given weights appropriate to the variance they explain.

Haley C & Visscher P. (1998) Strategies to utilize marker–quantitative trait loci associations. Journal of Dairy Science.

Instead, since the late 1990s, DNA marker genotypes were included into the conventional BLUP analyses following Fernando and Grossman (1989): add the marker genotype (0, 1, or 2, for an animal) as a fixed effect to the statistical model for a trait, obtain the BLUP solutions for the additive polygenic effect as before, and also obtain the properly adjusted BLUE solution for the marker’s allele substitution effect; multiply this BLUE by 0, 1, or 2 (specific for the animal) and add the result to the animal’s BLUP to obtain its marker-enhanced EBV. A logical next step was to treat the marker genotypes as semi-random effects, making use of several different shrinkage strategies all based on the marker heritability (e.g., Tsuruta et al., 2001); by 2007, breeding value estimation packages such as PEST (Neumaier and Groeneveld, 1998) supported this strategy as part of their internal calculations. At that time, a typical genetic evaluation run for a production trait would involve up to 30 markers.

Knol EF, Nielsen B, Knap PW. (2016) Genomic selection in commercial pig breeding. Animal Frontiers.

Although it has not caught the media and public imagination as much as transgenics and cloning, genomics will, I believe, have just as great a long-term impact. Because of the availability of information from genetically well-researched species (humans and mice), genomics in farm animals has been established in an atypical way. We can now see it as progressing in four phases: (i) making a broad sweep map (~20 cM) with both highly informative (microsatellite) and evolutionary conserved (gene) markers; (ii) using the informative markers to identify regions of chromosomes containing quantitative trait loci (QTL) controlling commercially important traits–this requires complex pedigrees or crosses between phenotypically and genetically divergent strains; (iii) progressing from the informative markers into the QTL and identifying trait gene(s) themselves either by complex pedigrees or back-crossing experiments, and/or using the conserved markers to identify candidate genes from their position in the gene-rich species; (iv) functional analysis of the trait genes to link the genome through physiology to the trait–the ‘phenotype gap’.

Bulfield G. (2000) Biotechnology: advances and impact. Journal of the Science of Food and Agriculture.

I believe animal breeding in the post-genomic era will be dramatically different to what it is today. There will be massive research effort to discover the function of genes including the effect of DNA polymorphisms on phenotype. Breeding programmes will utilize a large number of DNA-based tests for specific genes combined with new reproductive techniques and transgenes to increase the rate of genetic improvement and to produce for, or allocate animals to, the product line to which they are best suited. However, this stage will not be reached for some years by which time many of the early investors will have given up, disappointed with the early benefits.

Goddard M. (2003). Animal breeding in the (post-) genomic era. Animal Science.

Genetics is a quantitative subject. It deals with ratios, with measurements, and with the geometrical relationships of chromosomes. Unlike most sciences that are based largely on mathematical techniques, it makes use of its own system of units. Physics, chemistry, astronomy, and physiology all deal with atoms, molecules, electrons, centimeters, seconds, grams–their measuring systems are all reducible to these common units. Genetics has none of these as a recognizable component in its fundamental units, yet it is a mathematically formulated subject that is logically complete and self-contained.

Sturtevant AH & Beadle GW. (1939) An introduction to genetics. W.B. Saunders company, Philadelphia & London.

We begin by asking why genes on nonhomologous chromosomes assort independently. The simple cytological story rehearsed above answers the questions. That story generates further questions. For example, we might ask why nonhomologous chromosomes are distributed independently at meiosis. To answer this question we could describe the formation of the spindle and the migration of chromosomes to the poles of the spindle just before meiotic division. Once again, the narrative would generate yet further questions. Why do the chromosomes ”condense” at prophase? How is the spindle formed? Perhaps in answering these questions we would begin to introduce the chemical details of the process. Yet simply plugging a molecular account into the narratives offered at the previous stages would decrease the explanatory power of those narratives.

Kitcher, P. (1984) 1953 and all that. A tale of two sciences. Philosophical Review.

And, of course, this great quote by Jay Lush.

# There is no breeder’s equation for environmental change

This post is about why heritability coefficients of human traits can’t tell us what to do. Yes, it is pretty much an elaborate subtweet.

Let us begin in a different place, where heritability coefficients are useful, if only a little. Imagine there is selection going on. It can be natural or artificial, but it’s selection the old-fashioned way: there is some trait of an individual that makes it more or less likely to successfully reproduce. We’re looking at one generation of selection: there is one parent generation, some of which reproduce and give rise to the offspring generation.

Then, if we have a well-behaved quantitative trait, no systematic difference between the environments that the two generations experience (also, no previous selection; this is one reason I said ‘if only a little’), we can get an estimate of the response to selection, that is how the mean of the trait will change between the generations:

$R = h^2S$

R is the response. S, the selection differential, is the difference between the mean of the whole parental generation and the mean of the selected parents, and thus measures the strength of the selection. $h^2$ is the infamous heritability, which measures the accuracy of the selection.

That is, the heritability coefficient tells you how well the selection of parents is reflected in the offspring traits. A heritability coefficient of 1 would mean that selection is perfect; you can just look at the parental individuals, pick the ones you like, and get the whole selection differential as a response. A heritability coefficient of 0 means that looking at the parents tells you nothing about what their offspring will be like, and selection thus does nothing.
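
As a toy calculation with made-up numbers:

```python
# breeder's equation R = h^2 * S, with invented example values
trait_mean_all = 100.0       # mean of the whole parental generation
trait_mean_selected = 110.0  # mean of the selected parents
h2 = 0.4                     # heritability

S = trait_mean_selected - trait_mean_all  # selection differential = 10
R = h2 * S                                # expected response = 4
offspring_mean = trait_mean_all + R       # expected offspring mean = 104
```

Only 40% of the ten-unit selection differential shows up in the next generation, because selecting on phenotypes is an imperfect way of selecting on breeding values.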

Conceptually, the power of the breeder’s equation comes from the mathematical properties of selection, and the quantitative genetic assumptions of a linear parent–offspring relationship. (If you’re a true connoisseur of theoretical genetics or a glutton for punishment, you can derive it from the Price equation; see Walsh & Lynch (2018).) It allows you to look (one generation at a time) into the future only because we understand what selection does and assume reasonable things about inheritance.

We don’t have that kind of machinery for environmental change.

Now, another way to phrase the meaning of the heritability coefficient is that it is a ratio of variances, namely the additive genetic variance (which measures the trait variation that runs in families) divided by the total variance (which measures the total variation in the population, duh). This is equally valid, more confusing, and also more relevant when we’re talking about something like a population of humans, where no breeding program is going on.

Thus, the heritability coefficient is telling us, in a specific highly geeky sense, how much of trait variation is due to inheritance. Anything we can measure about a population will have a heritability coefficient associated with it. What does this tell us? Say, if drug-related crime has yay big heritability, does that tell us anything about preventing drug-related crime? If heritability is high, does that mean interventions are useless?

The answers should be evident from the way I phrased those rhetorical questions and from the above discussion: There is no theoretical genetics machinery that allows us to predict the future if the environment changes. We are not doing selection on environments, so the mathematics of selection don’t help us. Environments are not inherited according to the rules of quantitative genetics. Nothing prevents a trait from being eminently heritable while responding even more strongly to changes in the environment, or vice versa.

(There is also the argument that quantitative genetic modelling of human traits matters because it helps control for genetic differences when estimating other factors. One has to be more sympathetic towards that, because who can argue against accurate measurement? But ought implies can. For quantitative genetic models to be better, they need to solve the problems of identifying variance components and overcoming population stratification.)

Much criticism of heritability in humans concerns estimation problems. These criticisms may be valid (estimation is hard) or silly (of course, lots of human traits have substantial heritabilities), but I think they miss the point. Even if accurately estimated, heritabilities don’t do us much good. They don’t help us with the genetic component, because we’re not doing breeding. They don’t help us with the environmental component, because there is no breeder’s equation for environmental change.

# Paper: ‘Removal of alleles by genome editing (RAGE) against deleterious load’

Our new paper is about using predicted deleterious variants in animal breeding. We use simulation to look at the potential to improve livestock fitness by either selecting on detected deleterious variants or removing deleterious alleles by genome editing.

Summary

Deleterious variants occur when errors in DNA replication disrupt the function of a gene. Such errors are frequent enough that all organisms carry mildly deleterious variants. Geneticists describe this as a deleterious load, which causes organisms to be less healthy and fit than they could have been if these errors hadn’t happened. Load is especially pertinent to livestock populations, because of their relatively small population sizes and inbreeding.

Historically, it has not been possible to observe deleterious variants directly, but as genome sequencing becomes cheaper and new bioinformatic methods are being developed, we can now sequence livestock and detect variants that are likely to be deleterious.

In this study, we used computer simulation to see how future breeding strategies involving selection or genome editing could be used to reduce deleterious load. We tested selection against deleterious load and a genome editing strategy we call RAGE (Removal of Alleles by Genome Editing) in simulated livestock populations to see how they improved fitness. The simulations suggest that selecting on deleterious variants identified from genome sequencing may help improve fitness of livestock populations, and that genome editing to remove deleterious variants could improve them even more.

For these strategies to be effective, it is important that detection of deleterious variants is accurate, and genome editing of more than one variant per animal would need to become possible without damaging side effects. Future research on how to measure deleterious load in large sequence datasets from livestock animals, and on how to perform genome editing safely and effectively, will be important.

Figure 2 from the paper, showing the average fitness of simulated populations (y-axis) over the generations of breeding (x-axis) with different types of future breeding against deleterious variants.

‘RAGE against …’, what’s with the acronym?

We are very happy with the acronym. In addition to making at least two pop culture references, it’s also a nod to Promotion of Alleles by Genome Editing (PAGE) from Jenko et al. (2015). I like that the acronyms, both PAGE and RAGE, emphasise that we’re dealing with alleles that already exist within a population. We propose using genome editing as a way to promote alleles we like and remove alleles we don’t like in addition to classical breeding. The fancy new biotechnology does not replace selection, but supplements it.

Do you really think one should genome edit farm animals?

Yes, if all the bio- and reproductive technology can be made to work! Currently, genome editing methods like Crispr/Cas9 require many attempts to achieve a precise edit to the desired allele at a single locus, and they don’t scale to multiple edits in the same animal … Not yet. But lots of smart people are competing to make it happen.

Genome editing of farm animals would also need a lot of reproductive technology that currently isn’t really there (though it is probably further along for cattle than for other species). Again, lots of clever people are working on it.

If it can be made to work, genome editing could be a useful breeding method.

What about the ethics of genome editing?

We don’t discuss ethics much in the paper. In one simple sense, that is because ethics isn’t our expertise. I also think a discussion of the ethics of RAGE, much like an informed discussion about the economics of it, requires empirical knowledge that we don’t have yet.

I am not of the opinion that there is a dignity or integrity to the genome that would prohibit genome editing as a rule. So the question is not ‘genome editing or not’, but ‘under what circumstances and for what applications is genome editing useful and justified?’ and ‘are the benefits of RAGE, PAGE, or whatever -GE, enough to outweigh the risks and costs?’. There is room for uncertainty and disagreement about those questions.

For a good discussion of the ethics of genome editing that is likely to raise more questions than it answers, see Eriksson et al. (2018). Among other things, they make the point that advanced reproductive technologies are a precondition for genome editing, but that this precondition kind of slips out of the discussion sometimes. I think the most pressing question, both from the ethical and the economic perspective, is whether the benefits of genome editing are enough to justify widespread use of reproductive technologies (in species where that isn’t already commonplace). I also like how they make the point that one needs to look at the specific applications of genome editing, in context, when evaluating them.

The simulation looks nifty! I want to simulate breeding programs like that!

You can! The simulations used the quantitative genetic simulation R package AlphaSimR with some modifications for simulating the fitness traits. There is code with the paper. Here are also the slides from when I talked about the paper at the Edinburgh R user group.

You make a ton of assumptions!

We do. Some of them are extremely uncontroversial (the basic framework of segregation and recombination during inheritance), some we can get some idea about by looking at the population genetics literature (we’ve taken inspiration from estimates of deleterious mutation rates and effect distributions estimated from humans), and some we don’t have much knowledge about at all (how does load of deleterious variants relate to the production, reproduction and health traits that are important to breeding? The only way to know is to measure). If you read the paper, don’t skip that part of the Discussion.

Would this work in plants?

Yes, probably! Plant breeding programs are a bit different, so I guess one should simulate them to really know. RAGE would be a part of the ‘Breeding 4.0’ logic of Wallace, Rodgers-Melnick & Buckler (2018). In many ways the problems are smaller in plants, with less unknown reproductive technology that needs to be invented first, and an easier time field-testing edited individuals.

Literature

Johnsson M, Gaynor RC, Jenko J, Gorjanc G, de Koning D-J, Hickey, JM. (2019) Removal of alleles by genome editing (RAGE) against deleterious load. Genetics Selection Evolution.

Jenko J, Gorjanc G, Cleveland MA, Varshney RK, Whitelaw CBA, Woolliams JA, Hickey JM. (2015). Potential of promotion of alleles by genome editing to improve quantitative traits in livestock breeding programs. Genetics Selection Evolution.

Eriksson, S., Jonas, E., Rydhmer, L., & Röcklinsberg, H. (2018). Invited review: Breeding and ethical perspectives on genetically modified and genome edited cattle. Journal of Dairy Science, 101(1), 1-17.

Wallace, J. G., Rodgers-Melnick, E., & Buckler, E. S. (2018). On the road to Breeding 4.0: unraveling the good, the bad, and the boring of crop quantitative genomics. Annual Review of Genetics, 52, 421-444.

# ‘Any distinction in principle between qualitative and quantitative characters disappeared long ago’

Any distinction in principle between qualitative and quantitative characters disappeared long ago, although in the early days of Mendelism it was often conjectured that they might be inherited according to fundamentally different laws.

If it is still convenient to call some characters qualitative and others quantitative, it is only to denote that the former naturally have a discontinuous and the latter a continuous distribution, or that the former are not easily measured on a familiar metrical scale. Colors are an example. Differences between colors can be measured in terms of length of light waves, hue, brilliance etc., but most of us find it difficult to compare those measurements with our own visual impressions.

Most quantitative characters are affected by many pairs of genes and also importantly by environmental variations. It is rarely possible to identify the pertinent genes in a Mendelian way or to map the chromosomal position of any of them. Fortunately this inability to identify and describe the genes individually is almost no handicap to the breeder of economic plants or animals. What he would actually do if he knew the details about all the genes which affect a quantitative character in that population differs little from what he will do if he merely knows how heritable it is and whether much of the hereditary variance comes from dominance or overdominance, and from epistatic interactions between the genes.

(That last part might not always be true anymore, but it still remained on point for more than half the time that genetics as a discipline has existed.)

Jay L Lush (1949) Heritability of quantitative characters in farm animals

# Journal club of one: ‘Biological relevance of computationally predicted pathogenicity of noncoding variants’

Wouldn’t it be great if we had a way to tell genetic variants that do something to gene function and regulation from those that don’t? This is a Really Hard Problem, especially for variants that fall outside of protein-coding regions, and thus may or may not do something to gene regulation.

There is a host of bioinformatic methods to tackle the problem, and they use different combinations of evolutionary analysis (looking at how often the position of the variant differs between or within species) and functional genomics (what histone modifications, chromatin accessibility etc are like at the location of the variant) and statistics (comparing known functional variants to other variants).

When a new method is published, it’s always accompanied by a receiver operating characteristic curve showing it predicting held-out data well, and by some combination of comparisons to other methods and analyses of other datasets of known or presumed functional variants. However, one wonders how these methods will do when we use them to evaluate unknown variants in the lab, or eventually in the clinic.

This is what this paper, Liu et al. (2019) ‘Biological relevance of computationally predicted pathogenicity of noncoding variants’, is trying to do. They construct three test cases that are supposed to be more realistic (pessimistic) test beds for six noncoding variant effect predictors:

1. Find out which allele of a variant is the deleterious one. The presumed deleterious test alleles here are ones that don’t occur in any species of a large multiple genome alignment.
2. Find a causative variant among a set of linked variants. The test alleles are causative variants from the Human Gene Mutation Database and some variants close to them.
3. Enrich for causative variants among increasingly bigger sets of non-functional variants.

In summary, the methods don’t do too well. The authors conclude that they show ‘underwhelming performance’. That isn’t happy news, but I don’t think it’s such a surprise. Noncoding variant prediction is universally acknowledged to be tricky. In particular, looking at Task 3, the predictors are bound to look much less impressive in the face of class imbalance than in those receiver operating characteristic curves. Then again, class imbalance is going to be a fact when we go out to apply these methods to our long lists of candidate variants.
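
A back-of-the-envelope calculation shows why. With made-up sensitivity and specificity (not numbers from the paper), the fraction of predicted positives that are truly causative collapses when causative variants are rare:

```python
def precision(sensitivity, specificity, prevalence):
    """Fraction of predicted positives that are truly causative,
    given a classifier's sensitivity and specificity and the
    fraction of tested variants that are causative."""
    tp = sensitivity * prevalence                  # true positive mass
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positive mass
    return tp / (tp + fp)

# a classifier that looks fine on a balanced test set ...
balanced = precision(0.9, 0.9, 0.5)      # 90% of calls are correct
# ... falls apart when causative variants are 1 in 1000
imbalanced = precision(0.9, 0.9, 0.001)  # under 1% of calls are correct
```

Sensitivity and specificity, and thus the ROC curve, don’t change at all between the two scenarios; only the proportion of causative variants does.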

Task 1 isn’t that well suited to the tools, and the way it’s presented is a bit silly. After describing how they compiled their evolution-based test variant set, the authors write:

Our expectation was that a pathogenic allele would receive a significantly higher impact score (as defined for each of the six tested methods) than a non-pathogenic allele at the same position. Instead, we found that these methods were unsuccessful at this task. In fact, four of them (LINSIGHT, EIGEN, GWAVA, and CATO) reported identical scores for all alternative alleles at every position as they were not designed for allelic contrasts …

Sure, it’s hard to solve this problem with a program that only produces one score per site, but you knew that when you started writing this paragraph, didn’t you?

The whole paper is useful, but to me, the most interesting insight is that variants close to each other tend to have correlated features, meaning that there is little power to tell them apart (Task 2). This might be obvious if you think about it (e.g., if two variants fall in the same enhancer, how different can their chromatin state and histone modifications really be?), but I guess I haven’t thought that hard about it before. This high correlation is unfortunate, because that means that methods for finding causative variants (association and variant effect prediction) have poor spatial resolution. We might need something else to solve the fine mapping problem.

Figure 4 from Liu et al., showing correlation between features of linked variants.

Finally, shout-out to Reviewer 1 whose comment gave rise to these sentences:

An alternative approach is to develop a composite score that may improve upon individual methods. We examined one such method, namely PRVCS, which unfortunately had poor performance (Supplementary Figure 11).

I thought this read like something prompted by an eager beaver reviewer, and thanks to Nature Communications’ open review policy, we can confirm my suspicions. So don’t say that open review is useless.

Comment R1.d. Line 85: It would be interesting to see if a combination of the examined scores would better distinguish between pathogenic and non-pathogenic non-coding regions. Although we suspect there to be high correlation between features this will test the hypothesis that each score may not be sufficient on its own to make any distinction between pathogenic and non-pathogenic ncSNVs. However, a combined model might provide more discriminating power than individual scores, suggesting that each score captures part of the underlying information with regards to a region’s pathogenicity propensity.

Literature

Liu, L., Sanderford, M. D., Patel, R., Chandrashekar, P., Gibson, G., & Kumar, S. (2019). Biological relevance of computationally predicted pathogenicity of noncoding variants. Nature Communications, 10(1), 330.

# Journal club of one: ‘The heritability fallacy’

Public debate about genetics often seems to centre on heritability and on psychiatric and mental traits, maybe because we really care about our minds, and because for a long time heritability was all that human geneticists studying quantitative traits could estimate. Here is an anti-heritability paper that I think articulates many of the common grievances: Moore & Shenk (2016) ‘The heritability fallacy’. The abstract gives a snappy summary of the argument:

The term ‘heritability,’ as it is used today in human behavioral genetics, is one of the most misleading in the history of science. Contrary to popular belief, the measurable heritability of a trait does not tell us how ‘genetically inheritable’ that trait is. Further, it does not inform us about what causes a trait, the relative influence of genes in the development of a trait, or the relative influence of the environment in the development of a trait. Because we already know that genetic factors have significant influence on the development of all human traits, measures of heritability are of little value, except in very rare cases. We, therefore, suggest that continued use of the term does enormous damage to the public understanding of how human beings develop their individual traits and identities.

At first glance, this paper should be a paper for me. I tend to agree that heritability estimates of human traits aren’t very useful. I also agree that geneticists need to care about the interpretations of their claims beyond the purely scientific domain. But the more I read, the less excited I became. The paper is a list of complaints about heritability coefficients, some more convincing than others. For example, I find it hard to worry too much about the ‘equal environments assumption’ in twin studies. But sure, it’s hard to identify variance components, and in practice, researchers sometimes resort to designs that are a lot iffier than twin studies.

But I think the main thrust of the paper is this huge overstatement:

Most important of all is a deep flaw in an assumption that many people make about biology: That genetic influences on trait development can be separated from their environmental context. However, contemporary biology has demonstrated beyond any doubt that traits are produced by interactions between genetic and nongenetic factors that occur in each moment of developmental time … That is to say, there are simply no such things as gene-only influences.

There certainly is such a thing as additive genetic variance as well as additive gene action. This passage only makes sense to me if ‘interaction’ is interpreted not as a statistical term but as describing a causal interplay. If so, it is perfectly true that all traits are the outcomes of interplay between genes and environment. It doesn’t follow that genetic variants in populations will interact with variable environments to the degree that quantitative genetic models are ‘nonsensical in most circumstances’.

They illustrate with this parable: Billy and Suzy are filling a bucket. Suzy is holding the hose and Billy turns on the tap. How much of the water is due to Billy and how much is due to Suzy? The answer is supposed to be that the question makes no sense, because they are both filling the bucket through a causal interplay. Well. If they’re filling a dozen buckets, and halfway through, Billy opens the tap half a turn more, and Suzy starts moving faster between buckets, because she’s tired of this and wants lunch … The correct level of analysis for the quantitative bucketist isn’t Billy, Suzy and the hose. It is the half-turn of the tap and Suzy’s moving of the nozzle.

The point is that quantitative genetic models describe variation between individuals. The authors know this, of course, but they write as if genetic analysis of variance is some kind of sleight of hand, as if quantitative genetics ought to be about development, and the fact that it isn’t is a deliberate obfuscation. Here is how they describe Jay Lush’s understanding of heritability:

The intention was ‘to quantify the level of predictability of passage of a biologically interesting phenotype from parent to offspring’. In this way, the new technical use of ‘heritability’ accurately reflected that period’s understanding of genetic determinism. Still, it was a curious appropriation of the term, because—even by the admission of its proponents—it was meant only to represent how variation in DNA relates to variation in traits across a population, not to be a measure of the actual influence of genes on the development of any given trait.

I have no idea what position Lush took on genetic determinism. But we can find the context of heritability by looking at the very page before in Animal breeding plans. The definition of the heritability coefficient occurs on page 87. This is how Lush starts the chapter on page 86:

In the strictest sense of the word, the question of whether a characteristic is hereditary or environmental has no meaning. Every characteristic is both hereditary and environmental, since it is the end result of a long chain of interactions of the genes with each other, with the environment and with the intermediate products at each stage of development. The genes cannot develop the characteristic unless they have the proper environment, and no amount of attention to the environment will cause the characteristic to develop unless the necessary genes are present. If either the genes or the environment are changed, the characteristic which results from their interactions may be changed.

I don’t know — maybe the way quantitative genetics has been used in human behavioural and psychiatric genetics invites genetic determinism. Or maybe genetic determinism is one of those false common-sense views that are really hard to unlearn. In any case, I don’t think it’s reasonable to put the blame on the concept of heritability for not being some general ‘measure of the biological inheritability of complex traits’ — something that it was never intended to be, and cannot possibly be.

My guess is that new debates will be about polygenic scores and genomic prediction. I hope that will be more useful.

Literature

David S. Moore & David Shenk (2016) The heritability fallacy

Jay Lush Animal breeding plans. Online at: https://archive.org/details/animalbreedingpl032391mbp/page/n99