Journal club of one: ‘Biological relevance of computationally predicted pathogenicity of noncoding variants’

Wouldn’t it be great if we had a way to tell genetic variants that do something to gene function and regulation from those that don’t? This is a Really Hard Problem, especially for variants that fall outside of protein-coding regions, and thus may or may not do something to gene regulation.

There is a host of bioinformatic methods to tackle the problem, and they use different combinations of evolutionary analysis (looking at how often the position of the variant differs between or within species), functional genomics (what histone modifications, chromatin accessibility and so on look like at the location of the variant), and statistics (comparing known functional variants to other variants).

When a new method is published, it’s always accompanied by a receiver operating characteristic (ROC) curve showing that it predicts held-out data well, plus some combination of comparisons to other methods and analyses of other datasets of known or presumed functional variants. However, one wonders how these methods will do when we use them to evaluate unknown variants in the lab, or eventually in the clinic.

This is what this paper, Liu et al. (2019) ‘Biological relevance of computationally predicted pathogenicity of noncoding variants’, sets out to do. The authors construct three test cases that are supposed to be more realistic (pessimistic) test beds for six noncoding variant effect predictors.

The tasks are:

  1. Find out which allele of a variant is the deleterious one. The presumed deleterious test alleles here are ones that don’t occur in any species of a large multiple genome alignment.
  2. Find a causative variant among a set of linked variants. The test alleles are causative variants from the Human Gene Mutation Database and some variants close to them.
  3. Enrich for causative variants among increasingly large sets of non-functional variants.

In summary, the methods don’t do too well. The authors describe their performance as ‘underwhelming’. That isn’t happy news, but I don’t think it’s such a surprise. Noncoding variant prediction is universally acknowledged to be tricky. In particular, looking at Task 3, the predictors are bound to look much less impressive in the face of class imbalance than in those receiver operating characteristic curves. Then again, class imbalance is going to be a fact of life when we go out to apply these methods to our long lists of candidate variants.
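The contrast between a benchmark ROC curve and real-world precision can be sketched with a toy simulation. The score distributions below are invented for illustration and have nothing to do with the actual predictors; the point is only that the AUC barely moves when negatives outnumber positives 100:1, while the precision of any fixed score threshold collapses.

```python
import random

random.seed(1)

def simulate(n_pos, n_neg, threshold=0.5):
    # Overlapping score distributions: a decent but imperfect predictor.
    pos = [random.uniform(0.3, 1.0) for _ in range(n_pos)]
    neg = [random.uniform(0.0, 0.7) for _ in range(n_neg)]
    # AUC = probability that a random positive outscores a random negative.
    auc = sum(p > q for p in pos for q in neg) / (n_pos * n_neg)
    # Precision when everything above the threshold is called 'functional'.
    tp = sum(p > threshold for p in pos)
    fp = sum(q > threshold for q in neg)
    precision = tp / (tp + fp)
    return auc, precision

auc_bal, prec_bal = simulate(100, 100)     # balanced, like a curated benchmark
auc_imb, prec_imb = simulate(100, 10000)   # 1:100, more like a candidate list
```

With these made-up distributions, the AUC comes out roughly the same in both settings, while the precision in the imbalanced setting is a small fraction of the balanced one: most calls above the threshold are false positives, simply because there are so many more negatives to get wrong.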

Task 1 isn’t that well suited to the tools, and the way it’s presented is a bit silly. After describing how they compiled their evolution-based test variant set, the authors write:

Our expectation was that a pathogenic allele would receive a significantly higher impact score (as defined for each of the six tested methods) than a non-pathogenic allele at the same position. Instead, we found that these methods were unsuccessful at this task. In fact, four of them (LINSIGHT, EIGEN, GWAVA, and CATO) reported identical scores for all alternative alleles at every position as they were not designed for allelic contrasts …

Sure, it’s hard to solve this problem with a program that only produces one score per site, but you knew that when you started writing this paragraph, didn’t you?
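The point can be made in a few lines with a mock score lookup (not any of the real tools’ interfaces): a method that stores one score per genomic position cannot, by construction, rank the alleles at that position.

```python
# A stand-in for a per-site annotation track, keyed on position only.
PER_SITE_SCORE = {("chr1", 100): 0.83}

def score(chrom, pos, alt_allele):
    # The alternative allele never enters the lookup,
    # so it cannot change the answer.
    return PER_SITE_SCORE[(chrom, pos)]

scores = {alt: score("chr1", 100, alt) for alt in "ACG"}
print(scores)  # every allele gets 0.83: no allelic contrast possible
```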

The whole paper is useful, but to me, the most interesting insight is that variants close to each other tend to have correlated features, meaning that there is little power to tell them apart (Task 2). This might be obvious if you think about it (e.g., if two variants fall in the same enhancer, how different can their chromatin state and histone modifications really be?), but I guess I haven’t thought that hard about it before. This high correlation is unfortunate, because that means that methods for finding causative variants (association and variant effect prediction) have poor spatial resolution. We might need something else to solve the fine mapping problem.
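A toy sketch of why this happens (invented numbers, not the paper’s data): most functional-genomics features are measured at window rather than base-pair resolution, so two variants in the same window inherit nearly the same feature vector, differing only by a little variant-specific noise.

```python
import random

random.seed(2)

def feature_vector(window_signal, noise_sd=0.05):
    # Each variant gets the window-level signal plus a little
    # variant-specific noise.
    return [s + random.gauss(0, noise_sd) for s in window_signal]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

window = [random.random() for _ in range(20)]  # shared chromatin context
causative = feature_vector(window)
linked_neighbour = feature_vector(window)
r = pearson(causative, linked_neighbour)       # close to 1
```

Any score that is a function of these features will give the causative variant and its linked neighbour nearly identical values, which is the resolution problem in miniature.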

Figure 4 from Liu et al., showing correlation between features of linked variants.

Finally, a shout-out to Reviewer 1, whose comment gave rise to these sentences:

An alternative approach is to develop a composite score that may improve upon individual methods. We examined one such method, namely PRVCS, which unfortunately had poor performance (Supplementary Figure 11).

I thought this read like something prompted by an eager beaver reviewer, and thanks to Nature Communications’ open review policy, we can confirm my suspicions. So don’t say that open review is useless.

Comment R1.d. Line 85: It would be interesting to see if a combination of the examined scores would better distinguish between pathogenic and non-pathogenic non-coding regions. Although we suspect there to be high correlation between features this will test the hypothesis that each score may not be sufficient on its own to make any distinction between pathogenic and non-pathogenic ncSNVs. However, a combined model might provide more discriminating power than individual scores, suggesting that each score captures part of the underlying information with regards to a region’s pathogenicity propensity.

Literature

Liu, L., Sanderford, M. D., Patel, R., Chandrashekar, P., Gibson, G., & Kumar, S. (2019). Biological relevance of computationally predicted pathogenicity of noncoding variants. Nature Communications, 10(1), 330.

Journal club of one: ‘The heritability fallacy’

Public debate about genetics often seems to centre on heritability and on psychiatric and mental traits, maybe because we really care about our minds, and because for a long time heritability was all human geneticists studying quantitative traits could estimate. Here is an anti-heritability paper that I think articulates many of the common grievances: Moore & Shenk (2016) The heritability fallacy. The abstract gives a snappy summary of the argument:

The term ‘heritability,’ as it is used today in human behavioral genetics, is one of the most misleading in the history of science. Contrary to popular belief, the measurable heritability of a trait does not tell us how ‘genetically inheritable’ that trait is. Further, it does not inform us about what causes a trait, the relative influence of genes in the development of a trait, or the relative influence of the environment in the development of a trait. Because we already know that genetic factors have significant influence on the development of all human traits, measures of heritability are of little value, except in very rare cases. We, therefore, suggest that continued use of the term does enormous damage to the public understanding of how human beings develop their individual traits and identities.

At first glance, this should be a paper for me. I tend to agree that heritability estimates of human traits aren’t very useful. I also agree that geneticists need to care about the interpretations of their claims beyond the purely scientific domain. But the more I read, the less excited I became. The paper is a list of complaints about heritability coefficients. Some are more or less convincing. For example, I find it hard to worry too much about the ‘equal environments assumption’ in twin studies. But sure, it’s hard to identify variance components, and in practice, researchers sometimes resort to designs that are a lot iffier than twin studies.

But I think the main thrust of the paper is this huge overstatement:

Most important of all is a deep flaw in an assumption that many people make about biology: That genetic influences on trait development can be separated from their environmental context. However, contemporary biology has demonstrated beyond any doubt that traits are produced by interactions between genetic and nongenetic factors that occur in each moment of developmental time … That is to say, there are simply no such things as gene-only influences.

There certainly is such a thing as additive genetic variance as well as additive gene action. This passage only makes sense to me if ‘interaction’ is interpreted not as a statistical term but as describing a causal interplay. If so, it is perfectly true that all traits are the outcomes of interplay between genes and environment. It doesn’t follow that genetic variants in populations will interact with variable environments to the degree that quantitative genetic models are ‘nonsensical in most circumstances’.
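That additive genetic variance is a perfectly well-defined quantity, even though every phenotype develops through gene-environment interplay, can be shown with a simulation (all parameters invented): a purely additive trait with environmental noise, where the classic midparent-offspring regression recovers the heritability.

```python
import random

random.seed(3)

N_LOCI = 50  # loci of small additive effect, allele frequency 0.5
VE = 25.0    # environmental variance; additive variance is also
             # 2 * N_LOCI * p * q = 25 here, so true h^2 = 0.5

def genotype():
    # Allele count (0, 1 or 2) at each locus.
    return [random.randint(0, 1) + random.randint(0, 1) for _ in range(N_LOCI)]

def phenotype(g):
    return sum(g) + random.gauss(0, VE ** 0.5)

def offspring(g1, g2):
    # Each parent transmits one allele per locus, loci unlinked.
    def gamete(count):
        return {0: 0, 2: 1}.get(count, random.randint(0, 1))
    return [gamete(a) + gamete(b) for a, b in zip(g1, g2)]

pairs = []
for _ in range(5000):
    p1, p2 = genotype(), genotype()
    midparent = (phenotype(p1) + phenotype(p2)) / 2
    pairs.append((midparent, phenotype(offspring(p1, p2))))

# The regression slope of offspring on midparent estimates narrow-sense h^2.
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
h2 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

Every simulated phenotype here is jointly caused by genes and environment, and still the additive variance, and with it the resemblance between relatives, is well defined and estimable.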

They illustrate with this parable: Billy and Suzy are filling a bucket. Suzy is holding the hose and Billy turns on the tap. How much of the water is due to Billy and how much is due to Suzy? The answer is supposed to be that the question makes no sense, because they are both filling the bucket through a causal interplay. Well. If they’re filling a dozen buckets, and halfway through, Billy opens the tap half a turn more, and Suzy starts moving faster between buckets, because she’s tired of this and wants lunch … then asking how much of the variation between buckets is due to Billy and how much is due to Suzy suddenly makes sense. The correct level of analysis for the quantitative bucketist isn’t Billy, Suzy and the hose. It is the half-turn of the tap and Suzy’s moving of the nozzle.

The point is that quantitative genetic models describe variation between individuals. The authors know this, of course, but they write as if genetic analysis of variance is some kind of sleight of hand, as if quantitative genetics ought to be about development, and the fact that it isn’t is a deliberate obfuscation. Here is how they describe Jay Lush’s understanding of heritability:

The intention was ‘to quantify the level of predictability of passage of a biologically interesting phenotype from parent to offspring’. In this way, the new technical use of ‘heritability’ accurately reflected that period’s understanding of genetic determinism. Still, it was a curious appropriation of the term, because—even by the admission of its proponents—it was meant only to represent how variation in DNA relates to variation in traits across a population, not to be a measure of the actual influence of genes on the development of any given trait.

I have no idea what position Lush took on genetic determinism. But we can find the context of heritability by looking at the very page before in Animal breeding plans. The definition of the heritability coefficient occurs on page 87. This is how Lush starts the chapter on page 86:

In the strictest sense of the word, the question of whether a characteristic is hereditary or environmental has no meaning. Every characteristic is both hereditary and environmental, since it is the end result of a long chain of interactions of the genes with each other, with the environment and with the intermediate products at each stage of development. The genes cannot develop the characteristic unless they have the proper environment, and no amount of attention to the environment will cause the characteristic to develop unless the necessary genes are present. If either the genes or the environment are changed, the characteristic which results from their interactions may be changed.

I don’t know — maybe the way quantitative genetics has been used in human behavioural and psychiatric genetics invites genetic determinism. Or maybe genetic determinism is one of those false common-sense views that are really hard to unlearn. In any case, I don’t think it’s reasonable to put the blame on the concept of heritability for not being some general ‘measure of the biological inheritability of complex traits’ — something that it was never intended to be, and cannot possibly be.

My guess is that new debates will be about polygenic scores and genomic prediction. I hope that will be more useful.

Literature

Moore, D. S., & Shenk, D. (2016). The heritability fallacy.

Jay Lush Animal breeding plans. Online at: https://archive.org/details/animalbreedingpl032391mbp/page/n99

Journal club of one: ‘Sacred text as cultural genome: an inheritance mechanism and method for studying cultural evolution’

This is a fun paper about something I don’t know much about: Hartberg & Sloan Wilson (2017) ‘Sacred text as cultural genome: an inheritance mechanism and method for studying cultural evolution’. It does exactly what it says on the package: it takes an image from genome science, that of genomic DNA and gene expression, and uses it as a metaphor for how pastors in Christian churches use the Bible. So, the Bible is the genome, churches are cells, and citing Bible passages in a sermon is gene expression, or at least something along those lines.

The authors use a quantitative analysis analogous to differential gene expression to compare the Bible passages cited in sermons from six Protestant churches in the US with different political leanings (three conservative and three progressive; coincidentally, N = 3 is kind of the stereotypical sample size of an early 2000s gene expression study). The main message is that the churches use the Bible differently, that the conservative churches use more of the text, and that even when they draw on the same book, they use different verses.
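The analogy with differential expression can be made concrete in a few lines. The citation counts below are invented for illustration, not the paper’s data: treat each verse like a gene and each church like a condition, and compute a log fold change with a pseudocount, just as one would for read counts.

```python
from math import log2

# Hypothetical citation counts for verses of John 3 in a conservative (C1)
# and a progressive (P1) church; not the paper's data.
counts = {
    "John 3:3":  {"C1": 4,  "P1": 9},
    "John 3:8":  {"C1": 2,  "P1": 7},
    "John 3:16": {"C1": 12, "P1": 11},
    "John 3:19": {"C1": 8,  "P1": 0},
}

def log_fold_change(verse, pseudo=1):
    # The pseudocount keeps verses one church never cites from
    # producing a division by zero, as in RNA-seq practice.
    c = counts[verse]
    return log2((c["C1"] + pseudo) / (c["P1"] + pseudo))

for verse in counts:
    print(f"{verse}: log2FC = {log_fold_change(verse):+.2f}")
```

In this invented example, a verse only one church cites gets a large positive fold change, while a verse both churches cite heavily, like 3:16 in the paper, sits near zero.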

They exemplify with Figure 3, which shows a ‘Heat map showing the frequency with which two churches, one highly conservative (C1) and one highly progressive (P1), cite specific verses within chapter 3 of the Gospel According to John in their Sunday sermons.’ I will not reproduce it for copyright reasons, but it pretty clearly shows how P1 often cites the first half of the chapter but doesn’t use the second half at all. C1, instead, uses verses from the whole chapter, but its three most used verses are all in the latter half, that is, the block that P1 doesn’t use at all. What are these verses? The paper doesn’t quote them, except 3:16 ‘For God so loved the world, that he gave his one and only Son, that whoever believes in him should not perish, but have eternal life’, which is the exception to the pattern — it’s the most common verse in both churches (and generally, a very famous passage).

Chapter 3 of the Gospel of John is the story of how Jesus teaches Nicodemus. Here is John 3:1-17:

1 Now there was a man of the Pharisees named Nicodemus, a ruler of the Jews. 2 The same came to him by night, and said to him, “Rabbi, we know that you are a teacher come from God, for no one can do these signs that you do, unless God is with him.”
3 Jesus answered him, “Most certainly, I tell you, unless one is born anew, he can’t see God’s Kingdom.”
4 Nicodemus said to him, “How can a man be born when he is old? Can he enter a second time into his mother’s womb, and be born?”
5 Jesus answered, “Most certainly I tell you, unless one is born of water and spirit, he can’t enter into God’s Kingdom. 6 That which is born of the flesh is flesh. That which is born of the Spirit is spirit. 7 Don’t marvel that I said to you, ‘You must be born anew.’ 8 The wind blows where it wants to, and you hear its sound, but don’t know where it comes from and where it is going. So is everyone who is born of the Spirit.”
9 Nicodemus answered him, “How can these things be?”
10 Jesus answered him, “Are you the teacher of Israel, and don’t understand these things? 11 Most certainly I tell you, we speak that which we know, and testify of that which we have seen, and you don’t receive our witness. 12 If I told you earthly things and you don’t believe, how will you believe if I tell you heavenly things? 13 No one has ascended into heaven but he who descended out of heaven, the Son of Man, who is in heaven. 14 As Moses lifted up the serpent in the wilderness, even so must the Son of Man be lifted up, 15 that whoever believes in him should not perish, but have eternal life. 16 For God so loved the world, that he gave his one and only Son, that whoever believes in him should not perish, but have eternal life. 17 For God didn’t send his Son into the world to judge the world, but that the world should be saved through him.”

This is the passage that P1 uses a lot, but they break before they get to the verses that come right after: John 3:18-21. The conservative church uses them the most out of this chapter.

18 Whoever believes in him is not condemned, but whoever does not believe stands condemned already because they have not believed in the name of God’s one and only Son. 19 This is the verdict: Light has come into the world, but people loved darkness instead of light because their deeds were evil. 20 Everyone who does evil hates the light, and will not come into the light for fear that their deeds will be exposed. 21 But whoever lives by the truth comes into the light, so that it may be seen plainly that what they have done has been done in the sight of God.

So this is consistent with the idea of the paper: In the progressive church, the pastor emphasises the story about doubt and the possibility of salvation, where Nicodemus comes to ask Jesus for explanations, and Jesus talks about being born again. It also has some beautiful perplexing Jesus-style imagery with the spirit being like the wind. In the conservative church, the part about condemnation and evildoers hating the light gets more traction.

As for the main analogy between the Bible and a genome, I’m not sure that it works. The metaphors are mixed, and it’s not obvious what the unit of inheritance is. For example, when the paper talks about ‘fitness-enhancing information’, does that refer to the fitness of the church, the members of the church, or the Bible itself? The paper sometimes talks as if the Bible were passed on from generation to generation, for instance here in the introduction:

Any mechanism of inheritance must transmit information across generations with high fidelity and translate this information into phenotypic expression during each generation. In this article we argue that sacred texts have these properties and therefore qualify as important inheritance mechanisms in cultural evolution.

But the sacred text isn’t passed on from generation to generation. The Bible is literally a book that is transmitted by printing. What may be passed on is the way pastors interpret it and, in the authors’ words, ‘cherry pick’ verses to cite. But clearly, that is not stored in the Bible ‘genome’ but somehow in the culture of churches and the institutions of learning that pastors attend.

If we want to stick to the idea of the Bible as a genome, I think this story makes just as much sense: Don’t think about how this plasticity of interpretation may be adaptive for humans. Instead, take a sacred-text-centric perspective, analogous to the gene-centric perspective. Think of the plasticity of interpretation as preserving the fitness of the Bible by making it fit community values. Because the Bible can serve as source material for churches with otherwise different values, it survives as one of the most important and widely read books in the world.

Literature

Hartberg, Yasha M., and David Sloan Wilson. “Sacred text as cultural genome: an inheritance mechanism and method for studying cultural evolution.” Religion, Brain & Behavior 7.3 (2017): 178-190.

The Bible quotes are from the World English Bible translation.

Journal club of one: ‘Give one species the task to come up with a theory that spans them all: what good can come out of that?’

This paper by Hanna Kokko on human biases in evolutionary biology and behavioural biology is wonderful. The style is great, and it’s full of ideas. The paper asks, pretty much, the question in the title. How much do particularities of human nature limit our thinking when we try to understand other species?

Here are some of the points Kokko comes up with:

The use of introspection and perspective-taking in the invention of hypotheses. The paper starts out with a quote from Robert Trivers advocating introspection in hypothesis generation. This is interesting, because I’m sure researchers do this all the time, but celebrating it in public is another thing. To understand evolutionary hypotheses, one often has to take the perspective of an animal, or of some other entity like an allele of an enhancer or a transposable element, and imagine what its interests are, or how its situation resembles a social situation such as competition or a conflict of interest.

If this sounds fuzzy or unscientific, we try to justify it by saying that such language is a shorthand, and what we really mean is some impersonal, mechanistic account of variation and natural selection. This is true to some extent; population genetics and behavioural ecology make heavy use of mathematical models that are free of such fuzzy terms. However, the intuitive and allegorical parts of the theory really do play an important role, both in the invention and in the understanding of the research.

While scientists avoid using such anthropomorphizing language (to an extent; see [18,19] for critical views), it would be dishonest to deny that such thoughts are essential for the ease with which we grasp the many dilemmas that individuals of other species face. If the rules of the game change from A to B, the expected behaviours or life-history traits change too, and unless a mathematical model forces us to reconsider, we accept the implicit ‘what would I do if…’ as a powerful hypothesis generation tool. Finding out whether the hypothesized causation is strong enough to leave a trace in the phylogenetic pattern then necessitates much more work. Being forced to examine whether our initial predictions hold water when looking at the circumstances of many species is definitely part of what makes evolutionary and behavioural ecology so exciting.

Bias against hermaphrodites and inbreeding. There is a downside, of course. Two of the examples Kokko gives of human biases possibly hampering evolutionary thought are hermaphroditism and inbreeding — two things that may seem quite strange and surprising from a mammalian perspective, but are the norm in a substantial number of taxa.

Null models and default assumptions. One passage clashes with how I like to think. Kokko brings up null models, or default assumptions, and identifies a correct null assumption with being ‘simpler, i.e. more parsimonious’. I tend to think that null models may occasionally be useful for statistical inference, but are a bit suspect in scientific reasoning: both because there’s an asymmetry in defaulting to one model and putting the burden of proof on any alternative, and because parsimony is quite often in the eye of the beholder, or in the structure of the theories you’ve already accepted. But I may be wrong, at least in this case. If you want to formulate an evolutionary hypothesis about a particular behaviour (in this case, female multiple mating), it really does seem to matter for what needs explaining whether the behaviour could be explained by a simple model (bumping into mates randomly and not discriminating between them).

However, I think that in this case, what needs explaining is not actually a question about scope and explanatory power, but about phylogeny: there is an ancestral state, and what needs explaining is how the trait evolved from there.

Group-level and individual-level selection. The most fun part, I think, is the speculation that our human biases may make us particularly prone to think of group-level benefits. I’ll just leave this quote here:

Although I cannot possibly prove the following claim, I consider it an interesting conjecture to think about how living in human societies makes us unusually strongly aware of the group-level consequences of our actions. Whether innate, or frequently enough drilled during upbringing to become part of our psyche, the outcome is clear. By the time a biology student enters university, there is a belief in place that evolution in general produces traits because they benefit entire species. /…/ What follows, then, is that teachers need to point out the flaws in one set of ideas (e.g. ‘individuals die to avoid overpopulation’) much more strongly than the other. After the necessary training, students then graduate with the lesson not only learnt but also generalized, at which point it takes the form ‘as soon as someone evokes group-level thinking, we’ve entered “bad logic territory”’.

Literature

Kokko, Hanna. (2017) “Give one species the task to come up with a theory that spans them all: what good can come out of that?” Proc. R. Soc. B, Vol. 284, No. 1867.