The word ”genome”

The sources I’ve seen attribute the coinage of ”genome” to botanist Hans Winkler (1920, p. 166).

The pertinent passage goes:

Ich schlage vor, für den haploiden Chromosomensatz, der im Verein mit dem zugehörigen Protoplasma die materielle Grundlage der systematischen Einheit darstellt den Ausdruck: das Genom zu verwenden … I suggest using the expression ”the genome” for the haploid set of chromosomes which, together with the protoplasm it belongs with, makes up the material basis of the systematic unit …

That’s good, but why did Winkler need this term in the first place? In this chapter, he is dealing with the relationship between chromosome number and mode of reproduction. Of course, he’s going to talk about hybridization and ploidy, and he needs some terms to bring order to the mess. He goes on to coin a couple of other concepts that I had never heard of:

… und Kerne, Zellen und Organismen, in denen ein gleichartiges Genom mehr als einmal in jedem Kern vorhanden ist, homogenomatisch zu nennen, solche dagegen, die verschiedenartige Genome im Kern führen, heterogenomatisch. … and to call nuclei, cells and organisms in which a genome of the same kind is present more than once in each nucleus homogenomatic, while those that carry genomes of different kinds in their nuclei are heterogenomatic.

So, a homogenomic organism has more than one copy of the same genome in its nuclei, while a heterogenomic organism has genomes of different kinds. He also suggests that one could count the genomes: di-, tri- and so on up to polygenomic organisms. He says that this is different from polyploidy, which is when an organism has multiples of a haploid chromosome set. Winkler’s example: a hybrid between a diploid species with 10 chromosomes and another diploid species with 16 chromosomes might have 13 chromosomes (a haploid set of 5 plus a haploid set of 8) and be polygenomic but not polyploid.

These terms don’t seem to have stuck as much, but I found them used here and there, for example in papers on bananas (Arvanitoyannis et al. 2008) and cotton (Brown & Menzel 1952); cooking bananas are heterogenomic.

This only really makes sense in cases of recent hybridisation, where you can trace different chromosomes to origins in different species. You need to be able to trace parts of the hybrid genome of the banana to the genomes of other species. Otherwise, the genome of the banana is just the genome of the banana.

Analogously, we also find polygenomes in this cancer paper (Navin et al. 2010):

We applied our methods to 20 primary ductal breast carcinomas, which enable us to classify them according to whether they appear as either monogenomic (nine tumors) or polygenomic (11 tumors). We define ”monogenomic” tumors to be those consisting of an apparently homogeneous population of tumor cells with highly similar genome profiles throughout the tumor mass. We define ”polygenomic” tumors as those containing multiple tumor subpopulations that can be distinguished and grouped by similar genome structure.

This makes sense; if a tumour contains clones of cells with sufficiently rearranged genomes, maybe it is fair to describe it as a tumour with different genomes. It raises the question of what counts as ”sufficiently” different for something to be a different genome.

How much difference can there be between sequences that are supposed to count as the same genome? In everything above, we have taken a kind of typological view: there is a genome of an individual, or a clone of cells, that can be thought of as one entity, despite the fact that every copy of it, in every different cell, is likely to have subtle differences. Philosopher John Dupré (2010), in ”The Polygenomic Organism”, questions what we mean by ”the genome” of an organism. How can we talk about an organism having one genome or another, when in fact every cell in the body goes through mutation (actually, Dupré spends surprisingly little time on somatic mutation and more on epigenetics, but makes a similar point), sometimes chimerism, and sometimes programmed genome rearrangements?

The genome is related to types of organism by attempts to find within it the essence of a species or other biological kind. This is a natural, if perhaps naïve, interpretation of the idea of the species ‘barcode’, the use of particular bits of DNA sequence to define or identify species membership. But in this paper I am interested rather in the relation sometimes thought to hold between genomes of a certain type and an individual organism. This need not be an explicitly essentialist thesis, merely the simple factual belief that the cells that make up an organism all, as a matter of fact, have in common the inclusion of a genome, and the genomes in these cells are, barring the odd collision with a cosmic ray or other unusual accident, identical.

Dupré’s answer is that there probably isn’t a universally correct way to divide living things into individuals, and what concept of individuality one should use really depends on what one wants to do with it. I take this to mean that it is perfectly fine to gloss over real biological details, but that we need to keep in mind that they might unexpectedly start to matter. For example, when tracing X chromosomes through pedigrees, it might be fine to ignore that X-inactivation makes female mammals functionally mosaic – until you start looking at the expression of X-linked traits.

Photo of calico cat in Amsterdam by SpanishSnake (CC0 1.0). See, I found a reason to put in a cat picture!

Finally, the genome exists not just in the organism, but also in the computer, as sequences, maps and obscure bioinformatics file formats. Arguably, keeping the discussion above in mind, the genome only exists in the computer, as a scientific model of a much messier biology. Szymanski, Vermeulen & Wong (2019) investigate what the genome is by looking at how researchers talk about it. ”The genome” turns out to be many things to researchers. Here they are writing about what happened when the yeast genetics community created a reference genome.

If the digital genome is not assumed to solely a representation of a physical genome, we might instead see ”the genome” as a discursive entity moving from the cell to the database but without ever removing ”the genome” from the cell, aggregating rather than excluding. This move and its inherent multiplying has consequences for the shape of the community that continues to participate in constructing the genome as a digital text. It also has consequences for the work the genome can perform. As Chadarevian (2004) observes for the C. elegans genome sequence, moving the genome from cell to database enables it to become a new kind of mapping tool …

/…/

Consequently, the informational genome can be used to manufacture coherence across knowledge generated by disparate labs by making it possible to line up textual results – often quite literally, in the case of genome sequences as alphabetic texts — and read across them.

/…/

Prior to the availability of the reference genome, such coherence across the yeast community was generated by strain sharing practices and standard protocols and notation for documenting variation from the reference strain, S288C, authoritatively embodied in living cells housed at Mortimer’s stock center. After the sequencing project, part of that work was transferred to the informational, textual yeast genome, making the practice of lining up and making the same available to those who worked with the digital text as well as those who worked with the physical cell.

And that brings us back to Winkler: what do all the things we call ”the genome” have in common? That they make up the basis for the systematic unit, that they belong to organisms that we recognise as closely related enough to form a systematic unit.

Literature

Winkler H. (1920) Verbreitung und Ursache der Parthenogenesis im Pflanzen- und Tierreiche.

Arvanitoyannis, Ioannis S., et al. ”Banana: cultivars, biotechnological approaches and genetic transformation.” International journal of food science & technology 43.10 (2008): 1871-1879.

Navin, Nicholas, et al. ”Inferring tumor progression from genomic heterogeneity.” Genome research 20.1 (2010): 68-80.

Brown, Meta S., and Margaret Y. Menzel. ”Polygenomic hybrids in Gossypium. I. Cytology of hexaploids, pentaploids and hexaploid combinations.” Genetics 37.3 (1952): 242.

Dupré, John. ”The polygenomic organism.” The Sociological Review 58.1_suppl (2010): 19-31.

Szymanski, Erika, Niki Vermeulen, and Mark Wong. ”Yeast: one cell, one reference sequence, many genomes?.” New Genetics and Society 38.4 (2019): 430-450.

A model of polygenic adaptation in an infinite population

How do allele frequencies change in response to selection? Answers to that question include ”it depends”, ”we don’t know”, ”sometimes a lot, sometimes a little”, and ”according to a nonlinear differential equation that actually doesn’t look too horrendous if you squint a little”. Let’s look at a model of the polygenic adaptation of an infinitely large population under stabilising selection after a shift in optimum. This model has been developed by different researchers over the years (reviewed in Jain & Stephan 2017).

Here is the big equation for allele frequency change at one locus:

\dot{p}_i = -s \gamma_i p_i q_i (c_1 - z') - \frac{s \gamma_i^2}{2} p_i q_i (q_i - p_i) + \mu (q_i - p_i )

That wasn’t so bad, was it? These are the symbols:

  • the subscript i indexes the loci,
  • \dot{p} is the change in allele frequency per time,
  • \gamma_i is the effect of the locus on the trait (twice the effect of the positive allele to be precise),
  • p_i is the frequency of the positive allele,
  • q_i the frequency of the negative allele,
  • s is the strength of selection,
  • c_1 is the phenotypic mean of the population; it just depends on the effects and allele frequencies,
  • \mu is the mutation rate.

This breaks down into three terms that we will look at in order.

The directional selection term

-s \gamma_i p_i q_i (c_1 - z')

is the term that describes change due to directional selection.

Apart from the allele frequencies, it depends on the strength of directional selection s, the effect of the locus on the trait \gamma_i and how far away the population is from the new optimum (c_1 - z'). Stronger selection, larger effect or greater distance to the optimum means more allele frequency change.

The leading minus sign is there because the term describes the change in the allele with a positive effect on the trait: if the mean phenotype is above the optimum, we would expect the allele frequency to decrease, and indeed, when

(c_1 - z') > 0

this term becomes negative.

If you neglect the other two terms and keep this one, you get Jain & Stephan's "directional selection model", which describes behaviour of allele frequencies in the early phase before the population has gotten close to the new optimum. This approximation does much of the heavy lifting in their analysis.

The stabilising selection term

-\frac{s \gamma_i^2}{2} p_i q_i (q_i - p_i)

is the term that describes change due to stabilising selection. Apart from allele frequencies, it depends on the square of the effect of the locus on the trait. That means that, regardless of the sign of the effect, it penalises large changes. This appears to make sense, because stabilising selection strives to preserve traits at the optimum. The cubic influence of allele frequency is, frankly, not intuitive to me.
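To get a feel for that cubic factor, we can plot p q (q - p) = p (1 - p) (1 - 2p) against the allele frequency. This little plot is my own sketch, not something from the paper:

## Shape of the allele frequency factor in the stabilising selection term:
## p * q * (q - p), with q = 1 - p
curve(x * (1 - x) * (1 - 2 * x),
      from = 0, to = 1,
      xlab = "allele frequency p",
      ylab = "p q (q - p)")
abline(h = 0, lty = 2)

The factor is zero at 0, 1/2 and 1, positive below 1/2 and negative above, so with the minus sign in front, this term nudges allele frequencies away from 1/2 and towards loss or fixation.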

The mutation term

Finally,

\mu (q_i - p_i )

is the term that describes change due to new mutations. It depends on the allele frequencies, i.e. how much of each allele there is around to mutate into the other, and on the mutation rate. To me, this is the one term one could sit down and write down without much head-scratching.
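Spelled out, with the same mutation rate \mu in both directions (as the model assumes), the inflow to the positive allele is \mu q_i and the outflow is \mu p_i:

\mu q_i - \mu p_i = \mu (q_i - p_i)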

Walking in allele frequency space

Jain & Stephan (2017) show a couple of examples of allele frequency change after the optimum shift. Let us try to draw similar figures. (Jain & Stephan don’t give the exact parameters for their figures; they just show one case with effects below their threshold value and one with effects above.)

First, here is the above equation in R code:

pheno_mean <- function(p, gamma) {
  ## phenotypic mean c_1, given allele frequencies p and effects gamma
  sum(gamma * (2 * p - 1))
}

allele_frequency_change <- function(s, gamma, p, z_prime, mu) {
  ## directional selection term
  -s * gamma * p * (1 - p) * (pheno_mean(p, gamma) - z_prime) +
    ## stabilising selection term
    - s * gamma^2 * 0.5 * p * (1 - p) * (1 - p - p) +
    ## mutation term
    mu * (1 - p - p)
}

With this (and some extra packaging; code on Github), we can now plot allele frequency trajectories such as this one, which starts at some arbitrary point and approaches an optimum:

Animation of alleles at two loci approaching an equilibrium. Here, we have two loci with starting frequencies 0.2 and 0.1 and effect sizes 1 and 0.01, and the optimum is at 0. The mutation rate is 10^-4 and the strength of selection is 1. Animation made with gganimate.
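The packaging and plotting code is on Github; as a minimal sketch of my own (not from Jain & Stephan), one can generate such a trajectory by iterating the change equation, treating each iteration as one unit of time:

## Minimal sketch: iterate the allele frequency change equation (Euler steps)
simulate_frequencies <- function(p_start, gamma, s, mu, z_prime, n_steps) {
  p <- matrix(NA_real_, nrow = n_steps, ncol = length(p_start))
  p[1, ] <- p_start
  for (step in 2:n_steps) {
    p[step, ] <- p[step - 1, ] +
      allele_frequency_change(s = s,
                              gamma = gamma,
                              p = p[step - 1, ],
                              z_prime = z_prime,
                              mu = mu)
  }
  p
}

## Two loci with starting frequencies 0.2 and 0.1, effects 1 and 0.01,
## and the optimum at 0, as in the animation above
## (the number of time steps is arbitrary for the sketch)
trajectory <- simulate_frequencies(p_start = c(0.2, 0.1),
                                   gamma = c(1, 0.01),
                                   s = 1,
                                   mu = 1e-4,
                                   z_prime = 0,
                                   n_steps = 1000)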

Resting in allele frequency space

The model describes a shift from one optimum to another, so we want to start at equilibrium. Therefore, we need to know what the allele frequencies are at equilibrium, so we solve for zero allele frequency change in the above equation. The first term will be zero, because

(c_1 - z') = 0

when the mean phenotype is at the optimum. So we can throw away that term and factor the rest of the equation into:

(1 - 2p) (-\frac{s \gamma ^2}{2} p(1-p) + \mu) = 0
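Spelling out the factoring: both remaining terms contain the factor q - p = 1 - 2p, so

-\frac{s \gamma^2}{2} p q (q - p) + \mu (q - p) = (q - p) (\mu - \frac{s \gamma^2}{2} p q) = (1 - 2p) (\mu - \frac{s \gamma^2}{2} p (1 - p))

which is zero whenever either factor is zero.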

Therefore, one root is p = 1/2. Depending on your constitution, this may or may not be intuitive to you. Imagine that you have all the loci, each with a positive and negative allele with the same effect, balanced so that half the population has one and the other half has the other. Then, there is this quadratic equation that gives two other equilibria:

\mu - \frac{s\gamma^2}{2}p(1-p) = 0
\implies p = \frac{1}{2} (1 \pm \sqrt{1 - 8 \frac{\mu}{s \gamma ^2}})

These points correspond to mutation–selection balance with one or the other allele closer to being lost. Jain & Stephan (2017) show a figure of the three equilibria that looks like a semicircle (from the quadratic equation, presumably) attached to a horizontal line at 0.5 (their Figure 1). Given this information, we can start our loci out at equilibrium frequencies. Before we set them off, we need to attend to the effect size.

How big is a big effect? Hur långt är ett snöre? (How long is a piece of string?)

In this model, there are big and small effects with qualitatively different behaviours. The cutoff is at:

\hat{\gamma} = \sqrt{ \frac{8 \mu}{s}}

If we look again at the roots to the quadratic equation above, they can only exist as real roots if

\frac {8 \mu}{s \gamma^2} < 1

because otherwise the expression inside the square root will be negative. This inequality can be rearranged into:

\gamma^2 > \frac{8 \mu}{s}

This means that if the effect of a locus is smaller than the threshold value, there is only one equilibrium point, and that is at 0.5. It also affects the way the allele frequency changes. Let us look at two two-locus cases, one where the effects are below this threshold and one where they are above it.

threshold <- function(mu, s) sqrt(8 * mu / s)

threshold(1e-4, 1)
[1] 0.02828427

With a mutation rate of 10^-4 and strength of selection of 1, the cutoff is about 0.028. Let our ”big” loci have effect sizes of 0.05 and our small loci effect sizes of 0.01, then. Now, we are ready to shift the optimum.

The small loci will start at an equilibrium frequency of 0.5. We start the large loci at two different equilibrium points, where one positive allele is frequent and the other positive allele is rare:

get_equilibrium_frequencies <- function(mu, s, gamma) {
  c(0.5,
    0.5 * (1 + sqrt(1 - 8 * mu / (s * gamma^2))),
    0.5 * (1 - sqrt(1 - 8 * mu / (s * gamma^2))))
}

(eq0.05 <- get_equilibrium_frequencies(1e-4, 1, 0.05))
[1] 0.50000000 0.91231056 0.08768944
get_equilibrium_frequencies(1e-4, 1, 0.01)
[1] 0.5 NaN NaN
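As a quick sanity check of my own, we can plug the large-effect equilibrium frequencies back into the change equation while the optimum is still at its old value of 0; the change comes out as (numerically) zero for both loci:

## Allele frequency change at equilibrium, before the optimum shift
allele_frequency_change(s = 1,
                        gamma = c(0.05, 0.05),
                        p = c(0.91231056, 0.08768944),
                        z_prime = 0,
                        mu = 1e-4)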

Look at them go!

These animations show the same qualitative behaviour as Jain & Stephan illustrate in their Figure 2. With small effects, there is gradual allele frequency change at both loci:

However, with large effects, one of the loci (the one on the vertical axis) changes dramatically in allele frequency, that is, it experiences a selective sweep, while the other one barely changes at all. And the model will show similar behaviour when the trait is properly polygenic, with many loci, as long as effects are large compared to the (scaled) mutation rate.

Here, I ran 10,000 time steps; if we look at the phenotypic means, we can see that they still haven’t arrived at the optimum at the end of that time. The mean with large effects is at 0.089 (new optimum of 0.1), and the mean with small effects is 0.0063 (new optimum: 0.02).

Let’s end here for today. Maybe another time, we can return to how this model applies to actually polygenic architectures, that is, with more than two loci. The code for all the figures is on Github.

Literature

Jain, K., & Stephan, W. (2017). Modes of rapid polygenic adaptation. Molecular biology and evolution, 34(12), 3169-3175.

The genomic scribe in hyperspace

When I was in school (it must have been in gymnasiet, roughly corresponding to secondary school or high school), I remember giving a presentation on a group project about the human genome project, and using the illiterate copyist analogy. After sequencing the human genome, we are able to blindly copy the text of life; we still need to learn to read it. At this point, I had no clue whatsoever that I would be working in genetics in the future. I certainly felt very clever coming up with that image. I must have read it somewhere.

If it is true that the illiterate scribe is a myth, and they must have had at least some ability to read, that makes the analogy more apt: even in 2003, researchers actually had a fairly good idea of how to read certain aspects of genetics. The genetic code is from 1961, for crying out loud (Yanofsky 2007)!

My classroom moment must have been around 2003, which is the year the ENCODE project started, aiming to do just that: create an encyclopedia (or really, a critical apparatus) of the human genome. It’s still going: a drove of papers from its third phase came out last year, and apparently it’s now in the fourth phase. ENCODE can’t be a project in the usual sense of a planned undertaking with a defined goal, but rather a research programme in the general direction of ”a comprehensive parts list of functional elements in the human genome” (ENCODE FAQ). Along with the phase 3 empirical papers, they published a fun perspective article (The ENCODE Project Consortium et al. 2020).

ENCODE commenced as an ambitious effort to comprehensively annotate the elements in the human genome, such as genes, control elements, and transcript isoforms, and was later expanded to annotate the genomes of several model organisms. Mapping assays identified biochemical activities and thus candidate regulatory elements.

Its age means that ENCODE has lived through generations of genomic technologies. Phase 1 was doing functional genomics with microarrays, which now sounds about as quaint as doing it with blots. Nowadays, they have CRISPR-based editing assays and sequencing methods for chromosome 3D structure that just seem to keep adding Cs to their acronyms.

Last time I blogged about the ENCODE project was in 2013 (in Swedish), in connection with the opprobrium about junk DNA. If you care about junk DNA, check out Sean Eddy’s FAQ (Eddy 2012). If you still want to be angry about what percentage of the genome has function, what gene concepts are useful and the relationship between quantitative genetics and genomics, check out this Nature Video. It’s funny, because the video pre-empts some of the conclusions of the perspective article.

The video says: to do many of the potentially useful things we want to do with genomes (like sock cancer in the face, presumably), we need to look at individual differences (”between you, and you, and you”) and how they relate to traits. And an encyclopedia, great as it may be, is not going to capture that.

The perspective says:

It is now apparent that elements that govern transcription, chromatin organization, splicing, and other key aspects of genome control and function are densely encoded in the human genome; however, despite the discovery of many new elements, the annotation of elements that are highly selective for particular cell types or states is lagging behind. For example, very few examples of condition-specific activation or repression of transcriptional control elements are currently annotated in ENCODE. Similarly, information from human fetal tissue, reproductive organs and primary cell types is limited. In addition, although many open chromatin regions have been mapped, the transcription factors that bind to these sequences are largely unknown, and little attention has been devoted to the analysis of repetitive sequences. Finally, although transcript heterogeneity and isoforms have been described in many cell types, full-length transcripts that represent the isoform structure of spliced exons and edits have been described for only a small number of cell types.

That is, the future of genomics is in variation. We want to know about organismic/developmental background (cell lines vs primary vs induced vs tissue), environmental variation (condition-dependence), genetic variation (gene editing assays that change local genetic variants, the genetic backgrounds of different cell lines and human genomes), and dynamics (time and induction). To put it in plain terms: we need to know how genome regulation differs between cells and individuals, and what that does to them. To put it in fancy terms: we are moving towards cellular phenomics, quantitative genomics, and an ever-expanding hypercube of data.

Literature

Eddy, S. R. (2012). The C-value paradox, junk DNA and ENCODE. Current biology, 22(21), R898-R899.

ENCODE Project Consortium, Snyder, M. P., Gingeras, T. R., Moore, J. E., Weng, Z., Gerstein, M. B., Ren, B., … & Myers, R. M. (2020). Perspectives on ENCODE. Nature, 583(7818), 693-698.

Yanofsky, C. (2007). Establishing the triplet nature of the genetic code. Cell, 128(5), 815-818.

My talk at the ChickenStress Genomics and Bioinformatics Workshop

A few months ago I gave a talk at the ChickenStress Genomics and Bioinformatics Workshop about genetic mapping of traits and gene expression.

ChickenStress is a European training network of researchers who study stress in chickens, as you might expect. It brings together people who work with (according to the work package names) environmental factors, early life experiences and genetics. The network is centered on a group of projects by early stage researchers — by the way, I think that’s a really good way to describe the work of a PhD student — and organises activities like this workshop.

I was asked to talk about our work from my PhD on gene expression and behaviour in the chicken (Johnsson & al. 2018, Johnsson & al. 2016), concentrating on concepts and methods rather than results. If I have any recurring readers, they will already know that brief is exactly what I like to do. I talked about the basis of genetic mapping of traits and gene expression, what data one needs to do it, and gave a quick demo for a flavour of an analysis workflow (linear mixed model genome-wide association in GEMMA).

Here are slides, and the git repository of the demo:

Journal club of one: ”Chromosome-level and haplotype-resolved genome assembly enabled by high-throughput single-cell sequencing of gamete genomes”

Genome assembly researchers are still figuring out new wild ways of combining different kinds of data. For example, ”trio binning” took what used to be a problem — the genetic difference between the two genome copies that a diploid individual carries — and turned it into a feature: if you assemble a hybrid individual with genetically distant parents, you can separate the two copies and get two genomes in one. (I said that admixture was the future of every part of genetics, didn’t I?) This paper (Campoy et al. 2020) describes ”gamete binning”, which uses sequencing of gametes to perform a similar trick.

Expressed another way, gamete binning means building an individual-specific genetic map and then using it to order and phase the pieces of the assembly. This takes two sequence datasets from the same individual — one single-cell short read dataset from gametes (10X linked reads) and one long read dataset from the parent (PacBio) — and creatively re-uses them in different ways.

This is what they do:

1. Assemble the long reads into a preliminary assembly, which will be a mosaic of the two genome copies (barring gross differences that assemble into separate ”haplotigs”, which can to some extent be removed).

2. Align the single cell short reads to the preliminary assembly and call SNPs. (They also did some tricks to deal with regions without SNPs, separating those that were not variable between genomes and those that were deleted in one genome.) Because the gametes are haploid, they get the phase of the parent’s genotype.

3. Align the long reads again. Now, based on the phased genotype, the long reads can be assigned to the genome copy they belong to. So they can partition the reads into one bin per genome copy and chromosome.

4. Assemble those bins separately. They now get one assembly for each homologous chromosome.

They apply it to an apricot tree, which has a 250 Mbp genome. When they sequence the parents of the tree, it seems to separate the genomes well. The two genome copies have quite a bit of structural variation:

Despite high levels of synteny, the two assemblies revealed large-scale rearrangements (23 inversions, 1,132 translocation/transpositions and 2,477 distal duplications) between the haplotypes making up more than 15% of the assembled sequence (38.3 and 46.2 Mb in each of assemblies; Supplementary Table 1). /…/ Mirroring the huge differences in the sequences, we found the vast amount of 942 and 865 expressed, haplotype-specific genes in each of the haplotypes (Methods; Supplementary Tables 2-3).

They can then go back to the single cell data and look at the recombination landscape and at chromosomal arrangements during meiosis.

This is pretty elegant. I wonder how dependent it is on the level of variation within the individual, and how it compares in cost and finickiness to other assembly strategies.

Literature

Campoy, José A., et al. ”Chromosome-level and haplotype-resolved genome assembly enabled by high-throughput single-cell sequencing of gamete genomes.” BioRxiv (2020).

What is a locus, anyway?

”Locus” is one of those confusing genetics terms (its meaning, not just its pronunciation). We can probably all agree with a dictionary and with Wikipedia that it means a place in the genome, but a place of what and in what sense? We also use place-related words like ”site” and ”region” that one might think were synonymous, but they don’t seem to be.

For an example, we can look at this relatively recent preprint (Chebib & Guillaume 2020) about a model of the causes of genetic correlation. They have pairs of linked loci that each affect one trait (that’s the tight linkage condition), and also a set of loci that affect both traits (the pleiotropic condition), correlated Gaussian stabilising selection, and different levels of mutation, migration and recombination between the linked pairs. A mutation means adding a number to the effect of an allele.

This means that loci in this model can have a large number of alleles with quantitatively different effects. The alleles at a locus share a distribution of mutation effects, which can be either two-dimensional (with pleiotropy) or one-dimensional. They also share a recombination rate with all other loci, which is constant.

What kind of DNA sequences can have these properties? Single nucleotide sites are out of the question, as they can have four, or maybe five alleles if you count a deletion. Larger structural variants, such as inversions or allelic series of indels might work. A protein-coding gene taken as a unit could have a huge number of different alleles, but they would probably have different distributions of mutational effects in different sites, and (relatively small) differences in genetic distance to different sites.

It seems to me that we’re talking about an abstract group of potential alleles that have sufficiently similar effects and that are sufficiently closely linked. This is fine; I’m not saying this to criticise the model, but to explore how strange a locus really is.

They find that there is less genetic correlation with linkage than with pleiotropy, unless the mutation rate is high, which leads to a discussion about mutation rate. This reasoning about the mutation rate of a locus illustrates the issue:

A high rate of mutation (10^-3) allows for multiple mutations in both loci in a tightly linked pair to accumulate and maintain levels of genetic covariance near to that of mutations in a single pleiotropic locus, but empirical estimations of mutation rates from varied species like bacteria and humans suggests that per-nucleotide mutation rates are in the order of 10^-8 to 10^-9 … If a polygenic locus consists of hundreds or thousands of nucleotides, as in the case of many quantitative trait loci (QTLs), then per-locus mutation rates may be as high as 10^-5, but the larger the locus the higher the chance of recombination between within-locus variants that are contributing to genetic correlation. This leads us to believe that with empirically estimated levels of mutation and recombination, strong genetic correlation between traits are more likely to be maintained if there is an underlying pleiotropic architecture affecting them than will be maintained due to tight linkage.
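(For what it’s worth, the per-locus figure in the quote is just the per-nucleotide rate scaled by locus length: 10^-8 per nucleotide times 10^3 nucleotides gives 10^-5 per locus.)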

I don’t know if it’s me or the authors who are conceptually confused here. If they are referring to QTL mapping, it is true that the quantitative trait loci that we detect in mapping studies often are huge. ”Thousands of nucleotides” is being generous to mapping studies: in many cases, we’re talking millions of them. But the size of a QTL region from a mapping experiment doesn’t tell us how many nucleotides in it matter to the trait. It reflects our poor resolution in delineating the one or more causative variants that give rise to the association signal. That being said, it might be possible to use tricks like saturation mutagenesis to figure out which mutations within a relevant region could affect a trait. Then, we could actually observe a locus in the above sense.

Another recent theoretical preprint (Chantepie & Chevin 2020) phrases it like this:

[N]ote that the nature of loci is not explicit in this model, but in any case these do not represent single nucleotides or even genes. Rather, they represent large stretches of effectively non-recombining portions of the genome, which may influence the traits by mutation. Since free recombination is also assumed across these loci (consistent with most previous studies), the latter can even be thought of as small chromosomes, for which mutation rates of the order of 10^-2 seem reasonable.

Literature

Chebib and Guillaume. ”Pleiotropy or linkage? Their relative contributions to the genetic correlation of quantitative traits and detection by multi-trait GWA studies.” bioRxiv (2019): 656413.

Chantepie and Chevin. ”How does the strength of selection influence genetic correlations?” bioRxiv (2020).

Journal club of one: ”Versatile simulations of admixture and accurate local ancestry inference with mixnmatch and ancestryinfer”

Admixture is the future of every sub-field of genetics, just in case you didn’t know. Both in the wild and in domestic animals, populations or even species sometimes cross. This causes different patterns of relatedness than in well-mixed populations. Often we want to estimate ”local ancestry”, that is: which source population a piece of chromosome in an individual originates from. It is one of those genetics problems that is made harder by the absence of any way to observe it directly.

This recent paper (Schumer et al. 2020; preprint version, which I read, here) presents a method for simulating admixed sequence data and a method for inferring local ancestry from it. It does something I like, namely pairing analysis with fake-data simulation to check methods.

The simulation method is built from four different simulators:

1. macs (Chen, Marjoram & Wall 2009), which creates polymorphism data under neutral evolution from a given population history. They use macs to generate starting chromosomes from two ancestral populations.

2. Seq-Gen (Rambaut & Grassly 1997). Chromosomes from macs are strings of 0s and 1s representing the state at biallelic markers. If you want DNA-level realism, with base composition, nucleotide substitution models and so on, you need something else. I don’t really follow how they do this. You can tell from the source code that they use the local trees that macs spits out, which Seq-Gen can then simulate nucleotides from. As they put it, the resulting sequence ”lacks other complexities of real genome sequences such as repetitive elements and local variation in base composition”, but it is a step up from ”0000110100”.

3. SELAM (Corbett-Detig & Jones 2016), which simulates admixture between populations with population history and possibly selection. Here, SELAM‘s role is to simulate the actual recombination and interbreeding to create the patterns of local ancestry, that they will then fill with the sequences they generated before.

4. wgsim, which simulates short reads from a sequence. At this point, mixnmatch has turned a set of population genetic parameters into fasta files. That is pretty cool.

On the one hand, building on tried and true tools seems to be the right thing to do; less wheel-reinventing. It’s great that the phylogenetic simulator Seq-Gen from 1997 can be used in a paper published in 2020. On the other hand, looking at the dependencies for running mixnmatch made me a little pale: seven different bioinformatics or population genetics packages (not including the dependencies you need to compile them), R, Perl and Python plus Biopython. Computational genetics is an adventure of software installation.

They use the simulator to test the performance of a hidden Markov model for inferring local ancestry (Corbett-Detig & Nielsen 2017) with different population histories and settings, and then apply it to swordtail fish data. In particular, one needs to set thresholds for picking ”ancestry informative” (i.e. sufficiently differentiated) markers between the ancestral populations, and that depends on population history and diversity.

In passing, they use it to estimate the swordtail recombination landscape:

We used the locations of observed ancestry transitions in 139 F2 hybrids that we generated between X. birchmanni and X. malinche … to estimate the recombination rate in 5 Mb windows. … We compared inferred recombination rates in this F2 map to a linkage disequilibrium based recombination map for X. birchmanni that we had previously generated (Schumer et al., 2018). As expected, we observed a strong correlation in estimated recombination rate between the linkage disequilibrium based and crossover maps (R=0.82, Figure 4, Supporting Information 8). Simulations suggest that the observed correlation is consistent with the two recombination maps being indistinguishable, given the low resolution of the F2 map (Supporting Information 8).

Twin lambs with different fathers

I just learned that in sheep, lambs from the same litter pretty often have different fathers, if the ewe has mated with different males. Berry et al. (2020) looked at sheep flocks in Ireland that used more than one ram, and:

Of the 539 pairs of twins included in the analysis, 160 (i.e. 30%) were sired by two different rams. Of the 137 sets of triplets included in the analysis, 73 (i.e. 53%) were sired by more than one ram. Of the nine sets of quadruplets, eight were sired by two rams with the remaining litter being mono‐paternal. The overall incidence of heteropaternal superfecundation among litters was therefore 35%. Given that the incidence of multiple births in these flocks was 65%, heteropaternal superfecundation is expected to be relatively common in sheep; this is especially true as all but two of the litter‐mates were polyzygotic.

They figured this out by looking at individuals genotyped on SNP chips with tens of thousands of SNPs, with both lambs and the potential parents genotyped, so there can’t be much uncertainty in the assignment. You don’t need that many genotyped markers to get a confident assignment, and they don’t have that many rams to choose from.

Time for some Mendelian inheritance

Let’s simulate a situation like this: We set up a population and a marker panel for genotyping, split them into ewes and rams, and make some lambs.

library(AlphaSimR)

## Simulate founder genomes
founderpop <- runMacs(nInd = 105,
                      nChr = 10,
                      segSites = 100)

## No separate sexes in the simulation; we split ewes and rams manually below
simparam <- SimParam$new(founderpop)
simparam$setGender("no")

## Add a SNP chip with 100 markers per chromosome
simparam$addSnpChip(nSnpPerChr = 100)

parents <- newPop(founderpop,
                  simParam = simparam)

## Split the parents into ewes and rams
ewes <- parents[1:100]
rams <- parents[101:105]

## Make lambs: 100 litters of two, with randomly drawn parents
lambs <- randCross2(females = ewes,
                    males = rams,
                    nCrosses = 100,
                    nProgeny = 2,
                    simParam = simparam)

Now, if we have the genotypes of a lamb and its mother, how do we know the father? In this paper, they use exclusion methods: they compared the genotypes of the offspring with those of the parents and used inheritance rules to exclude rams that can’t be the father, because if they were, the offspring couldn’t have the genotypes it has. Such breaking of regular inheritance patterns would be a ”Mendelian inconsistency”. This is the simplest kind of parentage assignment; fancier methods will calculate the probabilities of different genotypes and allow you to reconstruct unknown relationships.

We can do this in two ways:

  • ignore the ewe’s genotypes and look for opposite homozygotes between lamb and ram, which are impossible regardless of the mother’s genotype
  • use both the ewe’s and the ram’s genotypes to see what lamb genotypes are possible from a cross between them; this adds a few more cases where we can exclude a ram even if the lamb is heterozygous

To do the first, we count the number of opposite homozygous markers. In this genotype coding, 0 and 2 are homozygotes, and 1 is a heterozygous marker.

opposite_homozygotes <- function(ram,
                                 lamb) {
    sum(lamb == 0 & ram == 2) +
        sum(lamb == 2 & ram == 0)
}
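A toy example (made-up genotype vectors, not from the simulation) of how the count works:

## Toy genotypes at five markers: the lamb has no opposite homozygotes
## with ram_a, but two with ram_b, so ram_b can be excluded as the father
lamb_example <- c(0, 1, 2, 2, 1)
ram_a <- c(0, 2, 1, 2, 0)
ram_b <- c(2, 1, 1, 0, 0)

opposite_homozygotes(ram_a, lamb_example)
[1] 0
opposite_homozygotes(ram_b, lamb_example)
[1] 2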

When we include the ewe's genotype, there are a few more possible cases. We could enumerate all of them, but here is some R code to generate them. We first get all possible gametes from each parent, we combine the gametes in all possible combinations, and that gives us the possible lamb genotypes at that marker. If the lamb does, in fact, not have any of those genotypes, we declare the marker inconsistent. Repeat for all markers.

## Generate the possible gametes from a genotype

possible_gametes <- function(genotype) {

    if (genotype == 0) {
        gametes <- 0
    } else if (genotype == 1) {
        gametes <- c(0, 1)
    } else if (genotype == 2) {
        gametes <- 1
    }

    gametes
}

## Generate the possible genotypes for an offspring from
## parent possible gametes

possible_genotypes <- function(father_gametes,
                               mother_gametes) {

    possible_combinations <- expand.grid(father_gametes, mother_gametes)
    resulting_genotypes <- rowSums(possible_combinations)
    unique(resulting_genotypes)
}

## Check offspring genotypes for consistency with parent genotypes

mendelian_inconsistency <- function(ewe,
                                    ram,
                                    lamb) {

    n_markers <- length(ewe)
    inconsistent <- logical(n_markers)

    for (marker_ix in 1:n_markers) {

        possible_lamb_genotypes <-
          possible_genotypes(possible_gametes(ewe[marker_ix]),
                             possible_gametes(ram[marker_ix]))

        inconsistent[marker_ix] <-
          !lamb[marker_ix] %in% possible_lamb_genotypes
    }

    sum(inconsistent)
}

(These functions assume that we have genotypes in vectors. The full code that extracts this information from the simulated data and repeats for all markers is on Github.)
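For a flavour of what that looks like, here is a minimal sketch of my own (a simplification, not the full code) that pulls genotype matrices out of the simulated populations with AlphaSimR’s pullSnpGeno and checks the first lamb against every ram:

## Genotype matrices (individuals in rows, markers in columns)
lamb_geno <- pullSnpGeno(lambs, simParam = simparam)
ewe_geno <- pullSnpGeno(ewes, simParam = simparam)
ram_geno <- pullSnpGeno(rams, simParam = simparam)

## The mother of the first lamb, looked up by her ID
mother_id <- lambs@mother[1]

## Number of inconsistent markers between the first lamb and each ram;
## the true father should come out with zero
sapply(rownames(ram_geno), function(ram_id) {
  mendelian_inconsistency(ewe = ewe_geno[mother_id, ],
                          ram = ram_geno[ram_id, ],
                          lamb = lamb_geno[1, ])
})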

Here is the outcome for a set of random lambs. The red dots point out the true fathers: because we have perfect genotype data simulated without errors, the true father always has 100% consistent markers.

If we compare how many markers are found inconsistent with the two methods, we get a pattern like this graph. Including the ewe’s genotypes lets us discover a lot more inconsistent markers, but in this case, with plentiful and error-free markers, it doesn’t make a difference.

Thresholds and errors

If I have any complaint with the paper, it’s that the parentage analysis isn’t really described in the methods. This is what it says:

Parentage testing using simple exclusion‐based approaches is determined by the proportion of opposing homozygotes in putative sire–offspring pairs.

/…/

Maternal verification was undertaken using the exclusion method (Double et al . 1997) comparing the genotype of the dam with that of her putative progeny and only validated dam–offspring pairs were retained. Genotypes of the mature rams in the flock were compared with all lambs born in that flock using the exclusion method.

(The reference is related to exclusion methods, but it’s describing how to calculate exclusion probabilities in a certain circumstance. That is, it’s part of a methodological conversation about exclusion methods, but doesn’t actually describe what they did.)

I don’t doubt that they did it well. Still, it would be interesting to know the details, because in the absence of perfect genotype data, they must have had some thresholds for error and some criterion for deciding which ram was right, even if it seemed obvious.

Literature

Berry, D. P., et al. ”Heteropaternal superfecundation frequently occurs in multiple‐bearing mob‐mated sheep.” Animal Genetics (2020).

Journal club of one: ”Eliciting priors and relaxing the single causal variant assumption in colocalisation analyses”

This paper (Wallace 2020) is about improvements to the colocalisation method for genome-wide association studies called coloc. If you have an association to trait 1 in a region, and another association with trait 2, coloc investigates whether they are caused by the same variant or not. I’ve never used coloc, but I’m interested because setting reasonable priors is related to getting reasonable parameters for genetic architecture.

The paper also looks at how coloc is used in the literature (with default settings, unsurprisingly), and extends coloc to relax the assumption of only one causal variant per region. In that way, it’s a solid example of thoughtfully updating a popular method.

(A note about style: This isn’t the clearest paper, for a few reasons. The structure of the introduction is indirect, talking a lot about Mendelian randomisation before concluding that coloc isn’t Mendelian randomisation. The paper also uses numbered hypotheses H0-H4 instead of spelling out what they mean … If you feel a little stupid reading it, it’s not just you.)

coloc is what we old QTL mappers call a pleiotropy versus linkage test. It tries to distinguish five scenarios: no association, trait 1 only, trait 2 only, both traits with linked variants, both traits with the same variant.

This paper deals with the priors: What is the prior probability of a causal association to trait 1 only p_1, trait 2 only p_2, or both traits p_{12}, and are the defaults good?

They reparametrise the priors so that it becomes possible to get some estimates from the literature. They work with the probability that a SNP is causally associated with each trait (which means adding the probabilities of association: q_1 = p_1 + p_{12}). This means that you can look at single-trait association data and get an idea of the number of marginal associations, possibly dependent on allele frequency. The estimates from a gene expression dataset and a genome-wide association catalogue work out to a prior around 10^{-4}, which is the coloc default. So far so good.

How about p_{12}?

If traits were independent, you could just multiply q_1 and q_2. But not all of the genome is functional. If you could straightforwardly define a functional proportion, you could just divide by it.
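A sketch of that arithmetic, in my own notation with f for the functional proportion: if causal associations only occur within a functional fraction f of SNPs, and the two traits are independent within that fraction, then

p_{12} = f \cdot \frac{q_1}{f} \cdot \frac{q_2}{f} = \frac{q_1 q_2}{f}

so with q_1 = q_2 = 10^{-4}, the joint prior would be 10^{-8}/f, and the assumed functional proportion matters a lot.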

You could also look at the genetic correlation between traits. It makes sense that the overall genetic relationship between two traits should inform the prior that you see overlap at this particular locus. This gives a lower limit for p_{12}. Unfortunately, this still leaves us dependent on what kinds of traits we’re analysing. Perhaps it’s not so surprising that there isn’t one prior that universally works for all kinds of pairs of traits:

Attempts to colocalise disease and eQTL signals have ranged from underwhelming to positive. One key difference between outcomes is the disease-specific relevance of the cell types considered, which is consistent with variable chromatin state enrichment in different GWAS according to cell type. For example, studies considering the overlap of open chromatin and GWAS signals have convincingly shown that tissue relevance varies by up to 10 fold, with pancreatic islets of greatest relevance for traits like insulin sensitivity and immune cells for immune-mediated diseases. This suggests that p_{12} should depend explicitly on the specific pair of traits under consideration, including cell type in the case of eQTL or chromatin mark studies. One avenue for future exploration is whether fold change in enrichment of open chromatin/GWAS signal overlap between cell types could be used to modulate p_{12} and select larger values for more a priori relevant tissues.

Literature

Wallace, Chris. ”Eliciting priors and relaxing the single causal variant assumption in colocalisation analyses.” PLoS Genetics 16.4 (2020): e1008720.

Adrian Bird on genome ecology

I recently read this essay by Adrian Bird on ”The Selfishness of Law-Abiding Genes”. That is a colourful title in itself, but it doesn’t stop there; this is an extremely metaphor-rich piece. In terms of the theoretical content, there is not much new under the sun. Properties of the organism like complexity, redundancy, and all those exquisite networks of developmental gene regulation may be the result of non-adaptive processes, like constructive neutral evolution and intragenomic conflict. As the title suggests, Bird argues that this kind of thinking is generally accepted about things like transposable elements (”selfish DNA”), but that the same logic applies to regular ”law-abiding” genes. They may also be driven by other evolutionary forces than a net fitness gain at the organismal level.

He gives a couple of possible examples: toxin–antitoxin gene pairs, RNA editing, and MeCP2 (probably Bird’s favourite protein; he has done a lot of work on it). He gives this possible description of MeCP2 evolution:

Loss of MeCP2 via mutation in humans leads to serious defects in the brain, which might suggest that MeCP2 is a fundamental regulator of nervous system development. Evolutionary considerations question this view, however, as most animals have nervous systems, but only vertebrates, which account for a small proportion of the animal kingdom, have MeCP2. This protein therefore appears to be a late arrival in evolutionary terms, rather than being a core ancestral component of brain assembly. A conventional view of MeCP2 function is that by exerting global transcriptional restraint it tunes gene expression in neurons to optimize their identity, but it is also possible to devise a scenario based on self-interest. Initially, the argument goes, MeCP2 was present at low levels, as it is in non-neuronal tissues, and therefore played little or no role in creating an optimal nervous system. Because DNA methylation is sparse in the great majority of the genome, sporadic mutations that led to mildly increased MeCP2 expression would have had a minimal dampening effect on transcription that may initially have been selectively neutral. If not eliminated by drift, further chance increases might have followed, with neuronal development incrementally adjusting to each minor hike in MeCP2-mediated repression through compensatory mutations in other genes. Mechanisms that lead to ‘constructive neutral evolution’ of this kind have been proposed. Gradually, brain development would accommodate the encroachment of MeCP2 until it became an essential feature. So, in response to the question ‘why do brains need MeCP2?’, the answer under this speculative scenario would be: ‘they do not; MeCP2 has made itself indispensable by stealth’.

I think this is a great passage, and it can be read both as a metaphorical reinterpretation and as a substantive hypothesis. The empirical question ”Did MeCP2 offer an important innovation to vertebrate brains as it arose?” is a bit hard to answer with data, though. On the other hand, if we just consider the metaphor, can’t you say the same about every functional protein? Sure, it’s nice to think of p53 as the Guardian of the Genome, but can’t it also be viewed as a gangster extracting protection money from the organism? ”Replicate me, or you might get cancer later …”

The piece argues for a gene-centric view that thinks of molecules and the evolutionary pressures they face. This doesn’t seem to be the fashionable view (sorry, extended synthesists!), but Bird argues that it would be healthy for molecular cell biologists to think more about the alternative, non-adaptive, bottom-up perspective. I don’t think the point is to advocate that way of thinking to the exclusion of all others. To me, the piece reads more like an invitation to use a broader set of metaphors and verbal models to aid hypothesis generation.

There are too many good quotes in this essay, so I’ll just quote one more from the end, where we’ve jumped from the idea of selfish law-abiding genes, over ”genome ecology” — not in the sense of using genomics in ecology, but in the sense of thinking of the genome as some kind of population of agents with different niches and interactions, I guess — to ”Genetics Meets Sociology?”

Biologists often invoke parallels between molecular processes of life and computer logic, but a gene-centered approach suggests that economics or social science may be a more appropriate model …

I feel like there is a circle of reinforcing metaphors here. Sometimes, when we have to explain how something came to be, for example a document, a piece of computer code or the way we do things in an organisation, we say ”it grew organically” or ”it evolved”. Sometimes we talk about the genome as a computer program, and sometimes we talk about our messy computer program code as an organism. Like viruses are just like computer viruses, only biological.

Literature

Bird, Adrian. ”The Selfishness of Law-Abiding Genes.” Trends in Genetics 36.1 (2020): 8-13.