Computational Genetics Discussion Cookies

The Computational Genetics Discussion Group is an informal seminar series on anything related to quantitative genetics, genomics and breeding, run by the Hickey group at the Roslin Institute. Over the last year and a half or so, I’ve been the one emailing people and bringing biscuits, and at some point, I got fed up with the biscuits available at my local Tesco. Here is my recipe for computational genetics discussion cookies.

CGDC
Makes ca 50 cookies

1. Melt and brown 250 g butter.

2. Mix 100 g white sugar, 100 g Demerara sugar, 65 g of golden syrup, and 2 teaspoons of vanilla extract.

3. Add the melted butter and two eggs and whisk together.

4. Mix 375 g of flour and 0.75 teaspoons of bicarbonate. Add this to the butter, egg and sugar mix.

5. Split the batter into two halves. To each half, add one of:

  • 300 g chopped chocolate
  • 5 crushed digestive biscuits and 2 teaspoons of ground cinnamon
  • 50 g of crushed mini pretzels and 200 g of chopped fruit jellies
  • 50 g of oats and 120 g of raisins (weigh the raisins dry and then soak in hot water)
  • 75 g of desiccated coconut and 120 g of raisins
  • 125 g of granola mix

6. Bake for 7.5 minutes at 200 degrees Celsius.

7. Let the cookies rest for at least two minutes before moving them off the tray.

Temple Grandin at Roslin: optimisation and overselection

A couple of months ago (16 May to be precise), I listened to a talk by Temple Grandin at the Roslin Institute.

Grandin is a captivating speaker, and as an animal scientist (of some kind), I’m happy to have heard her talk at least once. The lecture contained a mix of:

  • practical experiences from a career of troubleshooting livestock management and systems,
  • how thinking differently (visually) helps in working with animal behaviour,
  • terrific personal anecdotes, among other things about starting up her business as a livestock management consultant from a student room,
  • a recurring theme, throughout the talk, of unintended side-effects in animal breeding, framed as a risk of ”overselecting” for any one trait, uncertainty about ”what is optimal”, and the importance of measuring and soberly evaluating many different things about animals and systems.

This latter point interests me, because it concerns genetics and animal breeding. Judging by the questions in the Q&A, it also especially interested the rest of the audience, mostly composed of vet students.

Grandin repeatedly cautioned against ”overselecting”. She argued that if you take one trait, any trait, and apply strong directional selection, bad side-effects will emerge. As a loosely worded biological principle, and taken to extremes, this seems likely to be true. If we assume that traits are polygenic, that means both that variants are likely to be pleiotropic (because there are many causal variants and a limited number of genes; this is one argument for the omnigenic model) and that variants are likely to be linked to other variants that affect other traits. So changing one trait a lot is likely to affect other traits. And if we assume that the animal was in a pretty well-functioning state before selection, we should expect that if some trait that we’re not consciously selecting on changes far enough from that state, that is likely to cause problems.

We can also safely assume that there are always more traits that we care about than we can actually measure, either because they haven’t become a problem yet, or because we don’t have a good way to measure them. Taken together, this sounds like a case for being cautious, measuring a lot of things about animal performance and welfare, and continuously re-evaluating what one is doing. Grandin emphasised the importance of measurement, drumming in that ”you will manage what you measure”, that ”this happens gradually”, and that, therefore, there is a risk that ”the bad becomes the new normal” if one does not keep tabs on the situation by recording hard quantitative data.

Doesn’t this sound a lot like the conventional view of mainstream animal breeding? I guess it depends: breeding is a big field, covering a lot of experiences and views, from individual farmers’ decisions, through private and public breeding organisations, to the relative Castalia of academic research. However, my impression of the field is that Grandin and mainstream animal breeders are in agreement about the importance of:

  1. recording lots of traits about all aspects of the performance and functioning of the animal,
  2. optimising them with good performance on the farm as the goal,
  3. constantly re-evaluating practice and updating the breeding goals and management to keep everything on track.

To me, what Grandin presented as if it was a radical message (and maybe it was, some time ago, or maybe it still is, in some places) sounded much like singing the praises of economic selection indices. I had expected something more controversial. Then again, that depends on what assumptions are built into words like ”good performance”, ”on track”, ”functioning of the animal” etc. For example, she talked a bit about the strand of animal welfare research that aims to quantify positive emotions in animals; one could take the radical stance that we should measure positive emotions and include that in the breeding goal.

”Overselection” as a term also carries connotations that I don’t agree with, because I don’t think that the framing as biological overload is helpful. To talk about overload and ”overselection” makes one think of selection as a force that strains the animal in itself, and of the alternative as ”backing off” (an expression Grandin repeatedly used in the talk). But if the breeding goal is off the mark, in the sense that it doesn’t get towards what’s actually optimal for the animal on the farm, breeding less efficiently does not get you to a better outcome; it only gets you to the same, suboptimal, outcome more slowly. The problem isn’t efficiency in itself, but misspecification, and uncertainty about what the goal should be.

Grandin expands on this idea in the introductory chapter to ”Are we pushing animals to their biological limits? Welfare and ethical applications” (Grandin & Whiting 2018, eds). (I don’t know much about the pig case used as illustration, but I can think of a couple of other examples that illustrate the same point.) It ends with this great metaphor about genomic power tools, that I will borrow for later:

We must be careful not to repeat the mistakes that were made with conventional breeding where bad traits were linked with desirable traits. One of the best ways to prevent this is for both animal and plant breeders to do what I did in the 1980s and 1990s: I observed many different pigs from many places and the behaviour problems became obvious. This enabled me to compare animals from different lines in the same environment. Today, both animal and plant breeders have ‘genomic power tools’ for changing an organism’s genetics. Power tools are good things, but they must be used carefully because changes can be made more quickly. A circular saw can chop your hand off much more easily than a hand saw. It has to be used with more care.

Using R: Correlation heatmap with ggplot2

(This post was originally written on 2013-03-23. Since then, it has persistently remained one of my most visited posts, and I’ve decided to revisit and update it. I may do the same to some other old R-related posts that people still arrive on through search engines. There was also this follow-up, which I’ve now incorporated here.)

Just a short post to celebrate when I learned how incredibly easy it is to make a heatmap of correlations with ggplot2 (with some appropriate data preparation, of course). Here is a minimal example using the reshape2 package for preparation and the built-in attitude dataset:

library(ggplot2)
library(reshape2)
qplot(x = Var1, y = Var2,
      data = melt(cor(attitude)),
      fill = value,
      geom = "tile")

(Figure: correlation heatmap of the attitude dataset.)

What is going on in that short passage?

  • cor makes a correlation matrix with all the pairwise correlations between variables (each correlation appears twice, since the matrix is symmetric, plus a diagonal of ones).
  • melt takes the matrix and creates a data frame in long form, each row consisting of id variables Var1 and Var2 and a single value.
  • We then plot with the tile geometry, mapping the id variables to rows and columns, and value (i.e. the correlations) to the fill colour.
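
To see what melt actually produces, one can look at the first few rows of the prepared data. This is just inspecting the same object as in the plot above; nothing new is assumed here:

melted_attitude <- melt(cor(attitude))

## Each row is one cell of the correlation matrix:
## Var1 and Var2 identify the pair of variables,
## and value holds their correlation
head(melted_attitude)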

However, there is one more thing that is really needed, even if just for the first quick plot one makes for oneself: a better scale. The default scale is not the best for correlations, which range from -1 to 1, because it’s hard to tell where zero is. Let’s use the airquality dataset for illustration, as it actually has some negative correlations. In ggplot2, a scale that has a midpoint and a different colour in each direction is called scale_fill_gradient2 (for a fill aesthetic), and we just need to add it. I also set the limits to -1 and 1, which doesn’t change the colour but fills out the legend for completeness. Done!

data <- airquality[,1:4]
qplot(x = Var1, y = Var2,
      data = melt(cor(data, use = "p")),
      fill = value,
      geom = "tile") +
   scale_fill_gradient2(limits = c(-1, 1))

(Figure: correlation heatmap of the airquality variables, with a diverging colour scale.)

Finally, if you’re anything like me, you may be phasing out reshape2 in favour of tidyr. If so, you’ll need a couple of extra steps to turn the correlation matrix into a data frame in long form, like so:

library(tidyr)

correlations <- data.frame(cor(data, use = "p"))
correlations$Var1 <- rownames(correlations)
melted <- gather(correlations, "Var2", "value", -Var1)

qplot(x = Var1, y = Var2,
      data = melted,
      fill = value,
      geom = "tile") +
   scale_fill_gradient2(limits = c(-1, 1))

The data preparation is no longer a one-liner, but, honestly, it probably shouldn’t be.
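
(As an aside: in more recent versions of tidyr, gather has been superseded by pivot_longer. Assuming tidyr 1.0 or later, an equivalent reshaping step would look something like this sketch, which produces the same Var1/Var2/value columns:)

melted <- pivot_longer(correlations,
                       -Var1,
                       names_to = "Var2",
                       values_to = "value")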

Okay, you won’t stop reading until we’ve made a solution with pipes? Sure, we can do that! It will be pretty gratuitous and messy, though. From the top!

library(magrittr)

airquality %>%
    '['(1:4) %>%
    cor(use = "p") %>%
    data.frame %>%
    transform(Var1 = rownames(.)) %>%
    gather("Var2", "value", -Var1) %>%
    ggplot() +
        geom_tile(aes(x = Var1,
                      y = Var2,
                      fill = value)) +
        scale_fill_gradient2(limits = c(-1, 1))

Jennifer Doudna & Samuel Sternberg ”A Crack In Creation”

While the blog is on a relaxed summer schedule, you can read my book review of Jennifer Doudna’s and Samuel Sternberg’s CRISPR-related autobiography and discussion of the future of genome editing in the University of Edinburgh’s science magazine EUSci, issue #24.

The book is called A Crack in Creation, subtitled The New Power to Control Evolution, or depending on edition, Gene Editing and the Unthinkable Power to Control Evolution. There are a couple of dramatic titles for you. The book starts out similarly dramatic, with Jennifer Doudna dreaming of a beach in Hawaii, where she comes from, imagining a wave rising from the ocean to crash down on her. The wave is CRISPR/Cas9 genome editing, the technological force of nature that Doudna and colleagues have let loose on the world.

I like the book, talk about some notable omissions, and take issue with the bombastic and inaccurate title(s). Read the rest here.

Links

Pivotal CRISPR patent battle won by Broad Institute. Nature News. 2018.

Sharon Begley & Andrew Joseph. The CRISPR shocker: How genome-editing scientist He Jiankui rose from obscurity to stun the world. Stat News. 2018.

A Crack in Creation. The book’s website.

‘Simulating genetic data with R: an example with deleterious variants (and a pun)’

A few weeks ago, I gave a talk at the Edinburgh R users group EdinbR on the RAGE paper. Since this is an R meetup, the talk concentrated on the mechanics of genetic data simulation, with the paper as a case study. I showed off some of what Chris Gaynor’s AlphaSimR can do, and how we built on it for the specifics of this simulation study. The slides are on the EdinbR Github.

Genetic simulation is useful for all kinds of things. Sure, simulations are only as good as the theory that underpins them, but the willingness to try things out in simulation is one of the things I always liked about breeding research.

This is my description of the logic of genetic simulation: we think of the genome as a large table of genotypes, drawn from some distribution of allele frequencies.

To make an utterly minimal simulation, we could draw allele frequencies from some distribution (like a Beta distribution), and then draw the genotypes from a binomial distribution. Done!
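
As a concrete illustration, here is a minimal sketch of that idea in R. The Beta shape parameters and the numbers of individuals and loci are arbitrary choices for the example, not anything from the talk:

## Utterly minimal genotype simulation:
## allele frequencies from a Beta distribution,
## genotypes from a binomial distribution
n_ind <- 100
n_loci <- 1000

## One allele frequency per locus
freq <- rbeta(n_loci, shape1 = 0.5, shape2 = 0.5)

## Genotypes coded as 0, 1 or 2 copies of the allele;
## the matrix fills column-wise, so each column (locus)
## gets its own allele frequency
geno <- matrix(rbinom(n_ind * n_loci,
                      size = 2,
                      prob = rep(freq, each = n_ind)),
               nrow = n_ind,
               ncol = n_loci)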

However, there is a ton of nuance we would like to have: chromosomes, linkage between variants, sexes, mating, selection …

AlphaSimR addresses all of this, and allows you to throw individuals and populations around to build pretty complicated designs. Here is the small example simulation I used in the talk.


library(AlphaSimR)
library(ggplot2)

## Generate founder chromosomes

FOUNDERPOP <- runMacs(nInd = 1000,
                      nChr = 10,
                      segSites = 5000,
                      inbred = FALSE,
                      species = "GENERIC")


## Simulation parameters

SIMPARAM <- SimParam$new(FOUNDERPOP)
SIMPARAM$addTraitA(nQtlPerChr = 100,
                   mean = 100,
                   var = 10)
SIMPARAM$addSnpChip(nSnpPerChr = 1000)
SIMPARAM$setGender("yes_sys")


## Founding population

pop <- newPop(FOUNDERPOP,
              simParam = SIMPARAM)

pop <- setPheno(pop,
                varE = 20,
                simParam = SIMPARAM)


## Breeding

print("Breeding")
breeding <- vector(length = 11, mode = "list")
breeding[[1]] <- pop

for (i in 2:11) {
    print(i)
    sires <- selectInd(pop = breeding[[i - 1]],
                       gender = "M",
                       nInd = 25,
                       trait = 1,
                       use = "pheno",
                       simParam = SIMPARAM)

    dams <- selectInd(pop = breeding[[i - 1]],
                      nInd = 500,
                      gender = "F",
                      trait = 1,
                      use = "pheno",
                      simParam = SIMPARAM)

    breeding[[i]] <- randCross2(males = sires,
                                females = dams,
                                nCrosses = 500,
                                nProgeny = 10,
                                simParam = SIMPARAM)
    breeding[[i]] <- setPheno(breeding[[i]],
                              varE = 20,
                              simParam = SIMPARAM)
}



## Look at genetic gain and shift in causative variant allele frequency

mean_g <- unlist(lapply(breeding, meanG))
sd_g <- sqrt(unlist(lapply(breeding, varG)))

plot_gain <- qplot(x = 1:11,
                   y = mean_g,
                   ymin = mean_g - sd_g,
                   ymax = mean_g + sd_g,
                   geom = "pointrange",
                   main = "Genetic mean and standard deviation",
                   xlab = "Generation", ylab = "Genetic mean")

start_geno <- pullQtlGeno(breeding[[1]], simParam = SIMPARAM)
start_freq <- colSums(start_geno)/(2 * nrow(start_geno))

end_geno <- pullQtlGeno(breeding[[11]], simParam = SIMPARAM)
end_freq <- colSums(end_geno)/(2 * nrow(end_geno))

plot_freq_before <- qplot(start_freq, main = "Causative variant frequency before") 
plot_freq_after <- qplot(end_freq, main = "Causative variant frequency after") 

This code builds a small livestock population, breeds it for ten generations, and looks at the resulting selection response in the form of a shift of the genetic mean and the changes in the underlying distribution of causative variants, summarised in the plots of genetic gain and of causative variant allele frequencies before and after selection.

‘Approaches to genetics for livestock research’ at IASH, University of Edinburgh

A couple of weeks ago, I was at a symposium on the history of genetics in animal breeding at the Institute for Advanced Studies in the Humanities, organised by Cheryl Lancaster. There were talks by two geneticists and two historians, and ample time for discussion.

First, the geneticists:

Gregor Gorjanc presented the very essence of quantitative genetics: the pedigree-based model. He illustrated this with graphs (in the sense of edges and vertices) and by predicting his own breeding value for height from trait values, and from his personal genomics results.

Then, yours truly gave this talk: ‘Genomics in animal breeding from the perspectives of matrices and molecules’. Here are the slides (only slightly mangled by Slideshare). This is the talk I was preparing for when I collected the quotes I posted a couple of weeks ago.

I talked about how there are two perspectives on genomics: you can think of genomes either as large matrices of ancestry indicators (statistical perspective) or as long strings of bases (sequence perspective). Both are useful, and give animal breeders and breeding researchers different tools (genomic selection, reference genomes). I also talked about potential future breeding strategies that use causative variants, and how they’re not about stopping breeding and designing the perfect animal in a lab, but about supplementing genomic selection in different ways.

Then, the historians:

Cheryl Lancaster told the story of how ABGRO, the Animal Breeding and Genetics Research Organisation in Edinburgh, lost its G. The organisation was split up in the 1950s, separating fundamental genetics research from animal breeding. She said that she had expected this split to be due to scientific, methodological or conceptual differences, but instead found, going through the archives, that it was all due to personal conflicts. She also got into how the ABGRO researchers justified their work, framing it as ”fundamental research”, and aspired to do long-term research projects.

Jim Lowe talked about the pig genome sequencing and mapping efforts, how they differed from the human genome project in organisation, and how much they used comparisons to the human genome. At one point he showed a photo of Alan Archibald using the gEVAL genome browser to quality-check the pig genome. He also argued that the infrastructural outcomes of a project like the human genome project, such as making it possible for pig genome scientists to use the human genome for comparisons, are more important and less predictable than usually assumed.

The discussion included comments by some of the people who were there (Chris Haley, Bill Hill), discussion about the breed concept, and what scientists can learn from history.

What is a breed? Is it a genetical thing, defined by grouping individuals based on their relatedness, a historical thing, based on what people think a certain kind of animal is supposed to look like, or a marketing tool, naming animals that come from a certain system? It is probably a bit of everything. (I talked with Jim Lowe during lunch; he had noticed how I referred to Griffiths & Stotz for gene concepts, but omitted the ”post-genomic” gene concept they actually favour. This is because I didn’t find it useful for understanding how animal breeding researchers think. It is striking how comfortable biologists are with using fuzzy concepts that can’t be defined in a way that covers all corner cases, because biology doesn’t work that way. If the nominal gene concept is broken by trans-splicing, practicing genomicists will probably think of that more as a practical issue with designing gene databases than as something that invalidates talking about genes in principle.)

What would researchers like to learn from history? Probably how to succeed with large research endeavours and how to get funding for them. Can one learn that from history? Maybe not, but there might be lessons about thinking of research as ”basic”, ”fundamental”, ”applied” etc., and about what the long-term effects of research might be.

What single step does with relationship

We had a journal club about the single step GBLUP method for genomic evaluation a few weeks ago. In this post, we’ll make a few graphs of how the single step method models relatedness between individuals.

Imagine you want to use genomic selection in a breeding program that already has a bunch of historical pedigree and trait information. You could use some so-called multistep evaluation that uses one model for the classical pedigree + trait quantitative genetics and one model for the genotype + trait genomic evaluation, and then mix the predictions from them together. Or you could use the single-step method, which combines pedigree, genotypes and traits into one model. It does this by combining the relationship estimates from pedigree and genotypes. That matrix can then go into your mixed model.
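
(Written as an equation, this is just the usual animal model with the combined relationship matrix plugged into the covariance of the breeding values; a generic sketch to fix notation, not the exact model from any particular paper:)

$$ y = X\beta + Zu + e, \qquad u \sim N(0, \sigma_u^2 H) $$

where H is the combined pedigree–genomic relationship matrix described below.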

We’ll illustrate this with a tiny simulated population: five generations of 100 individuals per generation, where ten random pairings produce the next generation, with families of ten individuals. (The R code is on Github and uses AlphaSimR for simulation and AGHmatrix for matrices). Here is a heatmap of the pedigree-based additive relationship matrix for the population:

What do we see? In the lower left corner are the founders, and not knowing anything about their heritage, the matrix has them down as unrelated. The squares of high relatedness along the diagonal are the families in each generation. As we go upwards and to the right, relationship is building up.

Now, imagine the last generation of the population also has been genotyped with a SNP chip. Here is a heatmap of their genomic relationship matrix:

Genomic relationship is more detailed. We can still discern the ten families within the last generation, but no longer are all the siblings equally related to each other and to their ancestors. The genotyping helps track segregation within families, pointing out to us when relatives are more or less related than the average that we get from the pedigree.
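
(The Github code linked above uses AGHmatrix to build these matrices. Just to show roughly what a genomic relationship matrix is, here is a sketch of one common construction, VanRaden’s (2008) first method, for a matrix of genotypes coded 0/1/2; the geno object here is a placeholder for illustration, not the simulated data from the post:)

## Sketch of a genomic relationship matrix, VanRaden (2008) method 1
## geno: individuals x markers matrix of allele counts (0, 1, 2)
vanraden_G <- function(geno) {
    p <- colMeans(geno) / 2                  # allele frequencies estimated from the data
    Z <- sweep(geno, 2, 2 * p)               # centre each marker at twice its frequency
    tcrossprod(Z) / (2 * sum(p * (1 - p)))   # ZZ' scaled by 2 * sum(p(1 - p))
}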

Enter the single-step relationship matrix. The idea is to put in the genomic relationships for the genotyped individuals into the big pedigree-based relationship matrix, and then adjust the rest of the matrix to propagate that extra information we now have from the genotyped individuals to their ungenotyped relatives. Here is the resulting heatmap:

You can find the matrix equations in Legarra, Aguilar & Misztal (2009). The matrix, called H, is broken down into four partitions called H11, H12, H21, and H22. H22 is the part that pertains to the genotyped animals, and it’s equal to the genomic relationship matrix G (after some rescaling). The others are transformations of G and the corresponding parts of the additive relationship matrix, spreading G onto A.
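
For reference, the combined matrix in the form given by Legarra et al. (2009), with index 1 for the ungenotyped and 2 for the genotyped individuals, and leaving out the scaling and blending tweaks often used in practice, is:

$$
H = \begin{pmatrix}
A_{11} + A_{12} A_{22}^{-1} (G - A_{22}) A_{22}^{-1} A_{21} & A_{12} A_{22}^{-1} G \\
G A_{22}^{-1} A_{21} & G
\end{pmatrix}
$$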

To show what is going on, here is the difference between the additive relationship matrix and the single-step relationship matrix, with lines delineating the genotyped animals and breaking the matrix into the four partitions:

What do we see? In the top right corner, we have a lot of difference, where the genomic relationship matrix has been plugged in. Then, fading as we go from top to bottom and from right to left, we see the influence of the genomic relationship on relatives, diminishing the further we get from the genotyped individuals.

Literature

Legarra, A., Aguilar, I. & Misztal, I. (2009) ”A relationship matrix including full pedigree and genomic information.” Journal of Dairy Science 92(9): 4656–4663.