Computational Genetics Discussion Cookies

The Computational Genetics Discussion Group is an informal seminar series on anything to do with quantitative genetics, genomics, and breeding, run by the Hickey group at the Roslin Institute. Over the last year and a half or so, I’ve been the one emailing people and bringing biscuits, and at some point, I got fed up with the biscuits available at my local Tesco. Here is my recipe for computational genetics discussion cookies.

CGDC
Makes ca. 50 cookies

1. Melt and brown 250 g butter.

2. Mix 100 g of white sugar, 100 g of Demerara sugar, 65 g of golden syrup, and 2 teaspoons of vanilla extract.

3. Add the melted butter and two eggs and whisk together.

4. Mix 375 g of flour and 0.75 teaspoons of bicarbonate of soda. Add this to the butter, egg, and sugar mix.

5. Split the batter into two halves. To each half, add one of:

  • 300 g chopped chocolate
  • 5 crushed digestive biscuits and 2 teaspoons of ground cinnamon
  • 50 g of crushed mini pretzels and 200 g of chopped fruit jellies
  • 50 g of oats and 120 g of raisins (weigh the raisins dry, then soak them in hot water)
  • 75 g of desiccated coconut and 120 g of raisins
  • 125 g of granola mix

6. Bake for 7.5 minutes at 200 degrees Celsius.

7. Let the cookies rest for at least two minutes before moving them off the tray.

Temple Grandin at Roslin: optimisation and overselection

A couple of months ago (16 May to be precise), I listened to a talk by Temple Grandin at the Roslin Institute.

Grandin is a captivating speaker, and as an animal scientist (of some kind), I’m happy to have heard her talk at least once. The lecture contained a mix of:

  • practical experiences from a career of troubleshooting livestock management and systems,
  • how thinking differently (visually) helps in working with animal behaviour,
  • terrific personal anecdotes, among other things about starting up her business as a livestock management consultant from a student room,
  • a recurring theme of unintended side-effects in animal breeding, framed as a risk of “overselecting” for any one trait, uncertainty about “what is optimal”, and the importance of measuring and soberly evaluating many different things about animals and systems.

This latter point interests me, because it concerns genetics and animal breeding. Judging by the questions in the Q&A, it also especially interested the rest of the audience, mostly composed of vet students.

Grandin repeatedly cautioned against “overselecting”. She argued that if you take one trait, any trait, and apply strong directional selection, bad side-effects will emerge. As a loosely worded biological principle, and taken to extremes, this seems likely to be true. If we assume that traits are polygenic, that means both that variants are likely to be pleiotropic (because there are many causal variants and a limited number of genes; this is one argument for the omnigenic model) and that variants are likely to be linked to other variants that affect other traits. So changing one trait a lot is likely to affect other traits. And if we assume that the animal was in a pretty well-functioning state before selection, we should expect that if some trait that we’re not consciously selecting on changes far enough from that state, it is likely to cause problems.
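To make the correlated-response part of this argument concrete, here is a minimal sketch in Python. Every number in it (sample size, genetic correlation, proportion selected) is invented for illustration; the point is just that truncation selection on one trait drags a genetically correlated trait along with it.

```python
# Truncation selection on trait 1 also shifts a genetically correlated
# trait 2, even though trait 2 is never looked at. Parameters invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000    # individuals (illustrative)
r_g = 0.4      # assumed genetic correlation between the two traits

# Breeding values for two correlated traits with unit variances.
cov = np.array([[1.0, r_g],
                [r_g, 1.0]])
bv = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Keep the top 20% on trait 1 only.
threshold = np.quantile(bv[:, 0], 0.8)
selected = bv[bv[:, 0] > threshold]

print("mean of trait 1 among selected:", selected[:, 0].mean())
print("mean of trait 2 among selected:", selected[:, 1].mean())
# Trait 2 moves by roughly r_g times the selection differential on
# trait 1. Here selection acts directly on breeding values; with
# phenotypic selection the shift would be scaled by heritabilities too.
```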

We can also safely assume that there are always more traits that we care about than we can actually measure, either because they haven’t become a problem yet, or because we don’t have a good way to measure them. Taken together, this sounds like a case for being cautious, measuring a lot of things about animal performance and welfare, and continuously re-evaluating what one is doing. Grandin emphasised the importance of measurement, drumming in that “you will manage what you measure”, that “this happens gradually”, and that therefore there is a risk that “the bad becomes the new normal” if one does not keep tabs on the situation by recording hard quantitative data.

Doesn’t this sound a lot like the conventional view of mainstream animal breeding? I guess it depends: breeding is a big field, covering a lot of experiences and views, from individual farmers’ decisions, through private and public breeding organisations, to the relative Castalia of academic research. However, from my view of the field, my impression is that Grandin and mainstream animal breeders agree about the importance of:

  1. recording lots of traits about all aspects of the performance and functioning of the animal,
  2. optimising them with good performance on the farm as the goal,
  3. constantly re-evaluating practice and updating the breeding goals and management to keep everything on track.

To me, what Grandin presented as if it were a radical message (and maybe it was, some time ago, or maybe it still is, in some places) sounded much like singing the praises of economic selection indices. I had expected something more controversial. Then again, that depends on what assumptions are built into words like “good performance”, “on track”, “functioning of the animal” etc. For example, she talked a bit about the strand of animal welfare research that aims to quantify positive emotions in animals; one could take the radical stance that we should measure positive emotions and include them in the breeding goal.
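For readers who haven’t met them: a Smith–Hazel index combines several recorded traits into a single score, weighting them by both the genetics and the economics. A minimal sketch, with invented (co)variance matrices and economic values:

```python
# Toy Smith-Hazel selection index: index weights b = P^(-1) G a.
# The matrices and economic values below are made up for illustration.
import numpy as np

# Phenotypic (co)variance matrix of the recorded traits.
P = np.array([[1.0, 0.3],
              [0.3, 1.0]])
# Genetic (co)variance matrix of the same traits.
G = np.array([[0.4, 0.1],
              [0.1, 0.3]])
# Economic value of one unit of genetic change in each trait.
a = np.array([1.0, 2.0])

# Weights that maximise the correlation between the index I = b'x
# (x being the phenotypes) and the aggregate genotype H = a'g.
b = np.linalg.solve(P, G @ a)
print("index weights:", b)
```

Candidates are then ranked on their index scores; the practical debates are about what goes into the economic values and how the (co)variances are estimated.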

“Overselection” as a term also carries connotations that I don’t agree with, because I don’t think that the framing as biological overload is helpful. To talk about overload and “overselection” makes one think of selection as a force that strains the animal in itself, and of the alternative as “backing off” (an expression Grandin used repeatedly in the talk). But if the breeding goal is off the mark, in the sense that it doesn’t aim at what’s actually optimal for the animal on the farm, breeding less efficiently does not get you a better outcome; it only gets you to the same suboptimal outcome more slowly. The problem isn’t efficiency in itself, but misspecification, and uncertainty about what the goal should be.
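Here is the same point as a toy calculation, with arbitrary numbers: if the goal itself is wrong, turning down the intensity changes when you arrive, not where.

```python
# "Backing off" under a misspecified goal: lower selection intensity
# only delays arrival at the same wrong endpoint. Numbers arbitrary.

goal = 10.0          # where the (misspecified) breeding goal points
true_optimum = 6.0   # where the animals would actually function best

def endpoint(rate, generations=500):
    """Trait mean moving a fixed fraction of the remaining distance
    to the breeding goal each generation; returns the final mean."""
    z = 0.0
    for _ in range(generations):
        z += rate * (goal - z)
    return z

fast = endpoint(rate=0.10)
slow = endpoint(rate=0.02)

print(f"fast selection ends at {fast:.2f}")  # ~10.00, at the goal
print(f"slow selection ends at {slow:.2f}")  # also ~10.00, just later
print(f"either way, {abs(fast - true_optimum):.1f} units from the true optimum")
```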

Grandin expands on this idea in the introductory chapter to “Are we pushing animals to their biological limits? Welfare and ethical applications” (Grandin & Whiting 2018, eds). (I don’t know much about the pig case used as illustration, but I can think of a couple of other examples that illustrate the same point.) It ends with this great metaphor about genomic power tools, which I will borrow later:

We must be careful not to repeat the mistakes that were made with conventional breeding where bad traits were linked with desirable traits. One of the best ways to prevent this is for both animal and plant breeders to do what I did in the 1980s and 1990s: I observed many different pigs from many places and the behaviour problems became obvious. This enabled me to compare animals from different lines in the same environment. Today, both animal and plant breeders have ‘genomic power tools’ for changing an organism’s genetics. Power tools are good things, but they must be used carefully because changes can be made more quickly. A circular saw can chop your hand off much more easily than a hand saw. It has to be used with more care.

Kauai field trip 2018

Let’s keep the tradition of delayed travel posts going!

In August last year, I joined Dom Wright, Rie Henriksen, and Robin Abbey-Lee, as part of Dom’s FERALGEN project, on their field work on Kauai. I did some of my dissertation work on the Kauai feral chickens, but I had never seen them live until now. Our collaborator Eben Gering was also on the islands, but the closest we got to each other was Skyping between the islands. It all went smoothly until the end of the trip, when a hurricane came uncomfortably close to the island for a while. Here are some pictures. In time, I promise to blog about the actual research too.

Look! Chickens by the sea, chickens in parking lots, a sign on a sidewalk in central Kapaa telling people not to feed the chickens! Lots of chickens.

I’m not kidding: lots of chickens.

Links

An old Nature News feature from a previous field trip (without me)

My post about our 2016 paper on Kauai feralisation genomics

Various positions

What use is there in keeping a blog if you can’t post your arbitrary idiosyncratic opinions as if you were an authority? Here is a list of opinions about life in the scientific community.

Social media for scientists

People who promote social media for scientists by humblebragging about how they got a glam journal paper because of Twitter should stop. An unknown PhD student from the middle of nowhere must be a lot more likely to get into trouble than to get onto a paper because of Twitter.

Speaking of that, who thinks that writing an angry letter to someone’s boss is the appropriate response to disagreeing with them on Twitter? Please stop with that.

Poster sessions

Poster sessions are a pain. Not only do you suffer the humiliation of not being cool enough to give a talk, you also get to haul a poster tube to the conference. The trouble is that we can’t do away with poster sessions, because they fulfill the important function of letting a lot of people contribute to the conference so that they have a reason to go there.

Now cue comments of this kind: “That’s not true! I’ve had some of my best conference conversations at poster sessions. Maybe you just don’t know how to make a poster …” It is true that I don’t know how to make a good poster. Regardless, my ad hoc hypothesis for why people say things like this is that they’re already known and connected enough to have good conversations anywhere at the conference, and that the poster serves as a signpost for their colleagues to find them.

How can one make a poster session as good as possible? Try to make lots of space so people won’t have to elbow each other. Try to find a room that won’t be incredibly noisy and full of echoes. Try to avoid having some posters hidden behind pillars and in corners.

Also, don’t organize a poster competition unless you also organize a keynote competition.

Theory

There is way way way too little theory in biology education, as far as I can tell. Much like computer programming — a little of which is recognized as a useful skill to have even for empirically minded biologists who are not going to be programming experts — it is very useful to be able to read a paper without skipping the equations, or to tell whether a paper is throwing dust when it states that “[unspecified] Theory predicts …” this or that. But somehow, materials for theory manage to be even more threatening than computer documentation, which is an impressive feat. If you disagree, please post in the comments a reference to an introduction to coalescent theory that is accessible to, say, a biology PhD student who hasn’t taken a probability course in a few years.
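In that spirit, here is the sort of five-line computation that I think makes coalescent theory less threatening; a sketch under idealised assumptions, not a course. For two lineages in a haploid Wright–Fisher population of constant size N, the time back to their common ancestor is geometric with mean N generations, because each generation they pick the same parent with probability 1/N:

```python
# Pairwise coalescence in an idealised haploid Wright-Fisher population:
# two lineages coalesce each generation with probability 1/N, so the
# waiting time is geometric with mean N. N and replicates are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
N = 1_000
replicates = 50_000

times = rng.geometric(p=1.0 / N, size=replicates)

print("simulated mean coalescence time:", times.mean())
print("theoretical expectation (N):    ", N)
```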

Language corrections

That thing when reviewers suggest that a paper be checked by a native English speaker, when what they mean is that it needs language editing, is rude. Find a way of phrasing it that won’t offend that one native English speaker who is invariably on the paper, but doesn’t have an English-sounding enough name and affiliation for you!

Boring meta-post of the year

Really, it’s the second boring meta-post of the year, since I’ve already posted this one.

There were some rumours recently that the Scienceblogs blog network would shut down the site. It appears to still be up, and there are still blogs going there, so I don’t know about that, but it reminded me that Scienceblogs existed. I don’t think I’ve read anything on Scienceblogs in years, but it was one of my inspirations when I started blogging. It’s not that I wanted to be a science writer, but Scienceblogs and the likewise defunct ResearchBlogging RSS feed (Fausto & al 2012) made me figure out that blogging about science was a thing people did.

Slowly, this thing took shape and became a “science community blog”, in the terminology of Saunders & al (2017). That is, this blog is not so much about outreach or popular science, but “aimed at the academic community”. I think of it as part of a conversation about genetics, even if it may be largely a conversation with myself.

So what is the state of the blog now? In September 2016, I decided to try to post once or twice a month (and also to make sure that both posts weren’t pointless filler). This panned out pretty well up until October 2017, when I ran out of steam for a while. Probably unrelated to that, 2017 was also the year my blog traffic suddenly increased by more than a factor of two. I don’t know for sure why, but looking at the numbers for individual posts, it seems the increase came because a lot of R users are looking for tidyverse-related things. If I went by viewer statistics, I would post less about genetics and more about hip R packages.

Instead, in 2018 I will:

  • Attempt to keep up the pace of writing one or two things every month. Some, but not all, of them will be pointless fillers.
  • Hopefully produce a couple of posts about papers, if those things get out of the pipeline eventually. The problem with this, as anyone who writes papers knows, is that once something is out of the pipeline, one has grown so enormously tired of it.
  • Write a few more posts about other scientific papers I read. I’ve heard that there is limited interest in that sort of thing, but I enjoy it, and writing should make me think harder about what I read.

Peerage of Science Reviewer Prize 2017

I won a prize! Hurrah! I’m obviously very happy.

If you want to hear me answer a couple of questions and see the Peerage of Science crew engaged in some amusing video editing, watch the interview.

How did that happen? After being told, about a year ago, to check out the peer review platform Peerage of Science, I decided to keep reviewing manuscripts that showed up and were relevant to my interests. Reading and commenting on unpublished manuscripts is stimulating, and I thought it would help improve my reviewing and, maybe, my writing.

Maybe this is a testament to the power of gamification. I admit that I’ve occasionally checked my profile to see what the score is, even without any reviewer prize in mind.