Selected, causal, and relevant

What is ”function”? In discussions about junk DNA, people often make a distinction between ”selected effects” and ”causal roles”. Doolittle & Brunet (2017) put it like this:

By the first (selected effect, or SE), the function(s) of trait T is that (those) of its effects E that was (were) selected for in previous generations. They explain why T is there. … [A]ny claim for an SE trait has an etiological justification, invoking a history of selection for its current effect.


ENCODE assumed that measurable effects of various kinds—being transcribed, having putative transcription factor binding sites, exhibiting (as chromatin) DNase hypersensitivity or histone modifications, being methylated or interacting three-dimensionally with other sites — are functions prima facie, thus embracing the second sort of definition of function, which philosophers call causal role …

In other words, the ENCODE-style argument goes: a DNA sequence can lack a selected effect while having one or several causal roles. Therefore, junk DNA isn’t dead.

Two things about these ideas:

First, if we want to know the fraction of the genome that is functional, we’d like to talk about positions in some reference genome, but the selected effect definition really only works for alleles. Positions aren’t adaptive, but alleles can be. They use the word ”trait”, but we can think of an allele as a trait (with really simple genetics — its genetic basis is its presence or absence in the genome).

Also, unfortunately for us, selection doesn’t act on alleles in isolation; there is linked selection, where alleles can be affected by selection without causally contributing anything to the adaptive trait. In fact, they may counteract the adaptive trait. It stands to reason that linked variants are not functional in the selected effect sense, but they complicate analysis of recent adaptation.

The authors note that there is a problem with alleles that have not seen positive selection, but only purifying selection (that could happen in constructive neutral evolution, which is when something becomes indispensable through a series of neutral or deleterious substitutions). Imagine a sequence where most mutations are neutral, but deleterious mutations can happen rarely. A realistic example could be the causal mutation for Friedreich’s ataxia: microsatellite repeats in an intron that occasionally expand enough to prevent transcription (Bidichandani et al. 1998, Ohshima et al. 1998; I recently read about it in Nessa Carey’s ”Junk DNA”). In such cases, selection does not preserve any function of the microsatellite. That a thing can break in a dangerous way is not enough to know that it was useful when whole.

Second, these distinctions may be relevant to the junk DNA debate, but for any research into the genetic basis of traits currently or in the future, such as medical genetics or breeding, neither of these perspectives is what we need. The question is not what parts of the genome come from adaptive alleles, nor what parts of the genome have causal roles. The question is what parts of the genome have causal roles that are relevant to the traits we care about.

The same example is relevant here. The Friedreich’s ataxia-associated microsatellite does not seem to fulfil the selected effect criterion. It does, however, have a causal role, and a causal role relevant to human disease at that.

I do not dare to guess whether the set of sequences with causal roles relevant to human health is bigger or smaller than the set of sequences with selected effects. But they are not identical. And I will dare to guess that the relevant set, like the selected effect set, is a small fraction of the genome.


Doolittle, W. Ford, and Tyler D. P. Brunet. (2017) ”On causal roles and selected effects: our genome is mostly junk.” BMC Biology 15: 116.

Nessa Carey ”Junk DNA”

I read two popular science books over Christmas. The other one was in Swedish, so I’ll do that in Swedish.

Nessa Carey’s ”Junk DNA: A Journey Through the Dark Matter of the Genome” is about noncoding DNA in the human genome. ”Coding” in this context means that it serves as a template for proteins. ”Noncoding” is all the rest of the genome, 98% or so.

The book is full of fun molecular genetics: X-inactivation, rather in-depth discussion of telomeres and centromeres, the mechanism of noncoding microsatellite disease mutations, splicing — some of which isn’t often discussed at such length and with such clarity. It gives the reader a good look at how messy genomics can be. It has wonderful metaphors — two baseball bats with magnetic paint and velcro, for example. It even has an amusing account of the ENCODE debate. I wonder if it’s true that evolutionary biologists are more emotional than other biologists?

But the book really suffers from its framing as a story about how noncoding DNA used to be dismissed as pointless and now, surprisingly, turns out to have regulatory functions. This makes me a bit hesitant to recommend it; you may come away from reading it with a lot of neat details, but misled about the big picture. In particular, you may come away believing a false history (all of this was thought to be junk; look how wrong they were in the 70s) and the very dubious view that most of the human genome is important for our health.

On the first page of the book, junk DNA is defined like this:

Anything that doesn’t code for protein will be described as junk, as it originally was in the old days (second half of the twentieth century). Purists will scream, and that’s OK.

We should scream, or at least shake our heads, because this definition leads, for example, to describing ribosomal RNA and transfer RNA as ”junk” (chapter 11), even though both have been known to be noncoding and functional since at least the 60s. I guess the term ”junk” sticks, and that is why the book uses it, and why biologists love to argue about it. You couldn’t call the book something unspeakably dry like ”Noncoding DNA”.

So, this is a fun popular science book about genomics. Read it, but keep in mind that if you want to define ”junk DNA” for any purpose other than to immediately shoot it down, it should be something like this:

For most of the 50 years since Ohno’s article, many of us accepted that most of our genome is ”junk”, by which we would loosely have meant DNA that is neither protein-coding nor involved in regulating the expression of DNA that is. (Doolittle & Brunet 2017)

The point of the term is not to dismiss everything that is not coding for a protein. The point is that the bulk of DNA in the genome is neither protein coding nor regulatory. This is part of why molecular genetics is so tricky: it is hard to find the important parts among all the rest. Researchers have become much better at sifting through the noncoding parts of the genome to find the sequences that are interesting and useful. Think of lots of tricky puzzles being solved, rather than of a paradigm being overthrown by revolution.


Carey, Nessa. (2015) Junk DNA: A Journey Through the Dark Matter of the Genome. Icon Books, London.

Doolittle, W. Ford, and Tyler D. P. Brunet. (2017) ”On causal roles and selected effects: our genome is mostly junk.” BMC Biology 15: 116.

Boring meta-post of the year

Really, it’s the second boring meta-post of the year, since I’ve already posted this one.

There were some rumours recently that the Scienceblogs blog network would shut down the site. It appears to still be up, and there are still blogs going there, so I don’t know about that, but this reminded me that Scienceblogs existed. I don’t think I’ve read anything on Scienceblogs in years, but it was one of my inspirations when I started blogging. It’s not that I wanted to be a science writer, but Scienceblogs and the also now defunct ResearchBlogging RSS feed (Fausto & al 2012) made me figure out that blogging about science was a thing people did.

Slowly, this thing took shape and became a ”science community blog”, in the terminology of Saunders & al (2017). That is, this blog is not so much about outreach or popular science, but ”aimed at the academic community”. I think of it as part of a conversation about genetics, even if it may be largely a conversation with myself.

So what is the state of the blog now? In September 2016, I decided to try to post once or twice a month (and also to make sure that both posts weren’t pointless filler posts). This panned out pretty well up until October 2017, when I ran out of steam for a while. Probably unrelated to that, 2017 was also the year my blog traffic suddenly increased by more than a factor of two. I don’t know for sure why, but looking at the numbers of individual posts, it seems the increase is because a lot of R users are looking for tidyverse-related things. If I went by viewer statistics, I would post less about genetics and more about hip R packages.

Instead, in 2018 I will:

  • Attempt to keep up the pace of writing one or two things every month. Some, but not all, of them will be pointless fillers.
  • Hopefully produce a couple of posts about papers, if those things get out of the pipeline eventually. The problem with this, as anyone who writes papers knows, is that once something is out of the pipeline, one has grown so enormously tired of it.
  • Write a few more posts about other scientific papers I read. I’ve heard that there is limited interest in that sort of thing, but I enjoy it, and writing should make me think harder about what I read.

Using R: reshape2 to tidyr

Tidy data — it’s one of those terms that tend to confuse people, and certainly confused me. It’s Codd’s third normal form, but you can’t go around telling that to people and expect to be understood. One form is ”long”, the other is ”wide”. One form is ”melted”, another ”cast”. One form is ”gathered”, the other ”spread”. To make matters worse, I often botch the explanation and mix up at least two of the terms.

The word is also associated with the tidyverse suite of R packages in a somewhat loose way. But you don’t need to write in a tidyverse-style (including the %>%s and all) to enjoy tidy data.

Hadley Wickham’s definition, though, is straightforward:

In tidy data:
1. Each variable forms a column.
2. Each observation forms a row.
3. Each type of observational unit forms a table.

In practice, I don’t think people always take their data frames all the way to tidy. For example, to make a scatterplot, it is convenient to keep a couple of variables as different columns. The key is that we need to move between different forms rapidly (brain time-rapidly, more than computer time-rapidly, I might add).

And not everything should be organized this way. If you’re a geneticist, genotypes are notoriously inconvenient in normalized form. Better keep that individual by marker matrix.
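To make that concrete, here is a small sketch with made-up genotype data (the marker names and coding are invented for illustration): the individual-by-marker matrix is compact, and normalising it just multiplies the rows without adding information.

```r
library(reshape2)

## Made-up example: 3 individuals, 4 markers, genotypes coded as
## allele counts (0, 1, 2). One row per individual, one column per marker.
genotypes <- data.frame(id = c("ind1", "ind2", "ind3"),
                        marker1 = c(0, 1, 2),
                        marker2 = c(1, 1, 0),
                        marker3 = c(2, 2, 1),
                        marker4 = c(0, 0, 1))

## The normalised form has one row per individual and marker:
## 3 individuals times 4 markers = 12 rows
genotypes_long <- melt(genotypes, id.vars = "id",
                       variable.name = "marker", value.name = "genotype")
nrow(genotypes_long)
```

With thousands of individuals and hundreds of thousands of markers, that row multiplication is what makes the long form impractical.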

The first serious piece of R code I wrote for someone else was a function to turn data into long form for plotting. I suspect plotting is often the gateway to tidy data. The function was like what you’d expect from R code written by a beginner who comes from C-style languages: It reinvented the wheel, and I bet it had nested for loops, a bunch of hard bracket indices, and so on. Then I discovered reshape2.

library(reshape2)

fake_data <- data.frame(id = 1:20,
                        variable1 = runif(20, 0, 1),
                        variable2 = rnorm(20))
melted <- melt(fake_data, id.vars = "id")

The id.vars argument tells the function that the id column is the key, a column that tells us which individual each observation comes from. As the name suggests, id.vars can take multiple column names in a vector.

So this is the data before:

  id   variable1    variable2
1  1 0.938173781  0.852098580
2  2 0.408216233  0.261269134
3  3 0.341325188  1.796235963
4  4 0.958889279 -0.356218000

And this is after. We go from 20 rows to 40: two variables times 20 individuals.

  id  variable       value
1  1 variable1 0.938173781
2  2 variable1 0.408216233
3  3 variable1 0.341325188
4  4 variable1 0.958889279

And now: tidyr. tidyr is the new tidyverse package for rearranging data like this.

The tidyr equivalent of the melt function is called gather. There are two important differences that messed with my mind at first.

The melt and gather functions take the opposite default assumption about what columns should be treated as keys and what columns should be treated as containing values. In melt, as we saw above, we need to list the keys to keep them with each observation. In gather, we need to list the value columns, and the rest will be treated as keys.

Also, the second and third arguments (they would be the first and second if you piped something into the function) are the names that will be used for the key and value columns in the long form data. In this case, to get a data frame that looks exactly the same as the reshape2 output, we stick with ”variable” and ”value”.

Here are five different ways to get the same long form data frame as above:

library(tidyr)

## Column indices
melted <- gather(fake_data, variable, value, 2:3)

## Column names instead of indices
melted <- gather(fake_data, variable, value, variable1, variable2)

## Excluding instead of including
melted <- gather(fake_data, variable, value, -1)

## Excluding using column name
melted <- gather(fake_data, variable, value, -id)

## With pipe (%>% comes from the magrittr package)
melted <- fake_data %>% gather(variable, value, -id)

Usually, this is the transformation we need: wide to long. If we need to go the other way, we can use reshape2’s cast functions or tidyr’s spread. This code recovers the original data frame:

## reshape2
dcast(melted, id ~ variable)

## tidyr
spread(melted, variable, value)
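A quick sanity check of the round trip (a sketch, using the same kind of fake data as above): gathering and then spreading should give back a data frame with the original dimensions and column names.

```r
library(tidyr)

fake_data <- data.frame(id = 1:20,
                        variable1 = runif(20, 0, 1),
                        variable2 = rnorm(20))

## Wide to long, then back to wide
melted <- gather(fake_data, variable, value, -id)
recovered <- spread(melted, variable, value)

## Same shape and column names as before the round trip
identical(dim(recovered), dim(fake_data))
identical(names(recovered), names(fake_data))
```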

Peerage of Science Reviewer Prize 2017

I won a prize! Hurrah! I’m obviously very happy.

If you want to hear me answer a couple of questions and see the Peerage of Science crew engaged in some amusing video editing, look at the interview.

How did that happen? After being told, about a year ago, to check out the peer review platform Peerage of Science, I decided to keep reviewing manuscripts that showed up and were relevant to my interests. Reading and commenting on unpublished manuscripts is stimulating, and I thought it would help improve my reviewing and, maybe, my writing.

Maybe this is a testament to the power of gamification. I admit that I’ve occasionally been checking my profile to see what the score is even without thinking of any reviewer prize.

Griffin & Nesseth ”The science of Orphan Black: the official companion”

I didn’t know that the science fiction series Orphan Black actually had a real Cosima: Cosima Herter, science consultant. After reading this interview and finishing season 5, I realised that there is also a new book I needed to read: The science of Orphan Black: The official companion, by Casey Griffin, a PhD candidate in development, stem cells and regenerative medicine, and science communicator Nina Nesseth, with a foreword by Cosima Herter.

(Warning: This post contains serious spoilers for Orphan Black, and a conceptual spoiler for GATTACA.)

One thing about science fiction struck me when I was watching the last episodes of Orphan Black: Sometimes it makes a lot more sense if we don’t believe everything the fictional scientists tell us. Like real scientists, they may be wrong, or they may be exaggerating. The genetically segregated future of GATTACA becomes no less chilling when you realise that the silly high predictive accuracies claimed are likely just propaganda from an oppressive society. And as you realise that the dying P.T. Westmorland is an imposter, you can break your suspension of disbelief about LIN28A as a fountain of youth gene … Of course, genetics is a little more complicated than that, and he is just another rich dude who wants science to make him live forever.

However, it wouldn’t be Orphan Black if there weren’t a basis in reality: there are several single gene mutations in model animals (e.g. Kenyon & al 1993) that can make them live a lot longer than normal, and LIN28A is involved in ageing (reviewed by Jun-Hao & al 2016). It’s not out of the question that an engineered single gene disruption could substantially increase longevity in humans. Not practical, and not necessarily free of unpleasant side effects, but not out of the question.

Orphan Black was part slightly scary adventure, part festival of ideas about science and society, part character-driven web of relationships, and part, sadly, bricolage of clichés. I found when watching season five that I’d forgotten most of the plots of seasons two through four, and I will probably never make the effort to sit through them again. The first and last seasons make up for it, though.

The series seems to have been set on squeezing as many different biological concepts as possible in there, so the book has to try to do the same. It has not just clones and transgenes, but also gene therapy, stem cells, prion disease, telomeres, dopamine, ancient DNA, stem cells in cosmetics and so on. Two chapters try valiantly to make sense of the clone disease and the cure. It shows that the authors have encyclopedic knowledge of life science, with a special interest in development and stem cells.

But I think they slightly oversell how accurate the show is. Like when Cosima tells Scott to ”run a PCR on these samples, see if there are any genetic markers” and ”can you sequence for cytochrome c?”, and Scott replies ”the barcode gene? that’s the one we use for species differentiation” … That’s what screen science is like. The right words, but not always in the right order.

Cosima and Scott sciencing at university, before everything went pear-shaped. One of the good things about Orphan Black was the scientist characters. There was a ton of them! The good ones, geniuses with sparse resources and self-experimentation; the evil ones, well funded and deeply unethical; and Delphine. This scene is an exception in that it plays the cringe-inducing nerd angle. Cosima and Scott grew beyond this.

There are some scientific oddities. They must be impossible to avoid. For example, the section on epigenetics treats it as a completely new field, sort of missing the history of the subfield. DNA methylation research was going on already in the 1970s (Gitschier 2009). Genomic imprinting, arguably the only solid example of transgenerational epigenetic effects in humans, and X inactivation were both being discovered during the 70s and 80s (reviewed by Ferguson-Smith 2011). The book also makes a hash of genome sequencing, which is a shame but understandable. It would have taken a lot of effort to disentangle how sequencing worked when the fictional clone experiment started and how it got to how it works in season five, when Cosima runs Nanopore sequencing.

The idea of human cloning is evocative. Orphan Black flipped it on its head by making the main clone characters strikingly different. It also cleverly acknowledged that human cloning is a somewhat dated 20th century idea, and that the cutting edge of life science has moved on. But I wish the book had been harder on the premise of the clone experiment:

By cloning the human genome and fostering a set of experimental subjects from birth, the scientists behind the project would gain many insights into the inner workings of the human body, from the relay of genetic code into observable traits (called phenotypes), to the viability of manipulated DNA as a potential therapeutic tool, to the effects of environmental factors on genetics. It’s a scientifically beautiful setup to learn myriad things about ourselves as humans, and the doctors at Dyad were quick to jump at that opportunity. (Chapter 1)

This is the very problem. Of course, sometimes ethically atrocious fictional science would, in principle, generate useful knowledge. But when fictional science is near useless, let’s not pretend that it would produce a lot of valuable knowledge. When it comes to genetics and complex traits like human health, small sample studies of this kind (even with clones) would be utterly useless. Worse than useless: they would likely be biased and misleading.

Researchers still float the idea of a ”baseline”, but in the form of a cell line, where it makes more sense. See the (Human) Genome Project-write (Boeke & al 2016), which suggests the construction of an ideal baseline cell line for understanding human genome function:

Additional pilot projects being considered include … developing a homozygous reference genome bearing the most common pan-human allele (or allele ancestral to a given human population) at each position to develop cells powered by ”baseline” human genomes. Comparison with this baseline will aid in dissecting complex phenotypes, such as disease susceptibility.

In the end, the most important part of science in science fiction isn’t to be factually correct, nor to be a coherent prediction about the future. If Orphan Black has raised interest in science, and I’m sure it has, that is great. And if it has stimulated discussions about the relationship between biological science, culture and ethics, that is even better.

The timeline of when relevant scientific discoveries happened in the real world and in Orphan Black is great. The book has a partial bibliography. The ”Clone Club Q&A” boxes range from silly fun to great open questions.

Orphan Black was probably the best genetics TV show around, and this book is a wonderful companion piece.

Plaque at the Roslin Institute to the sheep that haunts Orphan Black. ”Baa.”


Boeke, JD et al (2016) The genome project-write. Science.

Ferguson-Smith, AC (2011) Genomic imprinting: the emergence of an epigenetic paradigm. Nature reviews Genetics.

Gitschier, J. (2009). On the track of DNA methylation: An interview with Adrian Bird. PLOS Genetics.

Jun-Hao, E. T., Gupta, R. R., & Shyh-Chang, N. (2016). Lin28 and let-7 in the Metabolic Physiology of Aging. Trends in Endocrinology & Metabolism.

Kenyon, C., Chang, J., Gensch, E., Rudner, A., & Tabtiang, R. (1993). A C. elegans mutant that lives twice as long as wild type. Nature, 366(6454), 461-464.