Interpreting genome scans, with wisdom

Eric Fauman is a scientist at Pfizer who also tweets out interpretations of genome-wide association scans.

Background: There is a GWASbot twitter account which posts Manhattan plots with links for various traits from the UK Biobank. The bot was made by the Genetic Epidemiology lab at the Finnish Institute for Molecular Medicine and Harvard. The source of the results is these genome scans (probably; it’s a little bit opaque); the bot also links to heritability and genetic correlation databases. There is also an EnrichrBot that replies with enrichment of chromatin marks (Chen et al. 2013). Fauman comments on some of the genome scans on his Twitter account.

Here are a couple of recent ones:

And here is his list of these threads as a Google Document.

This makes me think of three things, two good, and one bad.

1. The ephemeral nature of genome scans

Isn’t it great that we’re now at a stage where a genome scan can be something to be tweeted or put en masse in a database, instead of being published one paper per scan with lots of boilerplate? The researchers behind the genome scans say as much in their 2017 blog post on the first release:

To further enhance the value of this resource, we have performed a basic association test on ~337,000 unrelated individuals of British ancestry for over 2,000 of the available phenotypes. We’re making these results available for browsing through several portals, including the Global Biobank Engine where they will appear soon. They are also available for download here.

We have decided not to write a scientific article for publication based on these analyses. Rather, we have described the data processing in a detailed blog post linked to the underlying code repositories. The decision to eschew scientific publication for the basic association analysis is rooted in our view that we will continue to work on and analyze these data and, as a result, writing a paper would not reflect the current state of the scientific work we are performing. Our goal here is to make these results available as quickly as possible, for any geneticist, biologist or curious citizen to explore. This is not to suggest that we will not write any papers on these data, but rather only write papers for those activities that involve novel method development or more complex analytic approaches. A univariate genome-wide association analysis is now a relatively well-established activity, and while the scale of this is a bit grander than before, that in and of itself is a relatively perfunctory activity. [emphasis mine] Simply put, let the data be free.

That being said, when I started writing this post, I did at first miss having a paper. It was pretty frustrating to track down a detailed description of the methods: after circling back and forth between the different pages that link to each other, I landed on the original methods post, which is informative and written in a light conversational style. On the internet, one fears that these links may rot and die eventually, while a paper would probably (but not necessarily …) be longer-lasting.

2. Everything is a genome scan, if you’re brave enough

Another thing that the GWAS bot drives home is that you can map anything that you can measure. The results are not always straightforward. On the other hand, even if the trait in question seems a bit silly, the results are not necessarily nonsense either.
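To make concrete what ‘map’ means here: at its core, a univariate genome scan of the kind the bot tweets is one simple regression of the trait on genotype dosage, repeated for every variant. Here is a minimal sketch with made-up toy data; the real pipelines (like the one behind the UK Biobank results) add covariates, quality control and corrections for relatedness and population structure, and nothing below comes from them.

    import numpy as np
    from scipy import stats

    # Toy data: 1,000 individuals, 200 variants coded as dosages 0/1/2,
    # and a phenotype influenced by one of the variants (number 42).
    rng = np.random.default_rng(1)
    genotypes = rng.binomial(2, 0.3, size=(1000, 200))
    phenotype = 0.5 * genotypes[:, 42] + rng.normal(size=1000)

    # The "scan": one linear regression of phenotype on dosage per variant.
    p_values = np.array(
        [stats.linregress(genotypes[:, j], phenotype).pvalue for j in range(200)]
    )
    print("strongest association at variant", p_values.argmin())  # expect 42

Plot the negative log10 of those p-values against genome position and you have a Manhattan plot.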

There is a risk, for geneticists and non-geneticists alike, of reifying traits based on their genetic parameters. If we can measure the heritability coefficient of something, and localise it in the genome with a genome-wide association study, it had better be a real and important thing, right? No. The truth is that geneticists choose traits to measure the same way all researchers choose things to measure. Sometimes for great reasons, with serious validation and considerations about usefulness. Sometimes just because. The GWAS bot also helpfully links to the UK Biobank website that describes the traits.

Look at that bread intake genome scan above. Here, ”bread intake” is the self-reported number of slices of bread eaten per week, as entered by participants on a touch screen questionnaire at a UK Biobank assessment centre. I think we can be sure that this number doesn’t reveal any particularly deep truth about bread and its significance to humanity. It’s a limited, noisy, context-bound number measured, I bet, because once you ask a battery of lifestyle questions, you’ll ask about bread too. Still, the strongest association is at a region that contains olfactory receptor genes and also shows up in two other scans about food (fruit and ice cream). The bread intake scan hits upon a nugget of genetic knowledge about human food preference. A small, local truth, but still.

Now substitute bread intake for some more socially relevant trait, also imperfectly measured.

3. Lost, like tweets in rain

Genome scan interpretation is just that: interpretation. It means pulling together quantitative data, knowledge of biology and previous literature, and then writing an unstructured text, such as a Discussion section or a Twitter thread. That makes interpretations harder to organise, store and build on than the genome scans themselves. Sure, Fauman’s Twitter threads are linked from the above Google Document, and our Discussion sections are available from the library. But they’re spread out in different places, they mix (as they should) evidence with evaluation and speculation, and it’s not like we have a structured vocabulary for describing the genetic mechanisms behind quantitative trait loci, or the levels of evidence for them. Maybe we could have one, with genome-wide association study ontologies and wikis.

You’re not funny, but even if you were

Here is a kind of humour that is all too common in scientific communication; I’ll just show you the caricature, and I think you’ll recognise the shape of it:

Some slogan about how a married man is a slave or a prisoner kneeling and holding a credit card. Some joke where the denouement relies on: the perception that blondes are dumb, male preference for breast size, perceived associations between promiscuity and nationality, or anything involving genital size. Pretty much any one-panel cartoon taken from the Internet.

Should you find any of this in your own talk, here is a message to you: That may be funny to you; that isn’t the problem. To a fair number of the people who are listening, it’s likely to be trite, sad and annoying.

Humour totally has a place in academic speech and writing—probably more than one place. There is the laughter that is there to relieve tension. That is okay sometimes. There are jokes that are obviously put-downs. Those are probably only a good idea in private company, or in public forums where the object of derision is powerful enough that you’re not punching down, but powerless enough to not punch you back. Say, the ever-revered and long dead founder of your field—they may deserve a potshot at their bad manners and despicable views on eugenics.

Then there is that elusive ‘sudden perception of the incongruity between a concept and the real objects which have been thought through it in some relation’ (Schopenhauer, quoted in the Stanford Encyclopedia of Philosophy). When humour is used right, a serious lecturer talking about serious issues has all kinds of opportunities to amuse the listener with incongruities between expectations and how things really are. So please don’t reveal yourself to be predictably trite.

Sequencing-based methods called Dart

Some years ago James Hadfield at Enseqlopedia made a spreadsheet of acronyms for sequencing-based methods with some 50 rows. I can only imagine how long it would be today.

The overloading of acronyms is becoming a bit ridiculous. I recently saw a paper about DART-seq, a method for detecting N6-methyladenosine in RNA (Meyer 2019), and thought, ”wait a minute, isn’t DART-seq a reduced representation genotyping method?” It is, only stylised as DArTseq (seriously). Apparently, it’s also a droplet RNA-sequencing method (Saikia et al. 2018).

What are these methods doing?

  • DArT, diversity array technology, is a way to enrich for a part of a genome. It was originally developed with array technology in mind (Jaccoud et al. 2001). They take some DNA, cut it with restriction enzymes, add adapters and amplify regions close to the cuts. Then they clone the resulting DNA and attach it to a slide, which gives a custom microarray of anonymous fragments from the genome. For the Dart-seq version, it seems they make a sequencing library instead of going on to cloning (Ren et al. 2015). It falls in the same family as GBS and RAD-seq methods.
  • DART-seq, droplet-assisted RNA targeting, builds on Drop-seq, where they put single cells and beads that carry primers into the same oil droplets. As cells lyse, the RNA sticks to the primers. The beads also have a barcode so they can be identified in sequencing. Then they break the emulsion, reverse transcribe the RNA attached to the beads, amplify and sequence. That is cool. However, because they capture the RNA with oligo-dT primers, they sequence from the 3′ end of the RNA. The Dart method adds primers to the beads, so they can target specific RNAs and amplify more of them. It’s the super-high-tech version of gene-specific primers for reverse transcription.
  • DART-seq, deamination adjacent to RNA modification targets, uses a synthetic fusion protein that combines APOBEC1, which deaminates cytidines, with a protein domain from YTHDF2, which binds N6-methyladenosine. If an RNA has N6-methyladenosine, the cytidines close to it (and there usually are cytidines right next to this base modification) will be deaminated to uracil. After RNA-sequencing, this will look like Cs next to As turning into Ts; a toy sketch of that signature follows this list. Neat! It’s a little bit like bisulfite sequencing of methylated DNA, but with RNA.

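As promised, here is a toy sketch of that C-next-to-A signature, just to make the logic concrete. It unrealistically assumes ungapped reads already aligned to a short reference snippet, and all the sequences are made up; a real analysis would work from alignments and handle strand, base quality and coverage.

    from collections import Counter

    def candidate_edits(reference, reads):
        """Count C->T mismatches at reference positions that sit next to an A."""
        counts = Counter()
        for read in reads:
            for i, (ref_base, read_base) in enumerate(zip(reference, read)):
                if ref_base != "C" or read_base != "T":
                    continue
                neighbours = reference[max(i - 1, 0):i] + reference[i + 1:i + 2]
                if "A" in neighbours:  # a C right next to an A, read as T
                    counts[i] += 1
        return counts

    reference = "GGACTAGGACT"  # made-up snippet with Cs following As
    reads = ["GGATTAGGACT", "GGATTAGGATT", "GGACTAGGACT"]
    print(candidate_edits(reference, reads))  # Counter({3: 2, 9: 1})
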
On the one hand: Don’t people search the internet before they name their methods, or do they not care? On the other hand, realistically, the genotyping method Dart and the single cell RNA-seq method Dart are unlikely to show up in the same work. If you can call your groups ”treatment” and ”control” for the purpose of a paper, maybe you can call your method ”Dart”, and no-one gets too confused.

‘Approaches to genetics for livestock research’ at IASH, University of Edinburgh

A couple of weeks ago, I was at a symposium on the history of genetics in animal breeding at the Institute for Advanced Studies in the Humanities, organised by Cheryl Lancaster. There were talks by two geneticists and two historians, and ample time for discussion.

First, geneticists:

Gregor Gorjanc presented the very essence of quantitative genetics: the pedigree-based model. He illustrated this with graphs (in the sense of edges and vertices) and by predicting his own breeding value for height from trait values, and from his personal genomics results.
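For readers who haven’t met it, the pedigree-based model is usually written something like this (my summary of the textbook form, not Gorjanc’s slides):

    \[
    \mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{a} + \mathbf{e},
    \qquad
    \mathbf{a} \sim N(\mathbf{0}, \mathbf{A}\sigma^2_a),
    \qquad
    \mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\sigma^2_e)
    \]

Here y holds the trait values, b the fixed effects, a the breeding values, e the residuals, and A is the relationship matrix built from the pedigree. Predicting your own breeding value from relatives’ trait values amounts to solving the mixed model equations of this model for a.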

Then, yours truly gave this talk: ‘Genomics in animal breeding from the perspectives of matrices and molecules’. Here are the slides (only slightly mangled by Slideshare). This is the talk I was preparing for when I collected the quotes I posted a couple of weeks ago.

I talked about how there are two perspectives on genomics: you can think of genomes either as large matrices of ancestry indicators (the statistical perspective) or as long strings of bases (the sequence perspective). Both are useful, and they give animal breeders and breeding researchers different tools (genomic selection and reference genomes, respectively). I also talked about potential future breeding strategies that use causative variants, and how they’re not about stopping breeding and designing the perfect animal in a lab, but about supplementing genomic selection in different ways.
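To make the matrix perspective a little more tangible, here is a toy sketch (mine, not from the slides) of the kind of object it leads to: a genomic relationship matrix in the style of VanRaden (2008), computed from a matrix of genotype dosages, which is what feeds genomic prediction. All the numbers are made up.

    import numpy as np

    # Made-up genotype matrix: 20 individuals x 500 markers, dosages 0/1/2.
    rng = np.random.default_rng(2)
    freqs = rng.uniform(0.05, 0.5, size=500)
    M = rng.binomial(2, freqs, size=(20, 500)).astype(float)

    p = M.mean(axis=0) / 2                     # estimated allele frequencies
    Z = M - 2 * p                              # centre each marker
    G = Z @ Z.T / (2 * np.sum(p * (1 - p)))    # genomic relationship matrix

    print(G.shape)            # (20, 20): one relatedness estimate per pair
    print(np.diag(G).mean())  # close to 1 for these unrelated, made-up genotypes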

Then, historians:

Cheryl Lancaster told the story of how ABGRO, the Animal Breeding and Genetics Research Organisation in Edinburgh, lost its G. The organisation was split up in the 1950s, separating fundamental genetics research from animal breeding. She said that she had expected this split to be due to scientific, methodological or conceptual differences, but instead found, when going through the archives, that it was all due to personal conflicts. She also got into how the ABGRO researchers justified their work, framing it as ”fundamental research”, and how they aspired to do long-term research projects.

Jim Lowe talked about the pig genome sequencing and mapping efforts: how they were organised differently from the human genome project, and how much they relied on comparisons to the human genome. Here he’s showing a photo of Alan Archibald using the gEVAL genome browser to quality-check the pig genome. He also argued that the infrastructural outcomes of a project like the human genome project, such as making it possible for pig genome scientists to use the human genome for comparisons, are more important and less predictable than usually assumed.

The discussion included comments from some of the people who were there (Chris Haley, Bill Hill), thoughts about the breed concept, and what scientists can learn from history.

What is a breed? Is it a genetic thing, defined by grouping individuals based on their relatedness; a historical thing, based on what people think a certain kind of animal is supposed to look like; or a marketing tool, naming animals that come from a certain system? It is probably a bit of everything. (I talked with Jim Lowe during lunch; he had noticed how I referred to Griffiths & Stotz for gene concepts, but omitted the ”post-genomic” gene concept they actually favour. This is because I didn’t find it useful for understanding how animal breeding researchers think. It is striking how comfortable biologists are with using fuzzy concepts that can’t be defined in a way that covers all corner cases, because biology doesn’t work that way. If the nominal gene concept is broken by trans-splicing, practicing genomicists will probably think of that more as a practical issue with designing gene databases than as something that invalidates talking about genes in principle.)

What would researchers like to learn from history? Probably how to succeed with large research endeavours and how to get funding for them. Can one learn that from history? Maybe not, but there might be lessons about thinking of research as ”basic”, ”fundamental”, ”applied” and so on, and about what the long-term effects of research might be.

Greek in biology

This is a fun essay about biological terms borrowed from or inspired by Greek, written by a group of (I presume) Greek speakers: Iliopoulos et al. (2019), Hypothesis, analysis and synthesis, it’s all Greek to me.

We hope that this contribution will encourage scientists to think about the terminology used in modern science, technology and medicine (Wulff, 2004), and to be more careful when seeking to introduce new words and phrases into our vocabulary.

First, I like how they celebrate the value of knowing more than one language. I feel like bi- and multilingualism in science is most often discussed as a problem: Either we non-native speakers have problems catching up with the native speakers, or we’re burdening them with our poor writing. Here, the authors seem to argue that knowing another language (Greek) helps both your understanding of scientific language, and the style and grace with which you use it.

I think this is the central argument:

Non-Greek speakers will, we are sure, be surprised by the richness and structure of the Greek language, despite its often inept naturalization in English or other languages, and as a result be better able to understand their own areas of science (Snell, 1960; Montgomery, 2004). Our favorite example is the word ‘analysis’: everyone uses it, but few fully understand it. ‘Lysis’ means ‘breaking up’, while ‘ana-‘ means ‘from bottom to top’ but also ‘again/repetitively’: the subtle yet ingenious latter meaning of the term implies that if you break up something once, you might not know how it works; however, if you break up something twice, you must have reconstructed it, so you must understand the inner workings of the system.

I’m sure it is true that some of the use of Greek-inspired terms in scientific English is inept, and would benefit from checking by someone who knows Greek. However, this passage invites two objections.

First, why would anyone think that the Greek language has less richness and structure than English? Then again, if I learned Greek, I might well find it even richer than I expected.

Second, does knowing Greek mean that you have a deeper appreciation for the nuances of a concept like analysis? Maybe ‘analysis’ as understood without those double meanings of the ‘ana-‘ prefix is less exciting, but if it is true that most people don’t know about this subtlety, this can’t be what they mean by ‘analysis’. So, if that etymological understanding isn’t part of how most people use the word, do we really understand it better by learning that story? It sounds like they think that the word is supposed to have a true meaning separate from how it is used, and I’m not sure that is helpful.

So what are some less inept uses of Greek? They like the term ‘epigenomics’, writing that it is being ‘introduced in a thoughtful and meaningful way’. To me, this seems like an unfortunate example, because I can think of few terms in genomics that cause more confusion. ‘Epigenomics’ is the upgraded version of ‘epigenetics’, a word which was, unfortunately, coined at least twice with different meanings. And now, epigenetics is this two-headed beast that feeds on geneticists’ energy as they try to understand what on earth other geneticists are saying.

First, Conrad Waddington glued ‘epigenesis’ and ‘genetics’ together to define epigenetics as ‘the branch of biology that studies the causal interactions between genes and their products which bring the phenotype into being’ (Waddington 1942, quoted in Deans & Maggert 2015). That is, it is what we today might call developmental genetics. Later, David Nanney connected it to gene regulatory mechanisms that are stable through cell division, and we get the modern view of epigenetics as a layer of regulatory mechanisms on top of the DNA sequence. I would be interested to know which of these two intertwined meanings it is that the authors like.

Judging by the affiliations of the authors, the classification of the paper (by the way, how is this ‘computational and systems biology, genetics and genomics’, eLife?), and the citations (16 of 27 to medicine and science journals, a lot of which seem to be similar opinion pieces), this feels like a missed opportunity to connect with language scholarship. I’m no better myself: I’m not a scholar of language, and I haven’t tried to invite one to co-write this blog post with me … But surely there must be scholarship and expertise outside biomedicine relevant to this topic, and language sources richer than an etymological online dictionary?

Finally, the table of new Greek-inspired terms that ‘might be useful’ is a fun thought exercise, and if it serves as inspiration for someone to have a eureka moment about a concept they need to investigate, great (‘… but what is a katagenome, really? Oh, maybe …’). But I think that telling scientists to coin new words is inviting catastrophe. I’d much rather take the lesson that we need fewer tortured new terms borrowed from Greek, not more of them. It’s as if I, driven by the nuance and richness I recognise in my own first language, set out to coin övergenome, undergenome and pågenome.

Neutral citation again

Here is a piece of advice about citation:

Rule 4: Cite transparently, not neutrally

Citing, even in accordance with content, requires context. This is especially important when it happens as part of the article’s argument. Not all citations are a part of an article’s argument. Citations to data, resources, materials, and established methods require less, if any, context. As part of the argument, however, the mere inclusion of a citation, even when in the right spot, does not convey the value of the reference and, accordingly, the rationale for including it. In a recent editorial, the Nature Genetics editors argued against so-called neutral citation. This citation practice, they argue, appears neutral or procedural yet lacks required displays of context of the cited source or rationale for including [11]. Rather, citations should mention assessments of value, worth, relevance, or significance in the context of whether findings support or oppose reported data or conclusions.

This flows from the realisation that citations are political, even though that term is rarely used in this context. Researchers can use them to accurately represent, inflate, or deflate contributions, based on (1) whether they are included and (2) whether their contributions are qualified. Context or rationale can be qualified by using the right verbs. The contribution of a specific reference can be inflated or deflated through the absence of or use of the wrong qualifying term (‘the authors suggest’ versus ‘the authors establish’; ‘this excellent study shows’ versus ‘this pilot study shows’). If intentional, it is a form of deception, rewriting the content of scientific canon. If unintentional, it is the result of sloppy writing. Ask yourself why you are citing prior work and which value you are attributing to it, and whether the answers to these questions are accessible to your readers.

When Nature Genetics had an editorial condemning neutral citation, I took it to be a demand that authors show that they’ve read and thought about the papers they cite.

This piece of advice seems to ask for something different: that authors be honest about their opinions of the works they cite. That is a radical suggestion, because if people were, I believe readers would get offended. That is, if the paper wasn’t held back by offended peer reviewers before it reached any readers. Honestly, as a reviewer, I would probably complain if I saw a value-laden and vacuous statement like ‘this excellent study’ in front of a citation. It would seem to me a rude attempt to tell the reader what to think.

So how are we to cite a study? On the one hand, we can’t just drop the citation in a sentence, but are obliged to ‘mention assessments of value, worth, relevance or significance’. On the other hand, we must make sure that they are ‘qualified by using the right verbs’. And if citation is political, then whether a study ‘suggests’ or ‘establishes’ conclusions is also political.

Disclaimer: I don’t like the 10 simple rules format at all. I find that these pieces belong on someone’s personal blog and not in a scientific journal, given that the evidence for their assertions usually amounts to nothing more than the authors’ own meandering experience … This one is an exception, because Bart Penders does research on how scientists collaborate and communicate (even if he cites no research in this particular part of the text).

Penders B (2018) Ten simple rules for responsible referencing. PLoS Computational Biology