Sequencing-based methods called Dart

Some years ago James Hadfield at Enseqlopedia made a spreadsheet of acronyms for sequencing-based methods with some 50 rows. I can only imagine how long it would be today.

The overloading of acronyms is becoming a bit ridiculous. I recently saw a paper about DART-seq, a method for detecting N6-methyladenosine in RNA (Meyer 2019), and thought, ”wait a minute, isn’t DART-seq a reduced representation genotyping method?” It is, only stylised as DArTseq (seriously). Apparently, it’s also a droplet RNA-sequencing method (Saikia et al. 2018).

What are these methods doing?

  • DArT, diversity array technology, is a way to enrich for a part of a genome. It was originally developed with array technology in mind (Jaccoud et al. 2001). They take some DNA, cut it with restriction enzymes, add adapters and amplify regions close to the cuts. Then they clone the resulting DNA and attach it to a slide, which gives a custom microarray of anonymous fragments from the genome. For the Dart-seq version, it seems they make a sequencing library instead of going on to cloning (Ren et al. 2015). It falls in the same family as GBS and RAD-seq methods.
  • DART-seq, droplet-assisted RNA targeting, builds on Drop-seq, where they put single cells and beads that carry primers into the same oil droplet. As cells lyse, the RNA sticks to the primers. The beads also have a barcode so they can be identified in sequencing. Then they break the emulsion, reverse transcribe the RNA attached to the beads, amplify and sequence. That is cool. However, because they capture the RNA with oligo-dT primers, they sequence from the 3′ end of the RNA. The Dart method adds primers to the beads, so they can target some specific RNAs and amplify more of them. It’s the super-high-tech version of gene-specific primers for reverse transcription.
  • DART-seq, deamination adjacent to RNA modification targets, uses a synthetic fusion protein that combines APOBEC1, which deaminates cytidines, with a protein domain from YTHDF2 that binds N6-methyladenosine. If an RNA has N6-methyladenosine, cytidines close to it (and there usually are some, with this base modification) will be deaminated to uracil. After RNA-sequencing, this will look like Cs next to As turning into Ts (see the toy sketch after this list). Neat! It’s a little bit like bisulfite sequencing of methylated DNA, but with RNA.
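
Here is a toy sketch of that last point, the kind of C-to-T pattern the deamination DART-seq leaves in the data. The sequences and the edited positions are made up for illustration, and a real analysis would of course work on aligned reads and pileups rather than toy strings.

```python
# Toy sketch (not from any of the papers): how C-to-U deamination near
# N6-methyladenosine shows up as C>T mismatches next to A in RNA-seq reads.
# The sequences are invented for illustration.

reference = "GGACTTCAGGACATC"   # hypothetical transcript sequence
read      = "GGACTTTAGGATATC"   # hypothetical read after APOBEC1 editing

def candidate_m6a_neighbourhoods(ref, read, window=1):
    """Return positions of C>T mismatches that lie within `window`
    bases of an A in the reference (candidate m6A neighbourhoods)."""
    sites = []
    for i, (r, q) in enumerate(zip(ref, read)):
        if r == "C" and q == "T":
            neighbourhood = ref[max(0, i - window):i + window + 1]
            if "A" in neighbourhood:
                sites.append(i)
    return sites

print(candidate_m6a_neighbourhoods(reference, read))  # [6, 11]
```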

On the one hand: Don’t people search the internet before they name their methods, or do they not care? On the other hand, realistically, the genotyping method Dart and the single cell RNA-seq method Dart are unlikely to show up in the same work. If you can call your groups ”treatment” and ”control” for the purpose of a paper, maybe you can call your method ”Dart”, and no-one gets too confused.

Genes do not form networks

As a wide-eyed PhD student, I read a lot of papers about gene expression networks and was mightily impressed by their power. You can see where this is going, can’t you?

Someone on Twitter talked about their doubts about gene networks: how networks ”must” be how biology works, but that they weren’t sure network methods had actually helped genetics that much; how there are compelling annotation term enrichments and individual results that ”make sense”, but not many hard predictions. I promise I’m not trying to gossip about them behind their back, but I couldn’t find the tweets again. If you think about it, though, I don’t think genes have to form networks at all; quite the opposite. But there are probably reasons why the network idea is so attractive.

(Edit: Here is the tweet I was talking about by Jeffrey Barrett! Thanks to Guillaume Devailly for pointing me to it.)

First, network representations are handy! There are all kinds of things about genes that can be represented as networks: coexpression, protein interactions, being mentioned in the same PubMed abstract, working on the same substrate, being annotated with the same GO term, being linked in a database such as STRING, which tries to combine all kinds of protein–protein interactions understood broadly (Szklarczyk et al. 2018), differential coexpression, co-differential expression (Hudson, Reverter & Dalrymple 2009) … And there are all kinds of ways of building networks between genes: correlations, mutual information, Bayesian networks, structural equation models … Sometimes one of them will make an interesting biological phenomenon stand out and become striking to the eye, or to one of the many ways of clustering nodes and calculating their centrality.
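
To make the first point concrete, here is a minimal sketch of perhaps the simplest such construction: a coexpression ”network” built by thresholding correlations. The expression values are simulated, so any edges that appear are pure noise; real pipelines (WGCNA and friends) are considerably more elaborate.

```python
# Minimal sketch: genes are nodes, and an edge means the absolute
# correlation across samples exceeds a threshold. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples = 20, 50
expression = rng.normal(size=(n_genes, n_samples))   # rows = genes

corr = np.corrcoef(expression)            # gene-by-gene correlation matrix
threshold = 0.4
adjacency = (np.abs(corr) > threshold) & ~np.eye(n_genes, dtype=bool)

edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if adjacency[i, j]]
print(f"{len(edges)} edges with |r| > {threshold}")
```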

Second, networks are appealing. Brigitte Nerlich has a great blog post, ”On books, circuits and life”, about metaphors for gene editing (the book of life, writing, erasing, cutting and editing) and systems biology (genetic engineering, circuits, wiring, the genetic program). Maybe the view of gene networks fits into the latter category, if we imagine that the extremely dated analogy with cybernetics (Peluffo 2015) has been replaced by the only slightly dated idea of a universal network science. After the internet and Albert, Jeong & Barabási (1999), what could be more apt than understanding genes as forming networks?

I think it’s fair to say that for genes to form networks, the system needs to be reasonably well described by a graph of nodes and edges. If you look at systems of genes that are really well understood, like the gap gene ”network”, you will see that they do not look like this at all. Look at Fig 3 in Jaeger (2011). Here, there is dynamic and spatial information not captured by the network topology that needs to be overlaid for the network view to make sense.

Or look at insulin signalling, in Fig 1 of Nyman et al (2014). Here, there are modified versions of proteins, non-gene products such as glucose and the plasma membrane, and again, dynamics, including both RNA and protein synthesis themselves. There is no justification for assuming that any of that will be captured by any topology or any weighting of genes with edges between them.

We are free to name biological processes networks if we want to; there’s nothing wrong with calling a certain process and group of related genes ”the gap gene network”. And we are free to use any network representation we want when it is useful or visually pleasing, if that’s what we’re going for. However, genes do not actually form networks.

Literature

Szklarczyk, D, et al. (2018) STRING v11: protein–protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic acids research.

Hudson, N. J., Reverter, A., & Dalrymple, B. P. (2009). A differential wiring analysis of expression data correctly identifies the gene containing the causal mutation. PLoS computational biology, 5(5), e1000382.

Peluffo, A. E. (2015). The ”Genetic Program”: behind the genesis of an influential metaphor. Genetics, 200(3), 685-696.

Albert, R., Jeong, H., & Barabási, A. L. (1999). Diameter of the world-wide web. Nature, 401(6749), 130.

Jaeger, J. (2011). The gap gene network. Cellular and Molecular Life Sciences, 68(2), 243-274.

Nyman, E., Rajan, M. R., Fagerholm, S., Brännmark, C., Cedersund, G., & Strålfors, P. (2014). A single mechanism can explain network-wide insulin resistance in adipocytes from obese patients with type 2 diabetes. Journal of Biological Chemistry, 289(48), 33215-33230.

Journal club: ”Template plasmid integration in germline genome-edited cattle”

(This time it’s not just a Journal Club of One, because this post is based on a presentation given at the Hickey group journal club.)

The backstory goes like this: Polled cattle lack horns, and it would be safer and more convenient if more cattle were born polled. Unfortunately, not all breeds have a lot of polled cattle, and that means that breeding hornless cattle is difficult. Gene editing could help (see Bastiaansen et al. (2018) for a model).

In 2013, Tan et al. reported taking cells from horned cattle and editing them to carry the polled allele. In 2016, Carlson et al. cloned bulls based on a couple of these cell lines. The plan was to use the bulls, now grown, to breed polled cattle in Brazil (Molteni 2019). But a few weeks ago, FDA scientists (Norris et al 2019) posted a preprint that found inadvertent plasmid insertion in the bulls, using the public sequence data from 2016. Recombinetics, the company making the edited bulls, conceded that they’d missed the insertion.

”We weren’t looking for plasmid integrations,” says Tad Sonstegard, CEO of Recombinetics’ agriculture subsidiary, Acceligen, which was running the research with a Brazilian consulting partner. ”We should have.”

Oops.

For context: To gene edit a cell, one needs to bring both the editing machinery (proteins in the case of TALENs, the method used here; proteins and RNA in the case of CRISPR) and the template DNA into the cell. The template DNA is the DNA you want to put in instead of the piece that you’re changing. There are different ways to get the components into the cell. In this case, the template was delivered as part of a plasmid, which is a circular DNA molecule of bacterial origin.

The idea is that the editing machinery should find a specific place in the genome (where the variant that causes polledness is located), make a cut in the DNA, and the cell, in its efforts to repair the cut, will incorporate the template. Crucially, it’s supposed to incorporate only the template, and not the rest of the plasmid. But in this case, the plasmid DNA snuck in too, and became part of the edited chromosome. Biological accidents happen.

How did they miss that, and how did the FDA team detect it? Both the 2016 and the 2019 papers are short letters where a lot of the action is relegated to the supplementary materials. Here are pertinent excerpts from Carlson et al. (2016):

A first PCR assay was performed using (btHP-F1: 5’- GAAGGCGGCACTATCTTGATGGAA; btHP-R2- 5’- GGCAGAGATGTTGGTCTTGGGTGT) … The PCR creates a 591 bp product for Pc compared to the 389 bp product from the horned allele.

Secondly, clones were analyzed by PCR using the flanking F1 and R1 primers (HP1748-F1- 5’- GGGCAAGTTGCTCAGCTGTTTTTG; HP1594_1748-R1- 5’-TCCGCATGGTTTAGCAGGATTCA) … The PCR creates a 1,748 bp product for Pc compared to the 1,546 bp product from the horned allele.

All PCR products were TOPO cloned and sequenced.

Thus, they checked that the animals were homozygous for the polled allele (called ”Pc”) by amplifying two diagnostic regions and sequencing the products to check the edit. This shows that the target DNA is there.

Then, they used whole-genome short read sequencing to check for off-target edits:

Samples were sequenced to an average 20X coverage on the Illumina HiSeq 2500 high output mode with paired end 125 bp reads were compared to the bovine reference sequence (UMD3.1).

Structural variations were called using CLC probabilistic variant detection tools, and those with >7 reads were further considered even though this coverage provides only a 27.5% probability of accurately detecting heterozygosity.

Upon indel calls for the original non-edited cell lines and 2 of the edited animals, we screened for de novo indels in edited animal RCI-001, which are not in the progenitor cell-line, 2120.

We then applied PROGNOS4 with reference bovine genome build UMD3.1 to compute all potential off-targets likely caused by the TALENs pair.

For all matching sequences computed, we extract their corresponding information for comparison with de novo indels of RCI-001 and RCI-002. BEDTools was adopted to find de novo indels within 20 bp distance of predicted potential targets for the edited animal.

Only our intended edit mapped to within 10 bp of any of the identified degenerate targets, revealing that our animals are free of off-target events and further supporting the high specificity of TALENs, particularly for this locus.

That means they sequenced the animals’ genomes in short fragments, puzzled them together by aligning them to the cow reference genome, and looked for insertions and deletions in regions that look similar enough that they might also be targeted by the TALENs and cut. And because they didn’t find any insertions or deletions close to these potential off-target sites, they concluded that the edits were fine.
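
In other words, the logic of that last step is a simple proximity check. Here is a sketch of the idea in plain Python rather than BEDTools; the coordinates are invented, and the real analysis used PROGNOS4 predictions and CLC indel calls against UMD3.1.

```python
# Sketch of the kind of proximity check the BEDTools step performs:
# keep de novo indel calls that fall within 20 bp of a predicted
# off-target site. All coordinates below are hypothetical.

def near(indel, site, distance=20):
    """True if two (chromosome, position) calls are on the same
    chromosome and within `distance` bp of each other."""
    return indel[0] == site[0] and abs(indel[1] - site[1]) <= distance

de_novo_indels = [("chr1", 2004850), ("chr5", 11230400)]     # hypothetical
off_target_sites = [("chr1", 2004859), ("chr12", 77001200)]  # hypothetical

hits = [(i, s) for i in de_novo_indels for s in off_target_sites if near(i, s)]
print(hits)  # [(('chr1', 2004850), ('chr1', 2004859))]
```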

The problem is that short read sequencing is notoriously bad at detecting larger insertions and deletions, especially of sequences that are not in the reference genome. In this case, the plasmid is not normally part of a cattle genome, and thus not in the reference genome. That means that short reads deriving from the inserted plasmid sequence would probably not be aligned anywhere, but thrown away in the alignment process. The irony is that with short reads, the bigger something is, the harder it is to detect. If you want to see a plasmid insertion, you have to make special efforts to look for it.

Tan et al. (2013) were aware of the risk of plasmid insertion, though, at least where the plasmid delivering the TALENs was concerned. Here is a quote:

In addition, after finding that one pair of TALENs delivered as mRNA had similar activity as plasmid DNA (SI Appendix, Fig. S2), we chose to deliver TALENs as mRNA to eliminate the possible genomic integration of TALEN expression plasmids. (my emphasis)

As a sidenote, the variant calling method used to look for off-target effects (CLC Probabilistic variant detection) doesn’t even seem that well suited to the task. The manual for the software says:

The size of insertions and deletions that can be found depend on how the reads are mapped: Only indels that are spanned by reads will be detected. This means that the reads have to align both before and after the indel. In order to detect larger insertions and deletions, please use the InDels and Structural Variation tool instead.

The CLC InDels and Structural Variation tool looks at the unaligned (soft-clipped) ends of short sequence reads, which is one way to get at structural variation with short reads. However, it might not have worked either; structural variant calling is a hard problem, and the tool does not seem to be built for finding novel insertions like this one.

What did Norris et al. (2019) do differently? They took the published sequence data and aligned it to a cattle reference genome with the plasmid sequence added. Then they loaded the alignments into the trusty Integrative Genomics Viewer and manually looked for reads aligning to the plasmid and reads supporting junctions between plasmid, template DNA and genome. This bespoke analysis is targeted at finding plasmid insertions. The FDA authors must have gone ”nope, we don’t buy this” and decided to look for the plasmid.
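
A sketch of what that kind of check might look like, assuming the plasmid has been added to the reference as a contig called ”plasmid” and the alignments are in a sorted, indexed BAM file. This is my guess at the spirit of the analysis, not their actual code; the file name and contig name are assumptions, and it requires pysam.

```python
# Count primary reads that map decently to the plasmid contig. Many such
# reads in the edited calves, but only stray ones in the unedited cell
# line, would point to the plasmid being present in the edited genomes.
import pysam

def plasmid_read_count(bam_path, contig="plasmid", min_mapq=20):
    covered = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(contig):          # needs a BAM index
            if read.is_secondary or read.is_supplementary:
                continue
            if read.mapping_quality >= min_mapq:
                covered += 1
    return covered

print(plasmid_read_count("edited_calf.bam"))     # hypothetical file name
```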

Here is what they claim happened (Fig 1): The template DNA is there, as evidenced by the PCR genotyping, but it inserted twice, with the rest of the plasmid in-between.


Here is the evidence (Supplementary figs 1 and 2): These are two annotated screenshots from IGV. The first shows alignments of reads from the calves and the unedited cell lines to the plasmid sequence. In the unedited cells, there are only stray reads, probably misplaced, but in the edited calves, there are reads covering the plasmid throughout. Unless the samples were contaminated in some other way, this shows that the plasmid is somewhere in their genomes.


Where is it then? This second supplementary figure shows alignments to expected junctions: where template DNA and genome are supposed to join. The colourful letters are mismatches, showing where unexpected DNA shows up. This is the evidence for where the plasmid integrated and what kind of complex rearrangement of template, plasmid and genome happened at the cut site. This must have been found by looking at alignments, hypothesising an insertion, and looking for the junctions supporting it.


Why didn’t the PCR and targeted sequencing find this? As this third supplementary figure shows, the PCRs used could, theoretically, produce longer products including plasmid sequence. But they are way too long for regular PCR.


Looking at this picture, I wonder if there were a few attempts to make a primer pair that went from the insert into the downstream sequence, which failed and got blamed on bad primer design or PCR conditions.

In summary, the 2019 preprint finds indirect evidence of the plasmid insertion by looking hard at short read alignments. Targeted sequencing or long read sequencing could give better evidence by observing the whole insertion. Recombinetics have acknowledged the problem, which makes me think that they’ve gone back to the DNA samples and checked.

Where does that leave us with quality control of gene editing? There are three kinds of problems to worry about:

  • Off-target edits in similar places in other parts of the genome; this seems to be what people used to worry about the most, and what Carlson et al. checked for
  • Complex rearrangements around the cut site (probably due to repeated cutting); this became a big concern after Kosicki et al. (2018), and should apply both to on- and off-target cuts
  • Insertion of plasmid or mutated target; this is what happened here

The ways people check gene edits (targeted Sanger sequencing and short read sequencing) don’t detect any of them particularly well, at least not without bespoke analysis. Maybe the kind of analysis that Norris et al. do could be automated to some extent, but currently, the state of the art seems to be to look closely at alignments by hand. If I were reviewing the preprint, I would have liked the manuscript to give a fuller description of how they arrived at this picture, and exactly what the evidence for this particular complex rearrangement is. As it stands, that part is a bit hard to follow.

Finally, is this embarrassing? On the one hand, this is important stuff, plasmid integration is a known problem, so the original researchers probably should have looked harder for it. On the other hand, the cell lines were edited and the clones born before much of the discussion and research on off-target edits and on-target rearrangements that came out of CRISPR being widely applied, and when long read sequencing was much less common. Maybe it was easier to think that the short read off-target analysis was enough then. In any case, we need a solid way to quality check edits.

Literature

Molteni M. (2019) Brazil’s plan for gene-edited cows got scrapped – here’s why. Wired.

Carlson DF, et al. (2016) Production of hornless dairy cattle from genome-edited cell lines. Nature Biotechnology.

Norris AL, et al. (2019) Template plasmid integration in germline genome-edited cattle. bioRxiv.

Tan W, et al. (2013) Efficient nonmeiotic allele introgression in livestock using custom endonucleases. Proceedings of the National Academy of Sciences.

Bastiaansen JWM, et al. (2018) The impact of genome editing on the introduction of monogenic traits in livestock. Genetics Selection Evolution.

Kosicki M, Tomberg K & Bradley A. (2018) Repair of double-strand breaks induced by CRISPR–Cas9 leads to large deletions and complex rearrangements. Nature Biotechnology.

On DNA Day: DNA metaphors

There are different metaphors for deoxyribonucleic acid and what it means to us. DNA can be a blueprint, a recipe, a program, or writing.

It is almost impossible to say anything about molecular genetics without metaphors. Quantitative genetics is a little easier, at least until the statistical models and calculations come out. Quantitative genetics deals with things everyone can see in everyday life, like family resemblance and kinship. Molecular genetics deals with things that are, admittedly, part of the public consciousness, but that are not visible around us.

But metaphors can be unhelpful and lead thinking astray. The image of DNA as a blueprint of the organism can seem too simple and suggest genetic determinism. Now, even though I’m supposed to pass for an engineer, I don’t know much about technical drawings. In several ways, the metaphor isn’t so bad: a drawing represents what is to be built, in a specialised visual language and in a lower dimension. A house is in 3D, but a drawing is in 2D. Proteins are three-dimensional; the genetic code describes them in one dimension. But it may be true that the word ”drawing” (or ”blueprint”) brings to mind something too exact and too pictorial.

An alternative is that DNA is a recipe (many have suggested this, among them Richard Dawkins in The Blind Watchmaker, 1986). The recipe has the advantage of describing a process, with both ingredients and instructions. That is a bit like the organism’s development from fertilised egg to adult. ”Add maternal bicoid at one end and nanos at the other end; let the proteins mix freely”, and so on (Gilbert 2000). Another advantage is that it naturally reminds us that DNA is not everything. The same recipe, with local differences in ingredients and improvisations by the cook, turns into different dishes. On the other hand, the recipe exaggerates what is in the DNA. Which genes are expressed where and when is an interplay between the DNA and the proteins and RNA that are already present in a cell at a given time.

Or DNA is a program. Programs are also instructions, so on that point the metaphor has the same advantages and disadvantages as the recipe. On the other hand, programs are abstract and free from concrete ingredients and associations with cooking. A bit like a blueprint, it sounds mechanical and exact. It clearly also matters what DNA is supposed to be a blueprint of or a recipe for. It is one thing to call DNA a blueprint of proteins and another to call it a recipe for an organism.

Finally, there are metaphors written into the terminology itself. When geneticists talk about DNA, how it is passed on and used, we talk about it as written language. It is called copying when DNA is reproduced before cells divide. It is called transcription, that is, copying but with a connotation of transfer to another form or medium, when RNA is produced from DNA. It is called translation when RNA in turn serves as the template for protein synthesis. On top of all that, we write DNA with an alphabet of four letters: A, C, T, G. It is an image so fitting that it is almost true.

(On 25 April 1953, the papers presenting the structure of the DNA molecule were published. Hence DNA Day. Previous DNA Day posts: Genetik utan dna (2016), Gener, orsak och verkan (2015), På dna-dagen (2014).)

Teaching: Molecular genetics

NBIC45 is discontinued! Long live NBIC52! The latest version of the molecular genetics course has just started. I wasn’t actually supposed to teach anything this year, but I’m stepping in as a stand-in beard. So the teaching line-up changes a little less than originally planned.

Test tube racks, tubes, solutions, pipettes, and some hepatica flowers that have nothing to do with the matter.

The lab sessions, where you can meet me, are about enjoyable things like genotyping with the polymerase chain reaction and transforming bacteria with plasmids. And interpreting not always entirely clear bands on gels, as well as queueing for the centrifuge. I think it’s rather fun. Queueing for the centrifuge may not be the most fun thing in the world. But anyone who has worked in a molecular lab can attest that it is, at least, realistic.

I have written (and tweeted) a bit about the contents of the labs before.

On DNA Day: Genes, cause and effect

”DNA, the molecule of life” … Sure, DNA is an important and good-looking biomolecule. But why wouldn’t a complex carbohydrate, a protein or a membrane lipid deserve that name?

There are two perspectives on genetics that I tend to go on about. On the one hand: genetics as the study of what molecular genes do and what functions they have. On the other hand: genetics as the study of heritable differences between individuals, and by extension populations and species. Genetics is sometimes described as a science about ”codes” and ”information”. There is something to that, but I think it’s wise to be a little careful with the metaphors. I suspect that codes and information are not things we just find lying around in nature, so to speak, but human interpretations.

Yes, some DNA sequences are transcribed into mRNA that codes for proteins. Here, ”codes for” means that the sequence has triplets of bases that are complementary to tRNA molecules carrying amino acids. Other sequences correspond to RNA molecules with some other function. But the causal factors behind a particular RNA being expressed at a particular time are not in the DNA; they are somewhere else. DNA is part of the mechanism, but so is the RNA polymerase that transcribes it, the spliceosome that puts together the mature mRNA, the enzyme systems that make the nucleotides, and so on, and so on. The process is set off by what happens in the organism’s environment, by internal processes involving many parts of the cell or entirely different parts of the body, and so on. In this sense, the nucleus with its DNA is an organelle like any other.

But! There is one context where it is justified to talk about genetic causes, namely heritable differences between individuals. One can find (and in fact construct) examples of individuals where dramatic differences in traits such as appearance and behaviour are due to a difference in DNA sequence: a genetic variant, or ”gene” in the classical sense. Of course, there may be other kinds of heredity that do not depend on DNA, and in that case they should be counted here too. But most of the things inside cells that can make a difference to an organism’s traits (proteins, membrane lipids, carbohydrates, small organic molecules and so on) are reset between generations, when germ cells are formed and development, so to speak, starts over each generation. DNA, however, is inherited, with its ”information”, if you like.

(On 25 April 1953, the papers presenting the structure of the DNA molecule were published. Hence DNA Day. My DNA Day post from last year: På dna-dagen.)

Morning coffee: cost per genome

I recently heard this thing referred to as ”the most overused slide in genomics” (David Klevebring). It might be: what it shows is some estimate of the cost of sequencing a human genome over time, and how it plummets around 2008. Before that, the curve reflects Sanger sequencing; after that, the costs are for second-generation sequencing (454, Illumina and SOLiD).


The source is the US National Human Genome Research Institute, and they’ve put some thought into how to estimate costs so that machines, reagents, analysis and the people doing the work are included, and so that the different platforms are somewhat comparable. One must first point out that the downstream analysis needed to make any sense of the data (assembly and variant calling) isn’t included. But the most important thing that this graph hides, even if the cost estimates were perfect, is that to ”sequence a genome” meant something completely different in 2001 than in 2015. (Well, with third-generation sequencers that give long reads coming up, the old meaning might come back.)

For data since January 2008 (representing data generated using ‘second-generation’ sequencing platforms), the ”Cost per Genome” graph reflects projects involving the ‘re-sequencing’ of the human genome, where an available reference human genome sequence is available to serve as a backbone for downstream data analyses.

The human genome project was of course about sequencing and assembling the genome into high quality sequences. Very few of the millions of human genomes resequenced since are anywhere close. As people in the sequencing loop know, resequencing with short reads doesn’t give you a genome sequence (and neither does trying to assemble a messy eukaryote genome with short reads only). It gives you a list of variants compared to the reference sequence. The usual short read business has no way of detecting anything but single nucleotide variants and small indels. (And the latter depends … Also, you can detect copy number variants, but large scale structural variants are mostly off the table.) Of course, you can use these edits to reconstruct a consensus sequence from the reference, but it would be a total lie.
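
Here is a toy illustration of what such a ”reconstruction” amounts to: the reference with a list of substitutions applied, and nothing that the short reads could not place. The sequence and variant positions are made up.

```python
# Toy sketch: a resequencing "genome" is really the reference plus a
# list of small edits; anything unplaceable (like a novel insertion)
# simply never makes it into this sequence.

reference = "ACGTACGTACGT"
snvs = {3: "A", 7: "C"}   # hypothetical 0-based position -> alternative base

consensus = "".join(snvs.get(i, base) for i, base in enumerate(reference))
print(consensus)  # ACGAACGCACGT: the reference with two substitutions
```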

Again, none of this is news for people who deal with sequencing, and I’m not knocking second-generation sequencing. It’s very useful and has made a lot of new things possible. It’s just something I think about every time I see that slide.