Interpreting genome scans, with wisdom

Eric Fauman is a scientist at Pfizer who also tweets out interpretations of genome-wide association scans.

Background: There is a GWASbot twitter account which posts Manhattan plots with links for various traits from the UK Biobank. The bot was made by the Genetic Epidemiology lab at the Finnish Institute for Molecular Medicine and Harvard. The source of the results is these genome scans (probably; it’s a little bit opaque); the bot also links to heritability and genetic correlation databases. There is also an EnrichrBot that replies with enrichment of chromatin marks (Chen et al. 2013). Fauman comments on some of the genome scans on his Twitter account.

Here are a couple of recent ones:

And here is his list of these threads as a Google Document.

This makes me think of three things: two good and one bad.

1. The ephemeral nature of genome scans

Isn’t it great that we’re now at a stage where a genome scan can be something to be tweeted or put en masse in a database, instead of being published one paper per scan, with lots of boilerplate? The researchers behind the genome scans say as much in their 2017 blog post on the first release:

To further enhance the value of this resource, we have performed a basic association test on ~337,000 unrelated individuals of British ancestry for over 2,000 of the available phenotypes. We’re making these results available for browsing through several portals, including the Global Biobank Engine where they will appear soon. They are also available for download here.

We have decided not to write a scientific article for publication based on these analyses. Rather, we have described the data processing in a detailed blog post linked to the underlying code repositories. The decision to eschew scientific publication for the basic association analysis is rooted in our view that we will continue to work on and analyze these data and, as a result, writing a paper would not reflect the current state of the scientific work we are performing. Our goal here is to make these results available as quickly as possible, for any geneticist, biologist or curious citizen to explore. This is not to suggest that we will not write any papers on these data, but rather only write papers for those activities that involve novel method development or more complex analytic approaches. A univariate genome-wide association analysis is now a relatively well-established activity, and while the scale of this is a bit grander than before, that in and of itself is a relatively perfunctory activity. [emphasis mine] Simply put, let the data be free.

That being said, when starting to write this post, I did at first miss having a paper. Finding a detailed description of the methods was pretty frustrating: after circling back and forth between the different pages that link to each other, I landed on the original methods post, which is informative and written in a light conversational style. On the internet, one fears that these links may rot and die eventually, while a paper would probably (but not necessarily …) be longer-lasting.

2. Everything is a genome scan, if you’re brave enough

Another thing that the GWAS bot drives home is that you can map anything that you can measure. The results are not always straightforward. On the other hand, even if the trait in question seems a bit silly, the results are not necessarily nonsense either.

There is a risk, for geneticists and non-geneticists alike, of reifying traits based on their genetic parameters. If we can measure the heritability coefficient of something, and localise it in the genome with a genome-wide association study, it had better be a real and important thing, right? No. The truth is that geneticists choose traits to measure the same way all researchers choose things to measure. Sometimes for great reasons, with serious validation and considerations about usefulness. Sometimes just because. The GWAS bot also helpfully links to the UK Biobank website that describes the traits.

Look at that bread intake genome scan above. Here, ”bread intake” is the self-reported number of slices of bread eaten per week, as entered by participants on a touch screen questionnaire at a UK Biobank assessment centre. I think we can be sure that this number doesn’t reveal any particularly deep truth about bread and its significance to humanity. It’s a limited, noisy, context-bound number measured, I bet, because once you ask a battery of lifestyle questions, you’ll ask about bread too. Still, the strongest association is at a region that contains olfactory receptor genes and also shows up in two other scans about food (fruit and ice cream). The bread intake scan hits upon a nugget of genetic knowledge about human food preference. A small, local truth, but still.

Now replace bread intake with some more socially relevant trait, also imperfectly measured.

3. Lost, like tweets in rain

Genome scan interpretation is just that: interpretation. It means pulling together quantitative data, knowledge of biology and previous literature, and writing an unstructured text, such as a Discussion section or a Twitter thread. This makes interpretations harder to organise, store and build on than the genome scans themselves. Sure, Fauman’s Twitter threads are linked from the above Google Document, and our Discussion sections are available from the library. But they’re spread out in different places, they mix (as they should) evidence with evaluation and speculation, and it’s not like we have a structured vocabulary for describing the genetic mechanisms of quantitative trait loci, and the levels of evidence for them. Maybe we could, with genome-wide association study ontologies and wikis.

You’re not funny, but even if you were

Here is a kind of humour that is all too common in scientific communication; I’ll just show you the caricature, and I think you’ll recognise the shape of it:

Some slogan about how a married man is a slave or a prisoner kneeling and holding a credit card. Some joke where the denouement relies on: the perception that blondes are dumb, male preference for breast size, perceived associations between promiscuity and nationality, or anything involving genital size. Pretty much any one-panel cartoon taken from the Internet.

Should you find any of this in your own talk, here is a message to you: it may well be funny to you, but that isn’t the problem. To a fair number of the people who are listening, it’s likely to be trite, sad and annoying.

Humour totally has a place in academic speech and writing—probably more than one place. There is the laughter that is there to relieve tension. That is okay sometimes. There are jokes that are obviously put-downs. Those are probably only a good idea in private company, or in public forums where the object of derision is powerful enough that you’re not punching down, but powerless enough to not punch you back. Say, the ever-revered and long dead founder of your field—they may deserve a potshot at their bad manners and despicable views on eugenics.

Then there is that elusive ‘sudden perception of the incongruity between a concept and the real objects which have been thought through it in some relation’ (Schopenhauer, quoted in the Stanford Encyclopedia of Philosophy). When humour is used right, a serious lecturer talking about serious issues has all kinds of opportunities to amuse the listener with incongruities between expectations and how things really are. So please don’t reveal yourself to be predictably trite.

Using R: Installing GenABEL and RepeatABEL

GenABEL is an R package for performing genome-wide association studies with linear mixed models and a genomic relationship matrix. RepeatABEL is a package for the same kind of genome-wide association studies when they also involve repeated measurements.

Unfortunately, GenABEL has not been available on CRAN since 2018, because of failed checks that were not fixed. (The check results are archived on CRAN, but they mean very little to me.) As a consequence, RepeatABEL is also missing.

Fair enough, the GenABEL creators probably aren’t paid to maintain old software. It is a bit tragic, however, to think that in 2016, GenABEL was supposed to be the core of a community project to develop a suite of genomic analysis packages, two years before it was taken off CRAN:

The original publication of the GenABEL package for statistical analysis of genotype data has led to the evolution of a community which we now call the GenABEL project, which brings together scientists, software developers and end users with the central goal of making statistical genomics work by openly developing and subsequently implementing statistical models into user-friendly software.

The project has benefited from an open development model, facilitating communication and code sharing between the parties involved. The use of a free software licence for the tools in the GenABEL suite promotes quick uptake and widespread dissemination of new methodologies and tools. Moreover, public access to the source code is an important ingredient for active participation by people from outside the core development team and is paramount for reproducible research. Feedback from end users is actively encouraged through a web forum, which steadily grows into a knowledge base with a multitude of answered questions. Furthermore, our open development process has resulted in transparent development of methods and software, including public code review, a large fraction of bugs being submitted by members of the community, and quick incorporation of bug fixes.

I have no special insight about the circumstances here, but obviously the situation is far from ideal. You can still use the packages, though installing them takes a little more effort. Who knows how long that will be the case, though. In a complex web of dependencies like the R package ecosystem, an unmaintained package probably won’t last.

GenABEL can probably be replaced by something like GEMMA. It does mixed models for GWAS, and while it isn’t an R package, it’s probably about as convenient. However, I don’t know of a good alternative to RepeatABEL.

These are the steps to install GenABEL and RepeatABEL from archives:

  1. We go to the CRAN archive and get the tarballs for GenABEL, GenABEL.data which it needs, and RepeatABEL.
    curl -O https://cran.r-project.org/src/contrib/Archive/GenABEL/GenABEL_1.8-0.tar.gz
    curl -O https://cran.r-project.org/src/contrib/Archive/GenABEL.data/GenABEL.data_1.0.0.tar.gz
    curl -O https://cran.r-project.org/src/contrib/Archive/RepeatABEL/RepeatABEL_1.1.tar.gz
    

    We don’t need to unpack them.

  2. Install GenABEL.data and GenABEL from a local source. Inside R, we can use install.packages, using the files we’ve just downloaded instead of the online repository.
    install.packages(c("GenABEL.data_1.0.0.tar.gz", "GenABEL_1.8-0.tar.gz"), repos = NULL)
    
  3. To install RepeatABEL, we first need hglm, which we can get from CRAN. After that has finished, we install RepeatABEL, again from local source:
    install.packages("hglm")
    install.packages("RepeatABEL_1.1.tar.gz", repos = NULL)
    

This worked on R version 3.6.1 running on Ubuntu 16.04, and also on Mac OS X.
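An alternative, which I have not tested as thoroughly, is to let the remotes package fetch the archived versions directly. As far as I know, install_version also looks in the CRAN archive, but treat the following as a sketch rather than a tested recipe:

## Sketch: install the archived versions with remotes, in dependency order.
## This assumes that install_version can find packages that only exist in
## the CRAN archive.
install.packages(c("remotes", "hglm"))

remotes::install_version("GenABEL.data", version = "1.0.0")
remotes::install_version("GenABEL", version = "1.8-0")
remotes::install_version("RepeatABEL", version = "1.1")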

Literature

Karssen, Lennart C., Cornelia M. van Duijn, and Yurii S. Aulchenko. ”The GenABEL Project for statistical genomics.” F1000Research 5 (2016).

Using R: From gather to pivot

Since version 1.0.0, released in September, the tidyr package has a new replacement for the gather/spread pair of functions, called pivot_longer/pivot_wider. (See the blog post about the release. It can do a lot of cool things.) Just what we needed, another pair of names for melt/cast, right?

Yes, I feel like this might just be what we need!

My journey started with reshape2, and after a bit of confusion, I internalised the logic of melt/cast. Look at this beauty:

library(reshape2)
fake_data <- data.frame(id = 1:20,
                        variable1 = runif(20, 0, 1),
                        variable2 = rnorm(20))
melted <- melt(fake_data, id.vars = "id")

This turns a data frame that looks like this …

  id  variable1   variable2
1  1 0.10287737 -0.21740708
2  2 0.04219212  1.36050438
3  3 0.78119150  0.09808656
4  4 0.44304613  0.48306900
5  5 0.30720140 -0.45028374
6  6 0.42387957  1.16875579

… into a data frame that looks like this:

  id  variable      value
1  1 variable1 0.10287737
2  2 variable1 0.04219212
3  3 variable1 0.78119150
4  4 variable1 0.44304613
5  5 variable1 0.30720140
6  6 variable1 0.42387957

This is extremely useful. Among other things it comes up all the time when using ggplot2.
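As a quick illustration of why, here is a minimal sketch using the melted fake data from above; with the data in long format, the grouping and the values each live in one column, which is exactly what ggplot2 wants:

library(ggplot2)

## One boxplot per variable, straight from the long format
ggplot(melted, aes(x = variable, y = value)) +
    geom_boxplot()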

Then, as I detailed in a post two years ago, I switched to tidyr as that became the replacement package. ”Gather” and ”spread” made no sense to me as descriptions of operations on a data frame. To be fair, ”melt” and ”cast” felt equally arbitrary, but by that time I was used to them. Getting the logic of the arguments, their order, and what needed quotation marks and what didn’t took some staring at examples and a fair bit of trial and error.

Here are some examples. If you’re not used to these functions, just skip ahead, because you will want to learn the pivot functions instead!

library(tidyr)
melted <- gather(fake_data, variable, value, 2:3)
 
## Column names instead of indices
melted <- gather(fake_data, variable, value, variable1, variable2)
 
## Excluding instead of including
melted <- gather(fake_data, variable, value, -1)
 
## Excluding using column name
melted <- gather(fake_data, variable, value, -id)

Enter the pivot functions. Now, I have never used pivot tables in any spreadsheet software, and in fact, the best way to explain them to me was to tell me that they were like melt/cast (and summarise) … But pivot_longer/pivot_wider are friendlier on first use than gather/spread. The naming of both the functions themselves and their arguments feels like a definite improvement.

long <- pivot_longer(fake_data, 2:3,
                     names_to = "variable",
                     values_to = "value")
# A tibble: 40 x 3
      id variable    value
   <int> <chr>       <dbl>
 1     1 variable1  0.103 
 2     1 variable2 -0.217 
 3     2 variable1  0.0422
 4     2 variable2  1.36  
 5     3 variable1  0.781 
 6     3 variable2  0.0981
 7     4 variable1  0.443 
 8     4 variable2  0.483 
 9     5 variable1  0.307 
10     5 variable2 -0.450 
# … with 30 more rows

We tell it into what column we want the names to go, and into what column we want the values to go. The function is named after a verb that is associated with moving things about in tables all the way to matrix algebra, followed by an adjective (in my opinion the most descriptive, out of the alternatives) that describes the layout of the data that we want.
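As with gather, we don’t have to use column indices. As far as I understand, column names and exclusion also work here; a couple of calls that should be equivalent to the one above, assuming tidyr 1.0.0:

## Column names instead of indices
long <- pivot_longer(fake_data, c(variable1, variable2),
                     names_to = "variable",
                     values_to = "value")

## Excluding the id column instead
long <- pivot_longer(fake_data, -id,
                     names_to = "variable",
                     values_to = "value")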

Or, to switch us back again:

wide <- pivot_wider(long,
                    names_from = "variable",
                    values_from = "value")
# A tibble: 20 x 3
      id variable1 variable2
   <int>     <dbl>     <dbl>
 1     1    0.103    -0.217 
 2     2    0.0422    1.36  
 3     3    0.781     0.0981
 4     4    0.443     0.483 
 5     5    0.307    -0.450 
 6     6    0.424     1.17  

Here, instead, we tell it where we want the new column names taken from and where we want the new values taken from. None of this is self-explanatory, by any means, but they are thoughtful choices that make a lot of sense.
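As a small sanity check, the two operations should be inverses of each other, apart from the result being a tibble. I would expect something like this to come back TRUE, or close to it:

## Round trip: going long and then wide again should give back the original data
all.equal(as.data.frame(wide), fake_data)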

We’ll see what I think after trying to explain them to beginners a few times, and after I’ve fought warning messages involving list columns for some time, but so far: well done, tidyr developers!

#TidyTuesday: horror films, squirrels and commuters

Tidy Tuesday is a fun weekly activity where a lot of R enthusiasts make different visualisations, and possibly models, of the same dataset. You can read more about it at their Github page. I participated for three weeks, and here is a recap. I will show excerpts of the code, but you can read the whole thing by clicking through to Github.

2019-10-22 Horror films

Data: https://github.com/rfordatascience/tidytuesday/tree/master/data/2019/2019-10-22

My code: https://github.com/mrtnj/rstuff/blob/master/tidytuesday/horror_movies.R

In time for Halloween, we got a dataset with horror film data from IMDB. (Yes, I will be mixing the terms ”film” and ”movie” wildly.)

The first week, I started by making a pretty boring plot, the way I’d normally plot things (white background, small multiples, you know the drill). I wanted to look at the distribution over the year, so I plotted what month films are released in, and the distribution of review scores and budgets each month. After thinking about it for a while, I thought a logarithmic scale would make sense for budgets, which span a huge range. Also, after realising that the budget column actually didn’t contain dollars, but a mix of currencies, I decided not to try to convert, but to use only the US dollar budgets.

I don’t often run into dates, so using the date functions from readr and lubridate was new to me, as was the built-in vector month.abb:

library(dplyr)
library(egg)
library(ggplot2)
library(ggimage)
library(lubridate)
library(readr)
library(stringr)

movies <- read_csv("horror_movies.csv")

## Parse dates

movies$release_parsed  <- parse_date(movies$release_date,
                                     format = "%d-%b-%y",
                                     locale = locale("en")) 

movies$release_year <- ifelse(is.na(movies$release_parsed),
                              movies$release_date,
                              year(movies$release_parsed))

movies$release_month  <- month.abb[month(movies$release_parsed)]

Here, we parse the release date, and extract the release year, treating films that only have a release year separately.

I also put in means with confidence intervals, like so, and a line for the mean review rating:

model  <- lm(review_rating ~ release_month, movies)

fit  <- data.frame(release_month = month.abb,
                   predict(model,
                           newdata = data.frame(release_month = month.abb),
                                                interval = "confidence"),
                   stringsAsFactors = FALSE)

grand_mean_rating  <- mean(movies$review_rating,
                           na.rm = TRUE)

As an example of the plotting code, here is the middle panel for ratings. As usual with ggplot2, we layer geometries on top of each other (here: violin plots, points with range bars, and a horizontal line), followed by a lot of formatting.

plot_rating <- ggplot() +
    geom_violin(aes(x = release_month,
                    y = review_rating),
                fill = "grey",
                colour = NA,
                data = movies) +
    scale_x_discrete(limits = month.abb) +
    geom_pointrange(aes(x = release_month,
                        y = fit,
                        ymax = upr,
                        ymin = lwr),
                    data = fit) +
    geom_hline(yintercept = grand_mean_rating,
               linetype = 2,
               colour = "red") +
    ylim(0, 10) +
    theme_bw(base_size = 12) +
    theme(panel.grid = element_blank()) +
    xlab("") +
    ylab("Review rating")

There is similar code for the other two panels. Finally, I used ggarrange from the egg package to put everything together. In summary, most horror films are released in October, probably around Halloween. The review ratings of films released in this horror season are also a tiny bit higher than during the rest of the year, but there is not much of a difference in the budgets.
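The combining step itself is short. Here is a sketch of how the ggarrange call might look; plot_count and plot_budget are placeholder names for the other two panels (only plot_rating is shown above):

library(egg)

## Stack the three panels on top of each other; plot_count and plot_budget
## are placeholder names for the panels not shown in this post
plots_combined <- ggarrange(plot_count,
                            plot_rating,
                            plot_budget,
                            ncol = 1)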

After that, and after seeing some of the fun horror-themed graphs other people made, I decided to make something more colourful. Here is a plot on the same theme, showing each day and year separately, an appropriately horrendous colour scheme, and a pumpkin icon to indicate the date of Halloween. I like this plot better because it shows more of the data. It shows the increase at Halloween. We also see some spikes at other dates, like 1 January of some years. It also shows how the dataset ends at Halloween 2017.

The code for this plot is mostly a lot of theme formatting. The ggplot2 theme function takes a lot of arguments I’ve never used before.

movies$yday  <- yday(movies$release_parsed)

daycount <- summarise(group_by(movies, yday, release_year), n = n())

First, we turn dates into days of the year, and count the number of film releases.

halloween  <-  yday("2019-10-31")

pumpkin_data  <- data.frame(x = halloween,
                            y = -1,
                            image = "pumpkin.png",
                            stringsAsFactors = FALSE)

Then, we set up the date of Halloween and a data frame for the pumpkin icon. We’re going to use geom_image from the ggimage package to add this icon to each subplot.

breaks  <- yday(paste("2019-", 1:12, "-01", sep = ""))

plot_year <- ggplot() +
    geom_point(aes(x = yday,
                   y = n),
               colour = "green",
               data = na.exclude(daycount)) +
    geom_image(aes(x = x,
                   y = y,
                   image = image),
               data = pumpkin_data) +
    facet_wrap(~ release_year,
               ncol = 2) +
    scale_x_continuous(breaks = breaks,
                       labels = month.abb) +
    ylim(-3, NA) +
    labs(caption = "Pumpkin icon by Good Ware from www.flatiron.com.") +
    theme(panel.grid = element_blank(),
          strip.background = element_blank(),
          text = element_text(family = "mono",
                              colour = "grey",
                              size = 16),
          axis.text = element_text(family = "mono",
                                   colour = "green",
                                   size = 14),
          axis.ticks = element_line(colour = "green"),
          strip.text = element_text(family = "mono",
                                    colour = "grey",
                                    size = 16),
          plot.background = element_rect(fill = "black"),
          panel.background = element_rect(fill = "black")) +
    xlab("") +
    ylab("Horror films released on this day") +
    ggtitle("When horror films are released")

A lot of other people made graphs that highlight the increase in horror film releases around Halloween in different ways. Here are some that I like:

And, looking deeper, there is a pattern within months too:

Finally, I also like this plot, that makes a case for a U-shaped relationship between budget and rating:

And for contrast, another that makes a different case with the same data:

This seems to be a recurrent theme when it comes to interpretation and quantitative analysis in the Tidy Tuesday datasets. People make different modelling choices, or visualisation choices (which are modelling choices), about what to lump together, what to separate into bins, how to transform the data, and how to show uncertainty. In some cases, as with the pattern of film releases around Halloween, they all find similar results. In other cases, they don’t.

2019-10-28 NYC Squirrel Census

Data: https://github.com/rfordatascience/tidytuesday/tree/master/data/2019/2019-10-29

My code: https://github.com/mrtnj/rstuff/blob/master/tidytuesday/nyc_squirrels.R

This week, the data was about the locations and activities of squirrels in New York’s Central Park at certain times. I had this vision of an animated map of squirrel locations. I ended up with an animation, but no map. The colour of the squirrel icon shows the main fur colour of the squirrels (grey, black, cinnamon), and the size distinguishes adults from juveniles.

I had never used gganimate before (only the animation package, as in this post about the Game of Life), but I had seen Thomas Lin Pedersen tweet about it, and I wanted to try.

library(dplyr)
library(gganimate)
library(ggimage)
library(ggplot2)
library(readr)

squirrels <- read_csv("nyc_squirrels.csv")

## Parse the date
squirrels$date_parsed  <- parse_date(as.character(squirrels$date), format = "%m%d%Y")

## Give each observation a unique ID (to use as group in the
## animation, so as to not have points turn into one another but fade
## instead).
squirrels$key  <- 1:nrow(squirrels)

## Associate the different squirrel colours with the filenames of
## icons in different colours (manually filled with GIMP).
squirrels$image  <- "squirrel.png"
squirrels$image[squirrels$primary_fur_color == "Cinnamon"]  <- "squirrel_cinnamon.png"
squirrels$image[squirrels$primary_fur_color == "Gray"]  <- "squirrel_grey.png"
squirrels$image[is.na(squirrels$primary_fur_color)]  <- NA

Again, we need to parse the date. We already have latitude and longitude. We need a unique identifier for each observation, to tell gganimate that we want each squirrel to be in its own group. Then, we associate squirrel colours with three different files with a squirrel icon in different colours.

First, we make two image scatterplot layers, setting the sizes of adults and juveniles manually. The colour is dealt with by mapping the image column containing the file names to the image aesthetic. We add some formatting, and then the transition_states layer, which is where the graph turns from still and boring to magical moving pictures. This will animate a series of discrete ”states”, which here consist of the date pasted together with the shift (AM or PM squirrel observation shift). The special ”{closest_state}” variable in the title string puts the name of the current state in the plot title.

plot_colour <- ggplot() +
    geom_image(aes(y = long, x = lat, image = image, group = key),
               size = 0.04,
               data = filter(squirrels, age == "Adult")) +
    geom_image(aes(y = long, x = lat, image = image, group = key),
               size = 0.03,
               data = filter(squirrels, age == "Juvenile")) +
    theme_bw(base_size = 16) +
    theme(panel.grid = element_blank()) +
    xlab("Latitude") +
    ylab("Longitude") +
    labs(title = "{closest_state}",
         caption = "Data from NYC Squirrel Census. Squirrel icon made by Freepik from www.flatiron.com.") +
    transition_states(paste(date_parsed, shift),
                      state_length = 2,
                      transition_length = 1)

## Render it and write to file
animate(plot_colour,
        fps = 10,
        nframes = 400,
        end_pause = 20,
        rewind = FALSE,
        width = 1000,
        height = 1000)

I was faffing around unsuccessfully with different map packages to try to find something of Central Park. It seems ggmap is the way to go. Several other participants made nice maps:

However, I think this was my favourite:

https://github.com/ryantimpe/TidyTuesday/blob/master/2019w44/2019w44.R

The original Squirrel Census Report seems to be an amazing object, too, with a beautiful map.
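If I come back to this, a ggmap version might start out something like the sketch below. I have not run it; the bounding box coordinates for Central Park are approximate, and the choice of Stamen tiles is an assumption on my part:

library(ggmap)
library(ggplot2)

## Approximate bounding box for Central Park (left, bottom, right, top);
## these coordinates are rough guesses
central_park <- c(left = -73.982, bottom = 40.764,
                  right = -73.949, top = 40.801)

park_map <- get_stamenmap(central_park, zoom = 14, maptype = "terrain")

## Overlay the squirrel observations on the map tiles
ggmap(park_map) +
    geom_point(aes(x = long, y = lat),
               colour = "brown",
               alpha = 0.5,
               data = squirrels)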

2019-11-05 Biking and walking to work in the US (and Sweden)

Data: https://github.com/rfordatascience/tidytuesday/tree/master/data/2019/2019-11-05

My code: https://github.com/mrtnj/rstuff/blob/master/tidytuesday/commute.R

This week I felt I had to make a map. The end result doesn’t look like much, but it took a while. Here are the average percentages of commuters who walk and bike to work in different US states 2008-2012 with data from the American Community Survey:

library(dplyr)
library(ggplot2)
library(readr)
library(usmap)

commute <- read_csv("commute.csv")

## Map data from the usmap package
state_map  <- us_map(regions = "state")

## There are some incompletely labelled states; fix them
missing  <- setdiff(commute$state, state_map$full)

commute$state_modified <- commute$state
commute$state_modified[commute$state == "Ca"] <- "California"
commute$state_modified[commute$state == "Massachusett"]  <- "Massachusetts"

We get map coordinates for the US states from the usmap package (because the one in maps doesn’t have Alaska and Hawaii).

Then we fix some mislabelling in the data.

## Get the average per state
state_average  <- summarise(group_by(commute, state_modified, mode),
                            average = sum(percent * n)/sum(n))

## Combine averages and coordinates
combined  <- inner_join(state_average,
                        state_map,
                        by = c("state_modified" = "full"))

We take a weighted average of the percentages per state and join the state averages with the state map coordinates. The map I posted on Twitter didn’t weight the average, but I think the weighted version is a bit better. There is still the issue that states have different populations and different distributions of large and small cities, but that’s the nature of things. In summary, there is not much biking going on, but somewhat more walking to work.
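For the record, here is a sketch of computing both the weighted and the unweighted average side by side, to see how much the weighting actually matters:

## Weighted (by number of commuters) and unweighted state averages
state_average_check <- summarise(group_by(commute, state_modified, mode),
                                 weighted = sum(percent * n) / sum(n),
                                 unweighted = mean(percent))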

plot_map  <- ggplot() +
    geom_polygon(aes(x = x, y = y, fill = average, group = group),
                 colour = "black",
                 data = combined) +
    facet_wrap(~ mode) +
    scale_fill_continuous(low = "white",
                          high = "blue",
                          name = "Percent commuters") +
    theme_bw(base_size = 16) +
    theme(panel.grid = element_blank(),
          strip.background = element_blank(),
          axis.text = element_blank(),
          axis.ticks = element_blank(),
          legend.position = "bottom") +
    xlab("") +
    ylab("") +
    labs(caption = "Cycling and walking to work 2008-2012 in the American Community Survey.")

The US seems to live up to its reputation as a motorised country. But I have no feeling for the scale of the data. For comparison, here is a map of Sweden with some not too recent data (2005-2006, from this VTI report). The map is made with the swemap package.

Exploratory analysis of a banana

This post is just me amusing myself by exploring a tiny dataset I have lying around. The dataset and the code are on GitHub.

In 2014 (I think), I was teaching the introductory cell biology labs (pictures in the linked post) in Linköping. We were doing a series of simple preparations to look at cells and organelles: a cheek swab gives you a view of dead mammalian cells with bacteria on them; Elodea gives you a nice chloroplast view; a red bell pepper gives you chromoplasts; and a banana stained with iodine gives you amyloplasts. Giving the same lab six times in a row, it became apparent how the number of stained amyloplasts decreased as the banana ripened.

I took one banana, sliced it into five pieces (named A-E), and left it out to ripen. Then I stained (with Lugol’s iodine solution) and counted the number of amyloplasts per cell in a few cells (scraped off with a toothpick) from the end of each piece on days 1, 5 and 9.

First, here is an overview of the data. On average, we go from 17 stained amyloplasts per cell on day 1, to 5 on day 5 and 2 on day 9.

If we break the plot up by slices, we see a decline in every slice, and variability between them. Because I only sampled each slice once per day, there is no telling whether this is variation between parts of the banana or between samples (say, hypothetically, because I might have stuck the toothpick in more or less deeply, or because the ripeness varies from the middle to the peel).

How can we model this? Let’s first fit a linear model where the number of amyloplasts declines at a constant rate per day, allowing for different starting values and different declines for each slice. We can anticipate that a Gaussian linear model will have some problems in this situation.

We fit a linear model and pull out the fitted values for each day–slice combination:

model_lm  <- lm(amyloplasts ~ day * slice,
                data = banana)

levels <- expand.grid(slice = unique(banana$slice),
                      day = unique(banana$day),
                      stringsAsFactors = FALSE)

pred_lm  <- cbind(levels,
                  predict(model_lm,
                          newdata = levels,
                          interval = "confidence"))

Then, to investigate the model’s behaviour, we can simulate data from the model, allowing for uncertainty in the fitted parameters, with the sim function from the arm package.

We make a function to simulate data from the linear model given a set of parameters, then simulate parameters and feed the first parameter combination to the function to get ourselves a simulated dataset.

y_rep_lm  <- function(coef_lm, sigma, banana) {
    slice_coef  <- c(0, coef_lm[3:6])
    names(slice_coef)  <- c("A", "B", "C", "D", "E")

    slice_by_day_coef  <- c(0, coef_lm[7:10])
    names(slice_by_day_coef)  <- c("A", "B", "C", "D", "E")   

    banana$sim_amyloplasts  <- 
        coef_lm[1] +
        slice_coef[banana$slice] +
        banana$day * (coef_lm[2] + slice_by_day_coef[banana$slice]) +
        rnorm(nrow(banana), 0, sigma)
    banana
}

library(arm)

sim_lm  <- sim(model_lm)

sim_banana  <- y_rep_lm(sim_lm@coef[1,], sim_lm@sigma[1], banana)

The result looks like this (black dots) compared with the real data (grey dots).

The linear model doesn’t know that the number of amyloplasts can’t go below zero, so it happily generates absurd negative values. While not apparent from the plots, the linear model also doesn’t know that amyloplasts counts are restricted to be whole numbers. Let’s fit a generalized linear model with a Poisson distribution, which should be more suited to this kind of discrete data. The log link function will also turn the linear decrease into an exponential decline, which seems appropriate for the decline in amyloplasts.

model_glm <- glm(amyloplasts ~ day * slice,
                 data = banana,
                 family = poisson(link = log))

pred_glm <- predict(model_glm,
                    newdata = levels,
                    se.fit = TRUE)

results_glm <- data.frame(levels,
                          average = pred_glm$fit,
                          se = pred_glm$se.fit,
                          stringsAsFactors = FALSE)
  
y_rep_glm  <- function(coef_glm, banana) {
    slice_coef  <- c(0, coef_glm[3:6])
    names(slice_coef)  <- c("A", "B", "C", "D", "E")

    slice_by_day_coef  <- c(0, coef_glm[7:10])
    names(slice_by_day_coef)  <- c("A", "B", "C", "D", "E")
    

    latent  <- exp(coef_glm[1] +
        slice_coef[banana$slice] +
        banana$day * (coef_glm[2] + slice_by_day_coef[banana$slice])) 

    banana$sim_amyloplasts  <- rpois(n = nrow(banana),
                                     lambda = latent)
    banana
}

sim_glm  <- sim(model_glm, n.sims = 1000)

sim_banana_glm  <- y_rep_glm(sim_glm@coef[2,], banana)

This code is the same deal as above, with small modifications: glm instead of lm, with some differences in the interface. Then a function to simulate data from a Poisson model with a logarithmic link, which we apply to one set of parameter values.

There are no impossible negative values anymore. However, there seem to be many more zeros in the real data than in the simulated data, and consequently, as the number of amyloplasts grows small, we overestimate how many there should be.

Another possibility among the standard arsenal of models is a generalised linear model with a negative binomial distribution. As opposed to the Poisson, this allows greater spread among the values. We can fit a negative binomial model with Stan.

library(rstan)

model_nb  <- stan(file = "banana.stan",
                  data = list(n = nrow(banana),
                              n_slices = length(unique(banana$slice)),
                              n_days = length(unique(banana$day)),
                              amyloplasts = banana$amyloplasts,
                              day = banana$day - 1,
                              slice = as.numeric(factor(banana$slice)),
                              prior_phi_scale = 1))

y_rep  <- rstan::extract(model_nb, pars = "y_rep")[[1]]

Here is the Stan code in banana.stan:

data {
    int n;
    int n_slices;
    int <lower = 0> amyloplasts[n];
    real <lower = 0> day[n];
    int <lower = 1, upper = n_slices> slice[n];
    real prior_phi_scale;
}
parameters {
    real initial_amyloplasts[n_slices];
    real decline[n_slices];
    real < lower = 0> phi_rec;
}
model {
    phi_rec ~ normal(0, prior_phi_scale);
    for (i in 1:n) {
        amyloplasts[i] ~ neg_binomial_2_log(initial_amyloplasts[slice[i]] +
		                            day[i] * decline[slice[i]],
					    (1/phi_rec)^2);
    }
}
generated quantities {
    vector[n] y_rep;
    for (i in 1:n) {
        y_rep[i] = neg_binomial_2_rng(exp(initial_amyloplasts[slice[i]] +
		                          day[i] * decline[slice[i]]),
				      (1/phi_rec)^2);
    }
}

This model is similar to the Poisson model, except that the negative binomial allows an overdispersion parameter, where a small value corresponds to a large variance. Therefore, we put the prior on the reciprocal of the square root of the parameter (phi_rec in the code); a tighter prior around zero then corresponds to less overdispersion.
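To make the parameterisation concrete: in Stan’s neg_binomial_2, the variance is the mean plus the mean squared divided by the overdispersion parameter. Here is a little sketch of what different values of phi_rec would mean for the variance, at an arbitrary example mean of 10:

## How the reciprocal square root parameter phi_rec relates to the
## negative binomial variance (mu + mu^2 / phi), for an example mean of 10
mu <- 10
phi_rec <- c(0.1, 1, 10)
phi <- (1 / phi_rec)^2
data.frame(phi_rec, phi, variance = mu + mu^2 / phi)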

Conveniently, Stan can also make the simulated replicated data for us in the generated quantities block.

What does the simulated data look like?

Here we have a model that allows for more spread, but in the process, generates some extreme data, with hundreds of amyloplasts per cell in some slices. We can try to be procrustean with the prior and constrain the overdispersion to smaller values instead:

model_nb2 <- stan(file = "banana.stan",
                  data = list(n = nrow(banana),
                              n_slices = length(unique(banana$slice)),
                              n_days = length(unique(banana$day)),
                              amyloplasts = banana$amyloplasts,
                              day = banana$day - 1,
                              slice = as.numeric(factor(banana$slice)),
                              prior_phi_scale = 0.1))

y_rep2  <- rstan::extract(model_nb2, pars = "y_rep")[[1]]

That looks a little better. Now, we’ve only looked at single simulated datasets, but we can get a better picture by looking at replicate simulations. We need some test statistics, so let us count how many zeroes there are in each dataset, what the maximum value is, and the sample variance, and then do some visual posterior predictive checks.

 
check_glm  <- data.frame(n_zeros = numeric(1000),
                         max_value = numeric(1000),
                         variance = numeric(1000),
                         model = "Poisson",
                         stringsAsFactors = FALSE)

check_nb  <- data.frame(n_zeros = numeric(1000),
                        max_value = numeric(1000),
                        variance = numeric(1000),
                        model = "Negative binomial",
                        stringsAsFactors = FALSE)

check_nb2  <- data.frame(n_zeros = numeric(1000),
                         max_value = numeric(1000),
                         variance = numeric(1000),
                         model = "Negative binomial 2",
                         stringsAsFactors = FALSE)


for (sim_ix in 1:1000) {
    y_rep_data  <- y_rep_glm(sim_glm@coef[sim_ix,], banana)
    check_glm$n_zeros[sim_ix]  <- sum(y_rep_data$sim_amyloplasts == 0)
    check_glm$max_value[sim_ix] <- max(y_rep_data$sim_amyloplasts)
    check_glm$variance[sim_ix] <- var(y_rep_data$sim_amyloplasts)

    check_nb$n_zeros[sim_ix]  <- sum(y_rep[sim_ix,] == 0)
    check_nb$max_value[sim_ix]  <- max(y_rep[sim_ix,])
    check_nb$variance[sim_ix]  <- var(y_rep[sim_ix,])

    check_nb2$n_zeros[sim_ix]  <- sum(y_rep2[sim_ix,] == 0)
    check_nb2$max_value[sim_ix]  <- max(y_rep2[sim_ix,])
    check_nb2$variance[sim_ix]  <- var(y_rep2[sim_ix,])
}

check  <- rbind(check_glm,
                check_nb,
                check_nb2)

library(tidyr)

melted_check  <- gather(check, "variable", "value", -model)

check_data  <- data.frame(n_zeros = sum(banana$amyloplasts == 0),
                          max_value = max(banana$amyloplasts),
                          variance = var(banana$amyloplasts))

Here is the resulting distribution of these three discrepancy statistics in 1000 simulated datasets for the three models (generalised linear model with Poisson distribution and the two negative binomial models). The black line is the value for real data.

When viewed like this, it becomes apparent that none of the models fit all that well. The Poisson model struggles with the variance and the number of zeros. The negative binomial models get closer to the number of zeros in the real data, though they still have too few, while at the same time having much too high maximum values and variance.

Finally, let’s look at the fitted means and intervals from all the models. We can use the predict function for the linear model and Poisson model, and for the negative binomial models, we can write our own:

pred_stan <- function(model, newdata) {
    samples <- rstan::extract(model)
    initial_amyloplasts <- data.frame(samples$initial_amyloplasts)
    decline  <- data.frame(samples$decline)
    names(initial_amyloplasts) <- names(decline) <- c("A", "B", "C", "D", "E")

    ## Get posterior for levels
    pred  <- matrix(0,
                    ncol = nrow(newdata),
                    nrow = nrow(initial_amyloplasts))

    for (obs in 1:ncol(pred)) {
        pred[,obs]  <- initial_amyloplasts[,newdata$slice[obs]] +
            (newdata$day[obs] - 1) * decline[,newdata$slice[obs]]
    }

    ## Get mean and interval
    newdata$fit  <- exp(colMeans(pred))
    intervals <- lapply(data.frame(pred), quantile, probs = c(0.025, 0.975))
    newdata$lwr  <- exp(unlist(lapply(intervals, "[", 1)))
    newdata$upr  <- exp(unlist(lapply(intervals, "[", 2)))

    newdata
}

pred_nb <- pred_stan(model_nb, levels)
pred_nb2 <- pred_stan(model_nb2, levels)

In summary, the three generalised linear models with a log link function pretty much agree about the decline of amyloplasts during the later days, which looks more appropriate than a linear decline. They disagree about the uncertainty in the numbers on the first day, which is when there are a lot of them. Perhaps coincidentally, this must also be where the quality of my counts is the lowest, because it is hard to count amyloplasts that lie on top of each other.

X-related genes

It is hard to interpret gene lists. But before we even get into the statistical properties of annotation term enrichment, or whether network models are appropriate, or anything like that, we have the simpler problem of how to talk, colloquially, about genes connected with a biological process. In particular, there is a weak way to describe gene function that one ought to avoid.

What is, for example, an immune-related gene? Why, it’s a gene that is important to immune function, of course! Is beta-catenin an immune-related gene? Wnt signalling is certainly important to immune cell differentiation (Chae & Bothwell 2018), and beta-catenin is certainly important to Wnt signalling function.

Similarly, Paris is a city in France. Therefore, all cities in France are Paris-related.

The thing is, any indirect mechanism can be a mechanism of genuine genetic causation, and this one isn’t even very roundabout. I couldn’t find a known Mendelian disorder with a mechanism that fits the above story, but I don’t think it’s out of the question. At the same time, labelling everything Wnt-related ”immune-related” would be a little silly, because those genes also do all sorts of other things. If the omnigenic hypothesis of near-universal pleiotropy is correct, we should expect a lot of genetic causation to be like that: indirect, based on common pathways that do many kinds of different work in different parts of the organism.

That leaves ”X-related gene” as a vague notion that can contract or expand at will. From now on, I will think twice before using it.