# Using R: Animal model with hglm and Stan (with Cholesky trick)

A few weeks ago I posted about fitting the quantitative genetic animal model with MCMCglmm and R-INLA. Since then, I listened to a talk by Lars Rönnegård, one of the creators of the hglm package, and this paper was published in GSE about animal models in Stan.

# hglm

The hglm package fits hierarchical generalised linear models. That includes the animal model with pedigree or genomic relatedness. Hierarchical generalised linear models also allow you to model the dispersion of random effects, which lets you do tricks like variance QTL mapping (Rönnegård & Valdar 2011), breeding values for variances (Rönnegård et al. 2010) or genomic prediction models with predictors of marker variance (Mouresan, Selle & Rönnegård 2019). But let’s not get ahead of ourselves. How do we fit an animal model?

Here is the matrix formulation of the animal model that we skim through in every paper. It’s in this post because we will use the design matrix interface to hglm, which needs us to give it these matrices (this is not a paper, so we’re not legally obliged to include it):

$\mathbf{y} = \mu + \mathbf{X} \mathbf{b} + \mathbf{Z} \mathbf{a} + \mathbf{e}$

The terms are the trait value, intercept, fixed coefficients and their design matrix, genetic coefficients and their design matrix, and the residual. The design matrix Z will contain one row and column for each individual, with a 1 to indicate its position in the phenotype table and pedigree, and the rest zeros. If we sort our files, it’s an identity matrix.
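
As a toy sketch (the names are made up for illustration): for three individuals whose phenotype rows are sorted in pedigree order, Z is just the identity matrix, which we can also get as the incidence matrix of the id factor.

```r
## Toy example: three individuals, phenotype rows sorted
## in the same order as the pedigree
ids <- c("ind1", "ind2", "ind3")

## One row and column per individual: the identity matrix
Z0 <- diag(length(ids))

## The same matrix built as an incidence matrix of the id factor
Z_alt <- model.matrix(~ 0 + factor(ids, levels = ids))
stopifnot(all(Z_alt == Z0))
```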

The trick with the genetic coefficients is that they’re correlated, with a specific correlation structure that we know from the pedigree (or, in genomic models, from markers). It turns out (Lee, Nelder & Pawitan 2017, chapter 8) that you can change the Z matrix around so that it lets you fit the model with an identity covariance matrix, while still accounting for the correlations between relatives. You replace the random effects for relatedness with transformed random effects that capture the same structure. One way to do this is with Cholesky decomposition.

$\mathbf{Z_{fudged}} = \mathbf{Z_0} \mathbf{L}$

As an example of what the Cholesky decomposition does, here is a slice of the additive relationship matrix of 100 simulated individuals (the last generation of one replicate of these simulations) and the resulting matrix from the Cholesky decomposition.

Instead of fitting genetic effects with the covariance structure given by the relatedness,

$\mathbf{a} \sim N(0, \mathbf{A} \sigma)$

we can fit transformed effects that are independent:

$\mathbf{a_{fudged}} \sim N(0, \mathbf{I} \sigma)$
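
To see why the fudged version is the same model, substitute $\mathbf{a} = \mathbf{L} \mathbf{a_{fudged}}$ and use the defining property of the Cholesky factor, $\mathbf{L} \mathbf{L}^T = \mathbf{A}$:

$\mathrm{Var}(\mathbf{L} \mathbf{a_{fudged}}) = \mathbf{L} \, \mathbf{I} \sigma \, \mathbf{L}^T = \mathbf{A} \sigma$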

This lets us fit the animal model with hglm, by putting in a modified Z matrix.
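
A small sketch of the trick on a toy relationship matrix (the numbers are made up: two full sibs and one unrelated individual):

```r
## Toy additive relationship matrix
A_toy <- matrix(c(1.0, 0.5, 0.0,
                  0.5, 1.0, 0.0,
                  0.0, 0.0, 1.0), nrow = 3)

## Lower-triangular Cholesky factor: L %*% t(L) recovers A
L_toy <- t(chol(A_toy))
stopifnot(all.equal(L_toy %*% t(L_toy), A_toy))

## Independent effects premultiplied by L get covariance A,
## which is what the modified Z matrix does to the random effects
a_fudged <- rnorm(3)
a <- L_toy %*% a_fudged
```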

Assuming we have data frames with a pedigree and a phenotype (like, again, from these simulations):

library(AGHmatrix)
library(hglm)

A  <- Amatrix(ped)

Z0  <- diag(1000)
L <- t(chol(A))
Z  <- Z0 %*% L
X <- model.matrix(~1, pheno)

model <- hglm(y = pheno$pheno, X = X, Z = Z, conv = 1e-8)
est_h2 <- model$varRanef / (model$varRanef + model$varFix)



(I found the recommendation to decrease the convergence criterion from the default for animal models in a YouTube video by Xia Shen.)

# Stan

When we turn to Stan, we will meet the Cholesky trick again. Stan is software for Markov chain Monte Carlo, built to fit hierarchical linear models and related high-dimensional models more effectively than other sampling strategies (like Gibbs sampling). rstan is a helpful package for running Stan from within R.

Nishio & Arakawa (2019) recently published a Stan script to fit an animal model, comparing Stan to a Gibbs sampler (and a related MCMC sampler that they also didn’t publish the code for). If we look into their Stan model code, they also do a Cholesky decomposition to be able to use an identity matrix for the variance.

First, they decompose the additive relationship matrix that the program takes in:

transformed data{
  matrix[K, K] LA;
  LA = cholesky_decompose(A);
}


And then, they express the model like this:

vector[N] mu;
vector[K] a;
a_decompose ~ normal(0, 1);
a = sigma_G * (LA * a_decompose);
mu = X * b + Z * a;
Y ~ normal(mu, sigma_R);


We can add this line to the generated quantities block of the Stan program to get heritability estimates directly:

real h2;
h2 = sigma_U / (sigma_U + sigma_E);


Here, we’ve saved their model to a stan file, and now we can run it from R:

pheno$scaled_pheno <- as.vector(scale(pheno$pheno))

model_stan <- stan(file = "nishio_arakawa.stan",
                   data = list(Y = pheno$scaled_pheno,
                               X = X,
                               A = A,
                               Z = Z0,
                               J = 1,
                               K = 1000,
                               N = 1000))

est_h2_stan <- summary(model_stan, pars = "h2")$summary


Important note that I always forget: scale your traits before you run this model. If you don’t, the priors might be all wrong.

The last line pulls out the summary for the heritability parameter (that we added above). This gives us an estimate and an interval.

The paper also contains this entertaining passage about performance, which reads as if it was a response to a comment, actual or anticipated:

> R language is highly extensible and provides a myriad of statistical and graphical techniques. However, R language has poor computation time compared to Fortran, which is especially well suited to numeric computation and scientific computing. In the present study, we developed the programs for GS and HMC in R but did not examine computation time; instead, we focused on examining the performance of estimating genetic parameters and breeding values.

Yes, two of their samplers (Gibbs and HMC) were written in R, but the one they end up advocating (and the one used above) is written in Stan. Stan code gets translated into C++ and then compiled to machine code.

# Stan with brms

If rstan lets us run Stan code from R and examine the output, brms lets us write down models in relatively straightforward R syntax. It’s like the MCMCglmm of the Stan world. We can fit an animal model with brms too, by directly plugging in the relationship matrix:

model_brms <- brm(scaled_pheno ~ 1 + (1|animal),
                  data = pheno,
                  family = gaussian(),
                  cov_ranef = list(animal = A),
                  chains = 4,
                  cores = 1,
                  iter = 2000)


Then, we can pull out the posterior samples for the variance components, here expressed as standard deviations, compute the heritability, and then get the estimates (and interval, if we want):

posterior_brms <- posterior_samples(model_brms,
                                    pars = c("sd_animal", "sigma"))

h2_brms <- posterior_brms[,1]^2 /
  (posterior_brms[,1]^2 + posterior_brms[,2]^2)

est_h2_brms <- mean(h2_brms)
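
To also get an interval, we can take quantiles of the heritability samples. Here is a sketch with made-up draws standing in for the brms posterior (the numbers are not from a fitted model):

```r
## Stand-in posterior draws for the two standard deviations
set.seed(1)
sd_animal_draws <- abs(rnorm(1000, mean = 0.5, sd = 0.05))
sigma_draws <- abs(rnorm(1000, mean = 0.8, sd = 0.05))

## Heritability, sample by sample
h2_samples <- sd_animal_draws^2 /
  (sd_animal_draws^2 + sigma_draws^2)

## Point estimate and 95% credible interval
est_h2 <- mean(h2_samples)
interval_h2 <- quantile(h2_samples, probs = c(0.025, 0.975))
```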


(Code is on GitHub: both for the graphs above, and the models.)

# Using R: Animal model with simulated data

Last week’s post just happened to use MCMCglmm as an example of an R package that can get confused by tibble-style data frames. To make that example, I simulated some pedigree and trait data. Just for fun, let’s look at the simulation code, and use MCMCglmm and AnimalINLA to get heritability estimates.

First, here is some AlphaSimR code that creates a small random mating population, and collects trait and pedigree:

library(AlphaSimR)

## Founder population
FOUNDERPOP <- runMacs(nInd = 100,
                      nChr = 20,
                      inbred = FALSE,
                      species = "GENERIC")

## Simulation parameters
SIMPARAM <- SimParam$new(FOUNDERPOP)
SIMPARAM$addTraitA(nQtlPerChr = 100,
                   mean = 100,
                   var = 10)
SIMPARAM$setGender("yes_sys")
SIMPARAM$setVarE(h2 = 0.3)

## Random mating for 9 more generations
generations <- vector(mode = "list", length = 10)
generations[[1]] <- newPop(FOUNDERPOP,
                           simParam = SIMPARAM)

for (gen in 2:10) {

  generations[[gen]] <- randCross(generations[[gen - 1]],
                                  nCrosses = 10,
                                  nProgeny = 10,
                                  simParam = SIMPARAM)

}

## Put them all together
combined <- Reduce(c, generations)

## Extract phenotypes
pheno <- data.frame(animal = combined@id,
                    pheno = combined@pheno[, 1])

## Extract pedigree
ped <- data.frame(id = combined@id,
                  dam = combined@mother,
                  sire = combined@father)
ped$dam[ped$dam == 0] <- NA
ped$sire[ped$sire == 0] <- NA

## Write out the files
write.csv(pheno,
          file = "sim_pheno.csv",
          row.names = FALSE,
          quote = FALSE)

write.csv(ped,
          file = "sim_ped.csv",
          row.names = FALSE,
          quote = FALSE)


In turn, we:

1. Set up a founder population with AlphaSimR’s generic livestock-like population history, and 20 chromosomes.
2. Choose simulation parameters: we have an organism with separate sexes, a quantitative trait with an additive polygenic architecture, and we want an environmental variance to give us a heritability of 0.3.
3. Store away the founders as the first generation, then run a loop to give us nine additional generations of random mating.
4. Combine the resulting generations into one population.
5. Extract phenotypes and pedigree into their own data frames.
6. Optionally, save the latter data frames to files (for the last post).

Now that we have some data, we can fit a quantitative genetic pedigree model (”animal model”) to estimate genetic parameters. We’re going to try two methods to fit it: Markov Chain Monte Carlo and (the unfortunately named) Integrated Nested Laplace Approximation. MCMC explores the posterior distribution by sampling; I’m not sure where I heard it described as ”exploring a mountain by random teleportation”. INLA makes approximations to the posterior that can be integrated numerically; I guess it’s more like building a sculpture of the mountain.

First, a Gaussian animal model in MCMCglmm:

library(MCMCglmm)

## Gamma priors for variances
prior_gamma <- list(R = list(V = 1, nu = 1),
                    G = list(G1 = list(V = 1, nu = 1)))

## Fit the model
model_mcmc <- MCMCglmm(scaled ~ 1,
                       random = ~ animal,
                       family = "gaussian",
                       prior = prior_gamma,
                       pedigree = ped,
                       data = pheno,
                       nitt = 100000,
                       burnin = 10000,
                       thin = 10)

## Calculate heritability from variance components
h2_mcmc_object <- model_mcmc$VCV[, "animal"] /
  (model_mcmc$VCV[, "animal"] + model_mcmc$VCV[, "units"])

## Summarise results from that posterior
h2_mcmc <- data.frame(mean = mean(h2_mcmc_object),
                      lower = quantile(h2_mcmc_object, 0.025),
                      upper = quantile(h2_mcmc_object, 0.975),
                      method = "MCMC",
                      stringsAsFactors = FALSE)


And here is a similar animal model in AnimalINLA:

library(AnimalINLA)

## Format pedigree to AnimalINLA's tastes
ped_inla <- ped
ped_inla$id <- as.numeric(ped_inla$id)
ped_inla$dam <- as.numeric(ped_inla$dam)
ped_inla$dam[is.na(ped_inla$dam)] <- 0
ped_inla$sire <- as.numeric(ped_inla$sire)
ped_inla$sire[is.na(ped_inla$sire)] <- 0

## Turn into a relationship matrix
A_inv <- compute.Ainverse(ped_inla)

## Fit the model
model_inla <- animal.inla(response = scaled,
                          genetic = "animal",
                          Ainverse = A_inv,
                          type.data = "gaussian",
                          data = pheno,
                          verbose = TRUE)

## Pull out summaries from the model object
summary_inla <- summary(model_inla)

## Summarise results
h2_inla <- data.frame(mean = summary_inla$summary.hyperparam["Heritability", "mean"],
                      lower = summary_inla$summary.hyperparam["Heritability", "0.025quant"],
                      upper = summary_inla$summary.hyperparam["Heritability", "0.975quant"],
                      method = "INLA",
                      stringsAsFactors = FALSE)


If we wrap this all in a loop, we can see how the estimation methods do on replicate data (full script on GitHub). Here are estimates and intervals from ten replicates (black dots show the actual heritability in the first generation):

As you can see, the MCMC and INLA estimates agree pretty well and mostly hit the mark. In the one replicate dataset where they falter, they falter together.

# Using R: When weird errors occur in packages that used to work, check that you’re not feeding them a tibble

There are some things that are great about the tidyverse family of R packages and the style they encourage. There are also a few gotchas. Here’s a reminder to myself about this phenomenon: tidyverse-style data frames (”tibbles”) do not simplify to vectors upon extracting a single column with hard bracket indexing.

Because some packages rely on specific data.frame behaviours that tibbles don’t show, functions that work nicely with data frames, and normally have nice interpretable error messages, may mysteriously collapse in all kinds of ways when fed a tibble.
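
A minimal illustration of the difference, assuming the tibble package is installed (the data frame here is made up):

```r
library(tibble)

df <- data.frame(x = 1:3, y = 4:6)
tb <- tibble(x = 1:3, y = 4:6)

## A data.frame simplifies to a vector on single-column extraction ...
is.vector(df[, "x"])  # TRUE

## ... but a tibble stays a one-column tibble
is.vector(tb[, "x"])  # FALSE
inherits(tb[, "x"], "data.frame")  # TRUE
```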

Here’s an example with MCMCglmm. This is not to pick on MCMCglmm; it just happened to be one of the handful of packages where I’ve run into this issue. Here, we use readr, the tidyverse alternative to the read.table family of functions to read some simulated data. The base function is called read.csv, and the readr alternative is read_csv.

Reading in tabular data is a surprisingly hard problem: tables can be formatted in any variety of obnoxious ways, and the reading function also needs to be fast enough to deal with large files. Using readr certainly isn’t always painless, but it reduces the friction a lot compared to read.table. One of the improvements is that read_csv will return a data.frame with the class tbl_df, affectionately called a ”tibble”.

After reading the data, we centre and scale the trait, set up some priors and run an animal model. Unfortunately, MCMCglmm will choke on the tibble, and deliver a confusing error message.

library(MCMCglmm)

pheno$scaled <- scale(pheno$pheno)

prior_gamma <- list(R = list(V = 1, nu = 1),
                    G = list(G1 = list(V = 1, nu = 1)))

model <- MCMCglmm(scaled ~ 1,
                  random = ~ animal,
                  family = "gaussian",
                  prior = prior_gamma,
                  pedigree = ped,
                  data = pheno,
                  nitt = 100000,
                  burnin = 10000,
                  thin = 10)


Error in inverseA(pedigree = pedigree, scale = scale, nodes = nodes) :
individuals appearing as dams but not in pedigree
In if (attr(pedigree, "class") == "phylo") { :
the condition has length > 1 and only the first element will be used


In this pedigree, it is not the case that there are individuals appearing as dams but not listed. If we turn the data and pedigree into vanilla data frames instead, it will work:

ped <- as.data.frame(ped)
pheno <- as.data.frame(pheno)

model <- MCMCglmm(scaled ~ 1,
                  random = ~ animal,
                  family = "gaussian",
                  prior = prior_gamma,
                  pedigree = ped,
                  data = pheno,
                  nitt = 100000,
                  burnin = 10000,
                  thin = 10)


MCMC iteration = 0

MCMC iteration = 1000

MCMC iteration = 2000


# Journal club of one: ”Maternal and additive genetic effects contribute to variation in offspring traits in a lizard”

The posts this week have been about epigenetics. However, let’s step back from the molecular mechanisms and what not to look at the bigger picture. This recent paper by Noble, McFarlane, Keogh and Whiting (2014) looks at maternal effects and additive genetic effects on fitness-related traits in a lizard. Now we are in quantitative genetics territory where one uses pedigrees and phenotypes to look at the determinants of a trait while abstracting away the mechanistic details. Nowadays, quantitative genetics is also equipped with Bayesian animal models and the ability to do parentage assignment with molecular methods.

The authors measured size, body mass, and growth, as well as running speed and endurance. The fun part is that while only endurance had a substantial heritability (0.4), the other traits had maternal components in the 0.2-0.5 range. So for most of the traits there’s little heritability while a big chunk of the trait variance is explained by maternal effects.