# Using R: plyr to purrr, part 1

This is the second post about my journey towards writing more modern Tidyverse-style R code; here is the previous one. We will look at the common case of taking subsets of data out of a data frame, making some complex R object from them, and then extracting summaries from those objects.

I miss the plyr package. Especially ddply, ldply, and dlply, my favourite trio of R functions of all time. Yes, the contemporary Tidyverse package dplyr is fast and neat. And those plyr functions operating on arrays were maybe overkill; I seldom used them. But plyr was so smooth, so beautiful, and — after you’ve bashed your head against it for some time and it changed your mind — so intuitive. The package still exists, but it’s retired, and we shouldn’t keep writing R code like it’s 2009, should we?

I used to like to do something like this: take a data frame, push it through a function that returns some complex object, store those objects in a list, and then map various functions over that list to extract the parts I care about. What is the equivalent in the new idiom? To do the same thing but with the purrr package, of course! purrr replaces the list-centric parts of plyr, while dplyr covers data frame-centric summarisation, mutation and so on.

For this example, we will use the lm function on subsets of data and store the model objects. It’s the simple case that everyone reaches for to demonstrate these features, but it’s a bit dubious. If you find yourself fitting a lot of linear models to subsets, maybe there are other models you should think about. Especially here, when the fake data just happens to come from a multilevel model with varying intercepts … But in this case, let’s simulate a simple linear regression and look at the coefficients we get out.

```
set.seed(20210807)

library(tibble)

n_groups <- 10
group_sizes <- rpois(n_groups, 10)
n <- sum(group_sizes)

fake_data <- tibble(id = 1:n,
                    group = rep(1:n_groups,
                                times = group_sizes),
                    predictor = runif(n, 0, 1))

group_intercept <- rnorm(n_groups, 0, 1)

fake_data$response <- fake_data$predictor * 10 +
    group_intercept[fake_data$group] +
    rnorm(n)
```

And here is the plyr code: First, dlply takes us from a data frame, splitting it by group, to a list of linear models. Then, ldply takes us from the list of models to a data frame of coefficients. tidy is a function from the wonderful broom package which extracts the same information as you would get in the rather unwieldy object from summary(lm), but as a data frame.

```
library(plyr)
library(broom)

fit_model <- function(data) {
    lm(response ~ predictor, data)
}

models <- dlply(fake_data,
                "group",
                fit_model)
result <- ldply(models, tidy)
```

This is what the results might look like. Notice how ldply adds the split labels nicely into the group column, so we know which rows came from which subset.

```
   group        term   estimate std.error  statistic      p.value
1      1 (Intercept) -0.2519167 0.5757214 -0.4375670 6.732729e-01
2      1   predictor 10.6136902 1.0051490 10.5593207 5.645878e-06
3      2 (Intercept)  3.1528489 0.6365294  4.9531864 7.878498e-04
4      2   predictor  8.2075766 1.1458702  7.1627452 5.292586e-05
5      3 (Intercept) -0.8103777 0.6901212 -1.1742542 2.786901e-01
...
```

# split/map: The modern synthesis

If we pull out purrr, we can get the exact same table like so. The one difference is that we get a tibble (that is, a contemporary, more well-behaved data frame) out of it instead of a base R data.frame.

```
library(purrr)

models <- map(split(fake_data,
                    fake_data$group),
              fit_model)
result <- map_df(models,
                 tidy,
                 .id = "group")
```
```
# A tibble: 80 x 6
   group term        estimate std.error statistic  p.value
   <chr> <chr>          <dbl>     <dbl>     <dbl>    <dbl>
1 1     (Intercept)     1.67     0.773      2.16 6.32e- 2
2 1     predictor       8.67     1.36       6.39 2.12e- 4
3 2     (Intercept)     4.11     0.566      7.26 4.75e- 5
4 2     predictor       8.19     1.11       7.36 4.30e- 5
5 3     (Intercept)    -7.50     0.952     -7.89 9.99e- 5
6 3     predictor      11.5      1.75       6.60 3.03e- 4
7 4     (Intercept)   -19.8      0.540    -36.7  7.32e-13
8 4     predictor      11.5      0.896     12.8  5.90e- 8
9 5     (Intercept)   -12.4      1.03     -12.0  7.51e- 7
10 5     predictor       9.69     1.82       5.34 4.71e- 4
# … with 70 more rows
```

First, the base function split lets us break the data into subsets based on the values of a variable, which in this case is our group variable. The output of this function is a list of data frames, one for each group.

Second, we use map to apply a function to each element of that list. The function is the same modelling function that we used above, which shoves the data into lm. We now have our list of linear models.

Third, we apply the tidy function to each element of that list of models. Because we want the result to be one single data frame consolidating the output from each element, we use map_df, which will combine the results for us. (If we’d just used map again, we would get a list of data frames.) The .id argument tells map_df to add the group column that indicates which element of the list of models each row comes from. We want this to be able to identify the coefficients.

If we want to be fancy, we can express this with the Tidyverse-related pipe and dot notation:

```
library(magrittr)

result <- fake_data %>%
    split(.$group) %>%
    map(fit_model) %>%
    map_df(tidy, .id = "group")
```

# Nesting data into list columns

This (minus the pipes) is where I am at in most of my R code nowadays: split with split, apply with map and combine with map_dfr. This works well and looks neater than the lapply/Reduce solution discussed in part 0.

We can push it a step further, though. Why take the linear model out of the data frame, away from its group labels and any potential group-level covariates — isn’t that just inviting some kind of mix-up of the elements? With list columns, we can store the groups in a data frame, keeping the data subsets and any R objects we generate from them together. (See Wickham and Grolemund’s R for Data Science for a deeper case study of this idea.)

```
library(dplyr)
library(tidyr)

fake_data_nested <- nest(group_by(fake_data, group),
                         data = c(id, predictor, response))

fake_data_models <- mutate(fake_data_nested,
                           model = map(data,
                                       fit_model),
                           estimates = map(model,
                                           tidy))

result <- unnest(fake_data_models, estimates)
```

First, we use the nest function to create a data frame where each row is a group, and the ”data” column contains the data for that subset.

Then, we add two list columns to that data frame: our list of models, and then our list of data frames with the coefficients.

Finally, we extract the estimates into a new data frame with unnest. The result is the same data frame of coefficients and statistics, now also carrying along the data subsets and linear models.

The same code with pipes:

```
fake_data %>%
    group_by(group) %>%
    nest(data = c(id, predictor, response)) %>%
    mutate(model = map(data, fit_model),
           estimates = map(model, tidy)) %>%
    unnest(estimates) -> ## rightward assignment just for kicks
    result
```

I’m still a list column sceptic, but I have to concede that this is pretty elegant. It gets the job done, it keeps objects that belong together together so that there is no risk of messing up the order, and it is not that much more verbose. I especially like that we can run the models and extract the coefficients in the same mutate call.

# Coda: mixed model

Finally, I can’t resist comparing the separate linear models to a linear mixed model of all the data.

We use lme4 to fit a varying-intercept model: a model that puts the same coefficient on the slope between the response and predictor in each group, but allows the intercepts to vary between groups, assuming them to be drawn from the same normal distribution. We put the coefficients from the linear models fit to each group separately, and from the linear mixed model, together in the same data frame to be able to plot them.

```
library(ggplot2)
library(lme4)

model <- lmer(response ~ (1|group) + predictor,
              fake_data)

lm_coef <- pivot_wider(result,
                       names_from = term,
                       values_from = estimate,
                       id_cols = group)

lmm_coef <- cbind(group = levels(model@flist$group),
                  coef(model)$group)

model_coef <- rbind(transform(lm_coef, model = "lm"),
                    transform(lmm_coef, model = "lmm"))

colnames(model_coef)[2] <- "intercept"

ggplot() +
    geom_point(aes(x = predictor,
                   y = response,
                   colour = factor(group)),
               data = fake_data) +
    geom_abline(aes(slope = predictor,
                    intercept = intercept,
                    colour = factor(group),
                    linetype = model),
                data = model_coef) +
    theme_bw() +
    theme(panel.grid = element_blank())
```

As advertised, the linear mixed model has the same slope in every group, and intercepts pulled closer to the mean. Now, we know that this is a good model for these fake data because we created them, and in the real world, that is obviously not the case … The point is: if we are going to fit a more complex model of the whole data, we want to be able to explore alternatives and plot them together with the data. Having elegant ways to transform data frames and summarise models at our disposal makes that process smoother.

# Using R: 10 years with R

Yesterday, 29 February 2020, was the 20th anniversary of the release of R 1.0.0. Jozef Hajnala’s blog has a cute anniversary post with some trivia. I realised that it is also (not to the day, but to the year) my R anniversary.

I started using R in 2010, during my MSc project in Linköping. Daniel Nätt, who was a PhD student there at the time, was using it for gene expression and DNA methylation work. I think that was the reason he was pulled into R; he needed the Bioconductor packages for microarrays. He introduced me. Thanks, Daniel!

I think I must first have used it to do something with qPCR melting curves. I remember that I wrote some function to reshape/pivot data between long and wide format. It was probably an atrocity of nested loops and hard bracket indexing. Coming right from an undergraduate programme with courses using Ada and C++, even if we had also used Minitab for statistics and Matlab for engineering, I spoke R with a strong accent. At any rate, I was primed to think that doing my data analysis with code was a good idea, and jumped at the opportunity to learn a tool for it. Thanks, undergraduate programme!

I think the easiest thing to love about R is the package system. You can certainly end up in dependency hell with R and metaphorically shoot your own foot, especially on a shared high performance computing system. But I wouldn’t run into any of that until after several years. I was, and still am, impressed by how packages just worked, and could do almost anything. So, the Bioconductor packages were probably, indirectly, why I was introduced to R, and after that, my R story can be told in a series of packages. Thanks, CRAN!

The next package was R/qtl, which I relied on for my PhD. I had my own copy of the R/qtl book. For a period, I probably wrote this every day:

```
library(qtl)

cross <- read.cross(file = "F8_geno_trim.csv", format = "csv")
```

R/qtl is one of my favourite pieces of research software, relatively friendly and with lots of documentation. Thanks, R/qtl developers!

Of course it was Dom Wright, who was my PhD supervisor, who introduced me to R/qtl, and I think it was also he who introduced me to ggplot2. At least he used it, and at some point we were together trying to fix the formatting of a graph, probably with some ugly hack. I decided to use ggplot2 as much as possible, and as it is wont to, ggplot2 made me care about rearranging data, thus leading to reshape2 and plyr. ”The magic is not in plotting the data but in tidying and rearranging the data for plotting.” After a while, most everything I wrote used the ddply function in some way. Thank you, Hadley Wickham!

Then came the contemporary tidyverse. For the longest time, I was uneasy with tidyr, and I’m still not a regular purrr user, but one can’t avoid loving dplyr. How much? My talk at the Swedish Bioinformatics Workshop in 2016 had a slide expressing my love of the filter function. It did not receive the cheers that the function deserves. Maybe the audience were Python users. With new file reading functions, new data frames and functions to manipulate data frames, modern R has become smoother and friendlier. Thanks, tidyverse developers!

The history of R on this blog started in 2011, originally as a way to make notes for myself or, ”a fellow user who’s trying to google his or her way to a solution”. This turned into a series of things to help teach R to biologists around me.

There was the Slightly different introduction to R series of blog posts. It used packages that feel somewhat outdated, and today, I don’t think there’s anything even slightly different about advocating RStudio, and teaching ggplot2 from the beginning.

This spawned a couple of seminars in a course for PhD students, which were updated for the Wright lab computation lunches, and eventually turned into a course of its own, given in 2017. It would be fun to update it and give it again.

The last few years, I’ve been using R for reasonably large genome datasets in an HPC environment, and gotten back to the beginnings, I guess, by using Bioconductor a lot more. However, the package that I think epitomises the last years of my R use is AlphaSimR, developed by colleagues in Edinburgh. It’s great to be able to throw together a quick simulation to check how some feature of genetics behaves. AlphaSimR itself is also an example of how far the R/C++ integration has come with Rcpp and RcppArmadillo. Thanks, Chris!

In summary, R is my tool of choice for almost anything. I hope we’ll still be using it, in new and interesting ways, in another ten years. Thank you, R core team!

# Using R: from plyr to purrr, part 0 out of however many

This post is me thinking out loud about applying functions to vectors or lists and getting data frames back.

Using R is an ongoing process of finding nice ways to throw data frames, lists and model objects around. While tidyr has arrived at a comfortable way to reshape dataframes with pivot_longer and pivot_wider, I don’t always find the replacements for the good old plyr package as satisfying.

Here is an example of something I used to like to do with plyr. Don’t laugh!

Assume we have a number of text files, all in the same format, that we need to read and combine. This arises naturally if you run some kind of analysis where the dataset gets split into chunks, like in genetics, where chunks might be chromosomes.

```
## Generate vector of file names
files <- paste("data/chromosome", 1:20, ".txt", sep = "")

library(plyr)

## Read each file and stack the rows (read.delim assumed here,
## for tab-separated files)
genome <- ldply(files, read.delim)
```

This gives us one big data frame, containing the rows from all those files.

If we want to move on from plyr, what are our options?

We can go old school with base R functions lapply and Reduce.

```
library(readr)

## Read each file into a list of data frames, then fold them together
chromosomes <- lapply(files, read_tsv)
genome <- Reduce(rbind, chromosomes)
```

Here, we first let lapply read each file and store it in a list. Then we let Reduce fold the list with rbind, which binds the data frames in the list together, one below the other.

If that didn’t make sense, here it is again: lapply maps a function to each element of a vector or list, collecting the results in a list. Reduce folds the elements in a list together, using a function that takes in two arguments. The first argument will be the results it’s accumulated so far, and the second argument will be the next element of the list.

In the end, this leaves us, as with ldply, with one big data frame.
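To make the folding concrete, here is a toy, in-memory version of the pattern, with made-up data frames standing in for the files so it runs without anything on disk:

```
## Toy stand-ins for the data frames that lapply would read from disk
chunks <- list(data.frame(chromosome = 1, position = 1:2),
               data.frame(chromosome = 2, position = 3:4))

## Fold the list together: rbind(chunks[[1]], chunks[[2]]), and so on
combined <- Reduce(rbind, chunks)

nrow(combined)  ## 4: two rows from each chunk
```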

We can also use purrr’s map_dfr. This seems to be the most elegant contemporary solution:

```
library(purrr)

## Read each file and bind the resulting data frames by rows
genome <- map_dfr(files, read_tsv)
```

map_dfr, like good old ldply, will map over a vector or list and collect the resulting data frames. The ”r” in the name means adding the next data frame as rows. There is also a ”c” version (map_dfc) for adding as columns.
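A minimal toy illustration of the row/column difference, with made-up data frames rather than the chromosome files:

```
library(purrr)

## Two small data frames standing in for per-chromosome results
chunks <- list(data.frame(x = 1:2), data.frame(x = 3:4))

by_rows <- map_dfr(chunks, identity)  ## stacked as rows: 4 rows, 1 column
by_cols <- map_dfc(chunks, identity)  ## bound as columns: 2 rows, 2 columns
```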

# #TidyTuesday: horror films, squirrels and commuters

Tidy Tuesday is a fun weekly activity where a lot of R enthusiasts make different visualisations, and possibly modelling, of the same dataset. You can read more about it at their Github page. I participated for three weeks, and here is a recap. I will show excerpts of the code, but you can read the whole thing by clicking through to Github.

# 2019-10-22 Horror films

In time for Halloween, we got a dataset with horror film data from IMDB. (Yes, I will be mixing the terms ”film” and ”movie” wildly.)

The first week, I started with making a pretty boring plot, the way I’d normally plot things (white background, small multiples, you know the drill). I wanted to look at distribution over the year, so I plotted what month films are released and the distribution of review scores and budgets each month. After thinking about it for a while, I thought a logarithmic scale would make sense for budgets, that span a huge range. Also, after realising that the budget column actually didn’t contain dollars, but a mix of currencies, I decided not to try to convert, but use only the US dollar budgets.

I don’t often run into dates, so using the date functions from readr and lubridate was new to me, as was the built-in vector month.abb:

```
library(dplyr)
library(egg)
library(ggplot2)
library(ggimage)
library(lubridate)
library(readr)
library(stringr)

## Parse dates

movies$release_parsed <- parse_date(movies$release_date,
                                    format = "%d-%b-%y",
                                    locale = locale("en"))

movies$release_year <- ifelse(is.na(movies$release_parsed),
                              movies$release_date,
                              year(movies$release_parsed))

movies$release_month <- month.abb[month(movies$release_parsed)]
```

Here, we parse the release date, and extract the release year and month, treating films that only have a release year separately.

I also put in means with confidence intervals, like so, and a line for the mean review rating:

```
model <- lm(review_rating ~ release_month, movies)

fit <- data.frame(release_month = month.abb,
                  predict(model,
                          newdata = data.frame(release_month = month.abb),
                          interval = "confidence"),
                  stringsAsFactors = FALSE)

grand_mean_rating <- mean(movies$review_rating,
                          na.rm = TRUE)
```

As an example of the plotting code, here is the middle panel for ratings. As usual with ggplot2, we layer geometries on top of each other (here: violin plots, points with range bars, and a horizontal line), followed by a lot of formatting.

```
plot_rating <- ggplot() +
    geom_violin(aes(x = release_month,
                    y = review_rating),
                fill = "grey",
                colour = NA,
                data = movies) +
    scale_x_discrete(limits = month.abb) +
    geom_pointrange(aes(x = release_month,
                        y = fit,
                        ymax = upr,
                        ymin = lwr),
                    data = fit) +
    geom_hline(yintercept = grand_mean_rating,
               linetype = 2,
               colour = "red") +
    ylim(0, 10) +
    theme_bw(base_size = 12) +
    theme(panel.grid = element_blank()) +
    xlab("") +
    ylab("Review rating")
```

There is similar code for the other two panels. Finally, I used ggarrange from the egg package to put everything together. In summary, most horror films are released in October, probably around Halloween. The review ratings of films released in this horror season are also a tiny bit higher than during the rest of the year, but there is not much of a difference in the budgets.

After that, and after seeing some of the fun horror-themed graphs other people made, I decided to make something more colourful. Here is a plot on the same theme, showing each day and year separately, with an appropriately horrendous colour scheme and a pumpkin icon to indicate the date of Halloween. I like this plot better because it shows more of the data. It shows the increase at Halloween. We also see some spikes at other dates, like 1 January of some years. It also shows how the dataset ends at Halloween 2017.

The code for this plot is mostly a lot of theme formatting. The ggplot2 theme function takes a lot of arguments I’ve never used before.

```
movies$yday <- yday(movies$release_parsed)

daycount <- summarise(group_by(movies, yday, release_year), n = n())
```

First, we turn dates into days of the year, and count the number of film releases.

```
halloween <- yday("2019-10-31")

pumpkin_data <- data.frame(x = halloween,
                           y = -1,
                           image = "pumpkin.png",
                           stringsAsFactors = FALSE)
```

Then, we set up the date of Halloween and a data frame for the pumpkin icon. We’re going to use geom_image from the ggimage package to add this icon to each subplot.

```
breaks <- yday(paste("2019-", 1:12, "-01", sep = ""))

plot_year <- ggplot() +
    geom_point(aes(x = yday,
                   y = n),
               colour = "green",
               data = na.exclude(daycount)) +
    geom_image(aes(x = x,
                   y = y,
                   image = image),
               data = pumpkin_data) +
    facet_wrap(~ release_year,
               ncol = 2) +
    scale_x_continuous(breaks = breaks,
                       labels = month.abb) +
    ylim(-3, NA) +
    labs(caption = "Pumpkin icon by Good Ware from www.flaticon.com.") +
    theme(panel.grid = element_blank(),
          strip.background = element_blank(),
          text = element_text(family = "mono",
                              colour = "grey",
                              size = 16),
          axis.text = element_text(family = "mono",
                                   colour = "green",
                                   size = 14),
          axis.ticks = element_line(colour = "green"),
          strip.text = element_text(family = "mono",
                                    colour = "grey",
                                    size = 16),
          plot.background = element_rect(fill = "black"),
          panel.background = element_rect(fill = "black")) +
    xlab("") +
    ylab("Horror films released on this day") +
    ggtitle("When horror films are released")
```

A lot of other people made graphs that highlight the increase in horror film releases around Halloween in different ways. Here are some that I like:

And, looking deeper, there is a pattern within months too:

Finally, I also like this plot, that makes a case for a U-shaped relationship between budget and rating:

And for contrast, another that makes a different case with the same data:

This seems to be a recurrent theme when it comes to interpretation and quantitative analysis in the Tidy Tuesday datasets. People make different modeling choices, or visualisation choices (which are modeling choices) about what to lump together, what to separate into bins, how to transform the data, and how to show uncertainty. In some cases, as with the pattern of film releases around Halloween, they all find similar results. In some other cases, they don’t.

# 2019-10-28 NYC Squirrel Census

This week, the data was about the location and activities of squirrels in New York’s Central Park at certain times. I had this vision of an animated map of squirrel locations. I ended up with an animation, but no map. The colour of the squirrel icon shows the main fur colour of the squirrels (grey, black, cinnamon), and the size shows adults and juveniles.

I had never used gganimate before (only animation, as in this post about the Game of Life), but I had seen Thomas Lin Pedersen tweet about it, and I wanted to try.

```
library(dplyr)
library(gganimate)
library(ggimage)
library(ggplot2)
library(readr)

## Parse the date
squirrels$date_parsed <- parse_date(as.character(squirrels$date), format = "%m%d%Y")

## Give each observation a unique ID (to use as group in the
## animation, so that points fade rather than turning into one another)
squirrels$key <- 1:nrow(squirrels)

## Associate the different squirrel colours with the filenames of
## icons in different colours (manually filled with GIMP).
squirrels$image <- "squirrel.png"
squirrels$image[squirrels$primary_fur_color == "Cinnamon"] <- "squirrel_cinnamon.png"
squirrels$image[squirrels$primary_fur_color == "Gray"] <- "squirrel_grey.png"
squirrels$image[is.na(squirrels$primary_fur_color)] <- NA
```

Again, we need to parse the date. We already have latitude and longitude. We need a unique identifier for each observation, to tell gganimate that we want each squirrel to be in its own group. Then, we associate squirrel colours with three different files with a squirrel icon in different colours.

First, we make two image scatterplot layers, setting the sizes of adults and juveniles manually. The colour is dealt with by mapping the image column containing the file names to the image aesthetic. We add some formatting, and then the transition_states layer, which is where the graph turns from still and boring to magical moving pictures. This will animate a series of discrete ”states”, which here consist of the date pasted together with the shift (AM or PM squirrel observation shift). The special ”{closest_state}” variable in the title string puts this state name as the plot title.

```
plot_colour <- ggplot() +
    geom_image(aes(y = long, x = lat, image = image, group = key),
               size = 0.04,
               data = filter(squirrels, age == "Adult")) +
    geom_image(aes(y = long, x = lat, image = image, group = key),
               size = 0.03,
               data = filter(squirrels, age == "Juvenile")) +
    theme_bw(base_size = 16) +
    theme(panel.grid = element_blank()) +
    xlab("Latitude") +
    ylab("Longitude") +
    labs(title = "{closest_state}",
         caption = "Data from NYC Squirrel Census. Squirrel icon made by Freepik from www.flaticon.com.") +
    transition_states(paste(date_parsed, shift),
                      state_length = 2,
                      transition_length = 1)

## Render it and write to file
animate(plot_colour,
        fps = 10,
        nframes = 400,
        end_pause = 20,
        rewind = FALSE,
        width = 1000,
        height = 1000)
```

I was faffing around unsuccessfully with different map packages to try to find something of Central Park. It seems ggmap is the way to go. Several other participants made nice maps:

However, I think this was my favourite:

https://github.com/ryantimpe/TidyTuesday/blob/master/2019w44/2019w44.R

The original Squirrel Census Report seems to be an amazing object, too, with a beautiful map.

# 2019-11-05 Biking and walking to work in the US (and Sweden)

This week I felt I had to make a map. The end result doesn’t look like much, but it took a while. Here are the average percentages of commuters who walk and bike to work in different US states 2008-2012 with data from the American Community Survey:

```
library(dplyr)
library(ggplot2)
library(usmap)

## Map data from the usmap package
state_map <- us_map(regions = "state")

## There are some incompletely labelled states; fix them
missing <- setdiff(commute$state, state_map$full)

commute$state_modified <- commute$state
commute$state_modified[commute$state == "Ca"] <- "California"
commute$state_modified[commute$state == "Massachusett"] <- "Massachusetts"
```

We get map coordinates for the US states from the usmap package (because the one in maps doesn’t have Alaska and Hawaii).

Then we fix some mislabelling in the data.

```
## Get the average per state
state_average <- summarise(group_by(commute, state_modified, mode),
                           average = sum(percent * n)/sum(n))

## Combine averages and coordinates
combined <- inner_join(state_average,
                       state_map,
                       by = c("state_modified" = "full"))
```

We take a weighted average of the percentages per state and join the state averages with the state map coordinates. The map I posted on Twitter didn’t weight the average, but I think that is a bit better. There is still the issue that states have different populations and different distributions of large and small cities, but that’s the nature of things. In summary, there is not much biking going on, but some more walking to work.

```
plot_map <- ggplot() +
    geom_polygon(aes(x = x, y = y, fill = average, group = group),
                 colour = "black",
                 data = combined) +
    facet_wrap(~ mode) +
    scale_fill_continuous(low = "white",
                          high = "blue",
                          name = "Percent commuters") +
    theme_bw(base_size = 16) +
    theme(panel.grid = element_blank(),
          strip.background = element_blank(),
          axis.text = element_blank(),
          axis.ticks = element_blank(),
          legend.position = "bottom") +
    xlab("") +
    ylab("") +
    labs(caption = "Cycling and walking to work 2008-2012 in the American Community Survey.")
```

The US seems to live up to its reputation as a motorised country. But I have no feeling for the scale of the data. For comparison, here is a map of Sweden with some not too recent data (2005-2006, from this VTI report). The map is from the swemap package.

# Using R: plotting the genome on a line

Imagine you want to make a Manhattan-style plot or anything else where you want a series of intervals laid out on one axis after one another. If it’s actually a Manhattan plot you may have a friendly R package that does it for you, but here is how to cobble the plot together ourselves with ggplot2.

We start by making some fake data. Here, we have three contigs (this could be your chromosomes, your genomic intervals or whatever) divided into one, two and three windows, respectively. Each window has a value that we’ll put on the y-axis.

```
library(dplyr)
library(ggplot2)

data <- data_frame(contig = c("a", "a", "a", "b", "b", "c"),
                   start = c(0, 500, 1000, 0, 500, 0),
                   end = c(500, 1000, 1500, 500, 1000, 200),
                   value = c(0.5, 0.2, 0.4, 0.5, 0.3, 0.1))
```

We will need to know how long each contig is. In this case, if we assume that the windows cover the whole thing, we can get this from the data. If not, say if the windows don’t go up to the end of the chromosome, we will have to get this data from elsewhere (often some genome assembly metadata). This is also where we can decide in what order we want the contigs.

```
contig_lengths <- summarise(group_by(data, contig), length = max(end))
```

Now, we need to transform the coordinates on each contig to coordinates on our new axis, where we lay the contigs after one another. What we need to do is to add an offset to each point, where the offset is the sum of the lengths of the contigs we’ve laid down before this one. We make a function that takes three arguments: two vectors containing the contig of each point and the position of each point, and also the table of lengths we just made.

```
flatten_coordinates <- function(contig, coord, contig_lengths) {
    coord_flat <- coord
    offset <- 0

    for (contig_ix in 1:nrow(contig_lengths)) {
        on_contig <- contig == contig_lengths$contig[contig_ix]
        coord_flat[on_contig] <- coord[on_contig] + offset
        offset <- offset + contig_lengths$length[contig_ix]
    }

    coord_flat
}
```

Now, we use this to transform the start and end of each window. We also transform the vector of the length of the contigs, so we can use it to add vertical lines between the contigs.

```
data$start_flat <- flatten_coordinates(data$contig,
                                       data$start,
                                       contig_lengths)
data$end_flat <- flatten_coordinates(data$contig,
                                     data$end,
                                     contig_lengths)
contig_lengths$length_flat <- flatten_coordinates(contig_lengths$contig,
                                                  contig_lengths$length,
                                                  contig_lengths)
```
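As an aside, the offsets that the loop accumulates can also be computed in one go with cumsum: the offset of each contig is the cumulative length of the contigs before it. Here is a self-contained vectorised sketch (the helper name flatten_coordinates2 is made up, and the contig_lengths table is recreated so the example runs on its own; it is not used in the rest of the post):

```
## Recreate the contig length table from the example data
contig_lengths <- data.frame(contig = c("a", "b", "c"),
                             length = c(1500, 1000, 200))

## Offset of each contig = total length of the contigs laid down before it
offsets <- c(0, cumsum(contig_lengths$length)[-nrow(contig_lengths)])
names(offsets) <- contig_lengths$contig

## Look up each point's offset by contig name and add it
flatten_coordinates2 <- function(contig, coord, offsets) {
    unname(coord + offsets[contig])
}

flatten_coordinates2(c("a", "b", "c"), c(0, 500, 0), offsets)
## [1]    0 2000 2500
```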

It would be nice to label the x-axis with contig names. One way to do this is to take the coordinates we just made for the vertical lines, add a zero, and shift them one position, like so:

```
axis_coord <- c(0, contig_lengths$length_flat[-nrow(contig_lengths)])
```

Now it’s time to plot! We add one layer of points for the values on the y-axis, where each point is centered on the middle of the window, followed by a layer of vertical lines at the borders between contigs. Finally, we add our custom x-axis, and also some window dressing.

```
plot_genome <- ggplot() +
    geom_point(aes(x = (start_flat + end_flat)/2,
                   y = value),
               data = data) +
    geom_vline(aes(xintercept = length_flat),
               data = contig_lengths) +
    scale_x_continuous(breaks = axis_coord,
                       labels = contig_lengths$contig,
                       limits = c(0, max(contig_lengths$length_flat))) +
    xlab("Contig") + ylim(0, 1) + theme_bw()
```

And this is what we get:

I’m sure your plot will look more impressive, but you get the idea.

# Using R: When using do in dplyr, don’t forget the dot

There will be a few posts about switching from plyr/reshape2 for data wrangling to the more contemporary dplyr/tidyr.

My most common use of plyr looked something like this: we take a data frame, split it by some column(s), and use an anonymous function to do something useful. The function takes a data frame and returns another data frame, both of which could very possibly have only one row. (If, in fact, it has to have only one row, I’d suggest an assert_that() call as the first line of the function.)

```
library(plyr)
results <- ddply(some_data, "key", function(x) {
    ## do something; return data.frame()
})
```

Or maybe, if I felt serious and thought the function would ever be used again, I’d write:

```
calculate <- function(x) {
    ## do something; return data.frame()
}
result <- ddply(some_data, "key", calculate)
```

Rinse and repeat over and over again. For me, discovering ddply was like discovering vectorization, but for data frames. Vectorization lets you think of operations on vectors, without having to think about their elements. ddply lets you think about operations on data frames, without having to think about rows and columns. It saves a lot of thinking.

The dplyr equivalent would be do(). It looks like this:

```
library(dplyr)
grouped <- group_by(some_data, key)
result <- do(grouped, calculate(.))
```

Or once again with magrittr:

```
library(magrittr)
some_data %>%
    group_by(key) %>%
    do(calculate(.)) -> result
```

(Yes, I used the assignment arrow from the left hand side to the right hand side. Roll your eyes all you want. I think it’s in keeping with the magrittr theme of reading from left to right.)

One important thing here, which got me at first: There has to be a dot! Just passing the function name, as one would have done with ddply, will not work:

```
grouped <- group_by(some_data, key)
## will not work: Error: Results are not data frames at positions ...
try(result <- do(grouped, calculate))
```

Don’t forget the dot!

# It seems dplyr is overtaking correlation heatmaps

(… on my blog, that is.)

For a long time, my correlation heatmap with ggplot2 was the most viewed post on this blog. It still leads the overall top list, but by far the most searched and visited post nowadays is this one about dplyr (followed by its sibling about plyr).

I fully support this, since data wrangling and reorganization logically comes before plotting (especially in the ggplot2 philosophy).

But it’s also kind of a shame, because it’s not a very good dplyr post, and the one about the correlation heatmap is not a very good ggplot2 post. Thankfully, there is a new edition of the ggplot2 book by Hadley Wickham, and a new book by him and Garrett Grolemund about data analysis with modern R packages. I’m looking forward to reading them.

Personally, I still haven’t made the switch from plyr and reshape2 to dplyr and tidyr. But here is the updated tidyverse-using version of how to quickly calculate summary statistics from a data frame:

```
library(tidyr)
library(dplyr)
library(magrittr)

data <- data.frame(sex = c(rep(1, 1000), rep(2, 1000)),
                   treatment = rep(c(1, 2), 1000),
                   response1 = rnorm(2000, 0, 1),
                   response2 = rnorm(2000, 0, 1))

gather(data, response1, response2, value = "value", key = "variable") %>%
    group_by(sex, treatment, variable) %>%
    summarise(mean = mean(value), sd = sd(value))
```

Row by row we:

5-8: Simulate some nonsense data.

10: Transform the simulated dataset to long form. This means that the two variables `response1` and `response2` get collected into one column, which will be called ”value”. The column ”variable” will indicate which variable each row belongs to. (gather is tidyr’s version of melt.)

11: Group the resulting dataframe by sex, treatment and variable. (This is like the second argument to d*ply.)

12: Calculate the summary statistics.

```
Source: local data frame [8 x 5]
Groups: sex, treatment [?]

sex treatment  variable        mean        sd
(dbl)     (dbl)     (chr)       (dbl)     (dbl)
1     1         1 response1 -0.02806896 1.0400225
2     1         1 response2 -0.01822188 1.0350210
3     1         2 response1  0.06307962 1.0222481
4     1         2 response2 -0.01388931 0.9407992
5     2         1 response1 -0.06748091 0.9843697
6     2         1 response2  0.01269587 1.0189592
7     2         2 response1 -0.01399262 0.9696955
8     2         2 response2  0.10413442 0.9417059
```
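Since gather has since been superseded, here is a sketch of the same summary using tidyr’s newer pivot_longer, mentioned above; this is my update, not the original code:

```
library(tidyr)
library(dplyr)

## Same nonsense data as above
data <- data.frame(sex = c(rep(1, 1000), rep(2, 1000)),
                   treatment = rep(c(1, 2), 1000),
                   response1 = rnorm(2000, 0, 1),
                   response2 = rnorm(2000, 0, 1))

## pivot_longer replaces gather: names_to is the old key, values_to the value
result <- data %>%
    pivot_longer(c(response1, response2),
                 names_to = "variable",
                 values_to = "value") %>%
    group_by(sex, treatment, variable) %>%
    summarise(mean = mean(value), sd = sd(value))

result  ## a tibble with 8 rows: one per sex/treatment/variable combination
```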