Our puny scientist minds can’t grasp pizza deals (Guest & Martin 2021; postscript to data-driven science)

In the last post about conceptions of data-driven science, I wrote about the idea that reality is so complicated that we might need computers to help us reason about it. To be fair, even simple relationships in small data are too much for our puny brains to handle intuitively. We need to do the calculations. Here is Guest & Martin’s Pizza Problem of scientific reasoning, as presented in a thread by @cretiredroy on Twitter:

I ordered a 9-inch Pizza.
After a while, the waiter brought two 5-inch pizzas and said, the 9-inch pizza was not available and he was giving me two 5-inches Pizzas instead, and that I am getting 1 inch more for free!

The area of a circle is pi times the square of its radius. Pizza sizes are quoted as diameters, but since only the ratio of the sizes will matter here, we can plug the numbers in directly. A number n of 5-inch pizzas contains less pizza than one 9-inch pizza if:

n \pi \cdot 5^2 < \pi \cdot 9^2

Which we can rearrange to:

n < (\frac 9 5)^2 = 3.24

@cretiredroy again:

I said that even if he gave three pizzas, I would still lose-out.
”How can you say you are giving me an extra inch for free?”
The owner was speechless.
He finally gave me 4 pizzas.

Take Maths seriously!

Guest & Martin (2021) use this simple problem as their illustration for computational model building: two 5-inch pizzas for the same price as one 9-inch pizza is not a good deal, because the 9-inch pizza contains more food. As I said before, I don’t think in inches, and Swedish pizzas usually come in only one size, so I have no intuitions to be misled here; but the result is apparently counterintuitive to many people who do have intuitions about inches and pizzas.

(In Guest & Martin's paper, the numbers are actually 12 and 18 inches. We can generalise to any radii of big and small pizzas:

n < (\frac {r_{big}} {r_{small}})^2

Plugging in 12 and 18 gives 2.25, meaning that three 12-inch pizzas would be needed to sweeten the deal.)
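To check the sums, here is a minimal Python sketch of the break-even calculation; it is my own illustration, not code from the paper. Since only the ratio of sizes enters, passing radii or diameters works equally well.

import math

def break_even(size_big, size_small):
    # Number of small pizzas with the same area as one big pizza.
    # Only the ratio of sizes matters, so radii and diameters both work.
    return (size_big / size_small) ** 2

def smallest_better_deal(size_big, size_small):
    # Smallest whole number of small pizzas that beats the big one.
    return math.floor(break_even(size_big, size_small)) + 1

print(break_even(9, 5))              # 3.24
print(smallest_better_deal(9, 5))    # 4, as in the Twitter thread
print(break_even(18, 12))            # 2.25
print(smallest_better_deal(18, 12))  # 3, as in Guest & Martin's example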

We assume that we already agree that the relevant quantity is the area of pizza that we get. If we were instead out to optimise the amount of crust, the circumference, which is 2 times pi times the radius, might be more relevant. In that case:

2 n \pi \cdot 5 < 2 \pi \cdot 9

n < \frac 9 5 = 1.8

And the two smaller pizzas are a better deal with respect to crust circumference.
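The same kind of check works for the crust criterion, which is linear rather than quadratic (again my own sketch, not from the paper):

def break_even_crust(size_big, size_small):
    # Number of small pizzas with the same total circumference as one big pizza.
    return size_big / size_small

print(break_even_crust(9, 5))  # 1.8, so two 5-inch pizzas already win on crust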

What Guest & Martin call "the pizza problem" is the risk of inconsistencies in our scientific understanding that arise because we cannot intuitively grasp the implications of our models. They believe that it can be ameliorated by computational modelling: if we have to make the calculations, we will notice, and have to deal with, assumptions we would otherwise not think about.

This is a different benefit of doing the calculations than the intuition building I wrote about in my post about the Monty Hall problem:

The outcome of the simulation is less important than the feeling that came over me as I was running it, though. As I was taking on the role of the host and preparing to take away one of the losing options, it started feeling self-evident that the important thing is whether the first choice is right. If the first choice is right, holding is the right strategy. If the first choice is wrong, switching is the right option. And the first choice, clearly, is only right 1/3 of the time.
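If you want to rerun that experiment, here is a minimal Monty Hall simulation in Python; this is my own sketch under the standard rules of the puzzle, not the code from the original post.

import random

def monty_hall(switch, n_trials=100_000):
    # Estimate the win rate when holding or switching.
    wins = 0
    for _ in range(n_trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a losing door that the contestant did not pick.
        # (Which of two losing doors he opens does not affect the result.)
        opened = next(d for d in range(3) if d != prize and d != choice)
        if switch:
            # Switch to the single remaining closed door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / n_trials

print(monty_hall(switch=False))  # about 1/3
print(monty_hall(switch=True))   # about 2/3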

I don’t think anyone is saying that it is impossible to intuitively grasp the case of the pizza deals. If you are used to relationships with squares, you might do it as quickly as @cretiredroy in the Twitter thread. However, even simple power-law relationships and probability experiments are tricky for us to reason about when we are naive to them, and that should give us pause about our ability to reason verbally about new causal patterns. We might need some help.

Guest & Martin use the pizza example to illustrate their model of science as a hierarchy of frameworks, theories, specifications, implementations, hypotheses and data. The "framework" contains concepts of pizza, food and order, leading to the idea of maximising the amount of food per order. Our decision to care about the amount of pizza and not about crust circumference belongs here. They describe two competing "theories": one that we need to maximise surface area and one that we need to maximise the number of pizzas. The specification is the model of the area of a circle, and the implementation is plugging the numbers into it.

They argue that the "theory", "specification" and "implementation" steps are often omitted, and that skipping them (prematurely jumping ahead to data collection) leads to inconsistencies that could have been avoided by being explicit about each step. That is, in the pizza case, one should first write down the equations for the surface area of the pizzas and compute the results, before ordering the two deals and weighing the pizzas.

Just writing down the equations is not enough. In their talk on the topic for Glasgow ReproducibiliTea, Guest & Martin (2020) emphasise that the pizza problem is an issue even when we already have a formal model that describes the problem well. In the pizza case, everyone agrees on the maths of the area of a circle, but that does not prevent the result from being counterintuitive. The formal model does not actually resolve the confusion until it is used to compute the areas of the two pizza orders.

I wanted to illustrate this post with a photo of a pizza from the chippy in Penicuik, where you get a complimentary box of chips with the larger size pizza, but I couldn’t find it. Instead, here are some pizzas from #GeneticistsTweetingAboutPizza.

References

@cretiredroy. Twitter thread. https://twitter.com/cretiredroy/status/1542351846903529472

Guest, O., & Martin, A. E. (2021). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science, 16(4), 789-802.

Guest, O., & Martin, A. E. (2020). How computational modeling can force theory building in psychological science. Talk for Glasgow ReproducibiliTea. Recording: https://www.youtube.com/watch?v=_WV7EFvFAB8

Theory in genetics

A couple of years ago, Brian Charlesworth published this essay in Heredity about the value of theory in genetics. He liked the same Sturtevant & Beadle quote that I liked.

Two outstanding geneticists, Alfred Sturtevant and George Beadle, started their splendid 1939 textbook of genetics (Sturtevant and Beadle 1939) with the remark ‘Genetics is a quantitative subject. It deals with ratios, and with the geometrical relationships of chromosomes. Unlike most sciences that are based largely on mathematical techniques, it makes use of its own system of units. Physics, chemistry, astronomy, and physiology all deal with atoms, molecules, electrons, centimeters, seconds, grams—their measuring systems are all reducible to these common units. Genetics has none of these as a recognizable component in its fundamental units, yet it is a mathematically formulated subject that is logically complete and self contained’.

This statement may surprise the large number of contemporary workers in genetics, who use high-tech methods to analyse the functions of genes by means of qualitative experiments, and think in terms of the molecular mechanisms underlying the cellular or developmental processes, in which they are interested. However, for those who work on transmission genetics, analyse the genetics of complex traits, or study genetic aspects of evolution, the core importance of mathematical approaches is obvious.

Maybe this comes as a surprise to some molecularly minded biologists; but I doubt that those working adjacent to a field called ”biophysics”, or trying to understand what on Earth a ”t-distributed stochastic neighbor embedding” does to turn single-cell sequences into colourful blobs, will have missed that there are quantitative aspects to genetics.

Anyways, Sturtevant & Beadle (and Charlesworth) are thinking of another kind of quantitation: they don’t just mean that maths is useful to geneticists, but that genetics is a particular kind of abstract science with its own concepts. It’s the distinction between viewing genetics as chemistry and genetics as symbols. In this vein, Charlesworth distinguishes between statistical estimation and mathematical modelling in genetics, and goes on to give examples of the latter through an anecdotal history of models of genetic variation, eventually going deeper into linkage disequilibrium. It’s a fun read, but it doesn’t really live up to its title by spelling out actual arguments for mathematical models, other than the observation that they have been useful in population genetics.

The hypothetical recurring reader will know this blog’s position on theory in genetics: it is useful, not just for theoreticians. Consequently, I agree with Charlesworth that formal modelling in genetics is a good thing, and that there is (and ought to be more of) constructive interplay between data and theory. I like that he suggests that mathematical models don’t even have to be that sophisticated to be useful; even if you’re not a mathematician, you can sometimes improve your understanding by doing some sums. He then takes that back a little by telling a joke about how John Maynard Smith’s paper on hitch-hiking was so difficult that only two researchers in the country were smart enough to understand it. The point still stands. I would add that this applies to even simpler models than I suspect Charlesworth had in mind. Speaking from experience, a few pseudo-random draws from a binomial distribution can sometimes clear your head about a genetic phenomenon, and while this probably won’t amount to any great advances in the field, it might save you days of fruitless faffing.
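To make that concrete, here is the kind of throwaway simulation I have in mind; a toy example of my own, not Charlesworth’s: how much does a textbook 3:1 Mendelian ratio wobble in crosses of realistic size?

import random

def dominant_counts(n_offspring, n_crosses, p=0.75, seed=1):
    # Number of dominant-phenotype offspring in each simulated cross;
    # a binomial draw is just a sum of Bernoulli trials.
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n_offspring))
            for _ in range(n_crosses)]

for count in dominant_counts(n_offspring=20, n_crosses=5):
    print(count, "out of 20 dominant")  # the expectation is 15 out of 20

Rerunning with different seeds shows at a glance how wobbly the expected 15 out of 20 can be at this sample size.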

As it happens, I also recently read this paper (Robinaugh et al. 2020) about the value of formal theory in psychology, and in many ways it makes explicit some things that Charlesworth’s essay doesn’t spell out but, I think, implies: we want our scientific theories to explain ”robust, generalisable features of the world” and to represent the components of the world that give rise to those phenomena. Formal models, expressed in precise languages like maths and computational models, are preferable to verbal models, which express the structure of a theory in words, because precise languages make it easier to deduce what behaviour of the target system the model implies. Charlesworth and Robinaugh et al. don’t perfectly agree. For one thing, Robinaugh et al. seem to suggest that a good formal model should be able to generate fake data for comparison with empirical data summaries, and that computational models can serve as explanations, while Charlesworth seems to view simulation as an approximation one sometimes has to resort to.

However, something that occurred to me while reading Charlesworth’s essay was the negative framing of why theory is useful. This is how Charlesworth recommends mathematical modelling in population genetics, approvingly repeating this James Crow quote:

I hope to have provided evidence that the mathematical modelling of population genetic processes is crucial for a proper understanding of how evolution works, although there is of course much scope for intuition and verbal arguments when carefully handled (The Genetical Theory of Natural Selection is full of examples of these). There are many situations in which biological complexity means that detailed population genetic models are intractable, and where we have to resort to computer simulations, or approximate representations of the evolutionary process such as game theory to produce useful results, but these are based on the same underlying principles. Over the past 20 years or so, the field has moved steadily away from modelling evolutionary processes to developing statistical tools for estimating relevant parameters from large datasets (see Walsh and Lynch 2017 for a comprehensive review). Nonetheless, there is still plenty of work to be done on improving our understanding of the properties of the basic processes of evolution.

The late, greatly loved, James Crow used to say that he had no objection to graduate students in his department not taking his course on population genetics, but that he would like them to sign a statement that they would not make any pronouncements about evolution. There are still many papers published with confused ideas about evolution, suggesting that we need a ‘Crow’s Law’, requiring authors who discuss evolution to have acquired a knowledge of basic population genetics.

This is one of the things I prefer about Robinaugh et al.’s account: to them, theory is not mainly about clearing up confusion and wrongness, but about developing ideas by checking their consistency with data, and exploring how they can be modified to be less wrong. And Charlesworth’s anecdotal history of linked selection can be read as sketching a similar path. It’s not a story about some people knowing ”basic population genetics” and being in the right, while others, not knowing it, are confused (even if that surely happens too); it’s about the refinement of models in the face of data, and probably vice versa.

If you listen to someone talking about music theory, or literary theory, they will often defend themselves against the charge that theory drains their domain of joy and creativity. Instead, they will argue that theory helps you appreciate the richness of music, and gives you tools to invent new and interesting music. You stay ignorant of theory at your own peril, not because you risk doing things wrong, but because you risk doing uninteresting rehashes, without even knowing what you’re missing. Or something like that. Adam Neely (”Why you should learn music theory”, YouTube video) said it better. Now, the analogy is not perfect, because the relationship between empirical data and theory in genetics is such that the theory really does try to say true or false things about genetics, in a way that music theory (at least as practised by music theory YouTubers) does not. I still think there is something to be said for theory as a tool for creativity and enjoyment in genetics.

Literature

Charlesworth, B. (2019). In defence of doing sums in genetics. Heredity, 123(1), 44-49.

Robinaugh, D., Haslbeck, J., Ryan, O., Fried, E. I., & Waldorp, L. (2020). Invisible hands and fine calipers: A call to use formal theory as a toolkit for theory construction. Preprint. The paper has since been published in a journal, but I read the preprint.

Various positions

What use is there in keeping a blog if you can’t post your arbitrary idiosyncratic opinions as if you were an authority? Here is a list of opinions about life in the scientific community.

Social media for scientists

People who promote social media for scientists by humblebragging about how they got a glam journal paper because of Twitter should stop. An unknown PhD student from the middle of nowhere must be a lot more likely to get into trouble than get on a paper because of Twitter.

Speaking of that, who thinks that writing an angry letter to someone’s boss is the appropriate response to disagreeing with someone on Twitter? Please stop with that.

Poster sessions

Poster sessions are a pain. Not only do you suffer the humiliation of not being cool enough to give a talk, you also get to haul a poster tube to the conference. The trouble is that we can’t do away with poster sessions, because they fulfil the important function of letting a lot of people contribute to the conference, so that they have a reason to go there.

Now cue comments of this kind: ”That’s not true! I’ve had some of my best conference conversations at poster sessions. Maybe you just don’t know how to make a poster …” It is true that I don’t know how to make a good poster. Regardless, my ad hoc hypothesis for why people say things like this is that they’re already known and connected enough to have good conversations anywhere at the conference, and that the poster served as a signpost for their colleagues to find them.

How can one make a poster session as good as possible? Try to make lots of space, so people won’t have to elbow each other. Try to find a room that won’t be incredibly noisy and full of echoes. Try to avoid having some posters hidden behind pillars and in corners.

Also, don’t organize a poster competition unless you also organize a keynote competition.

Theory

There is way way way too little theory in biology education, as far as I can tell. Much like computer programming, a little of which is recognised as a useful skill to have even for empirically minded biologists who are not going to be programming experts, it is very useful to be able to read a paper without skipping the equations, or to tell whether a paper is throwing dust when it states that ”[unspecified] Theory predicts …” this or that. But somehow, materials for learning theory manage to be even more threatening than computer documentation, which is an impressive feat. If you disagree, please post in the comments a reference to an introduction to coalescent theory that is accessible to, say, a biology PhD student who hasn’t taken a probability course in a few years.

Language corrections

That thing when reviewers suggest that a paper be checked by a native English speaker, when they mean that it needs language editing, is rude. Find a way of phrasing it that won’t offend that one native English speaker who is invariably on the paper, but doesn’t have an English-enough name and affiliation for you!