Sometimes a theorist’s job is to be wrong

By Luna Zagorac

As a growing tsunami of data pours through telescopes and observatories, theorists like me are faced with the statistical reality that most of our theories about that data will be wrong – in the best possible ways.

[Image: a hand erasing writing on a chalkboard]

Once upon a time at graduate school, my advisor sat me down for a conversation that would shape my future.


I had just completed my first-ever science paper and the time had come to plan the topic of my PhD thesis. The choice boiled down to two contenders: joining an international collaboration mapping the universe’s large-scale structure or pursuing an independent project investigating a type of dark matter I’d never heard of before. 


I explained to my advisor, at length, how I felt torn between what seemed to be the sensible career choice (the big universe-mapping team) and my gut inclination (the dark matter project). As I shared all this in a panicky stream-of-consciousness monologue, he listened patiently, occasionally nodding with a bemused smile. 


“Look,” he said to me once I had finally run out of breath, “what you are essentially asking is what the role of a theorist is.” I nodded enthusiastically. “A theorist’s job,” he continued, “in some sense, is to be wrong.”

  

I sat, stunned and silent. To be wrong? Surely not. A theorist is only as good as they are right… right?


He went on: “The thing is, your theory can be beautiful, mathematically complete, and compelling, and it can still not be the universe that we happen to live in.”


“The trick, then,” he concluded, “is not just to do good work and hope we are correct, but to find places of intersection with other colleagues – whether theorists, experimentalists, or observers – and find the communal and the useful in the work, whatever it may be.” 


For the five years that followed this conversation, it echoed in my head. I occasionally discussed it with colleagues, who met me with passionate agreement, confusion, or, occasionally, horror. 


But it wasn’t until this year that it came into sharper focus during a presentation by Dr. Robert Thacker at the Canadian Association of Physicists (CAP) Congress. In the presentation, Dr. Thacker looked back on the past 20 years of computational astrophysics and made predictions about what may come in 20 more. 


His prediction that astrophysics as a whole is careening into the state of being “data-rich and theory-poor” was met with approving murmurs and nods. This caught me mildly off guard; I myself had never even contemplated such a possibility. It stuck with me, and so I repeated it often; the same approval echoed in other rooms.


On the one hand, the field being “data-rich and theory-poor” is good news for theorists. In a field inundated with computational capabilities and data but poor in theoretical models, our long-term employment prospects are looking up. On the other hand, in a field that is data-rich and theory-poor, theorists will have to get very comfortable with the statistical inevitability of being wrong.


The universe is, by definition, singular, though there are some compelling arguments in favour of a multiverse. At any rate, we know of only a single universe right now. And if the universe is one, then the space of theories that correctly describes it is necessarily finite.


This understanding is entrenched so deeply that it is implicit in a maxim known as the “cosmological principle.” The principle states that, at sufficiently large scales, the universe is the same for all observers. The more technical definition invokes terms like “homogeneous” and “isotropic,” meaning that statistically large patches of the universe (so large that they contain many galaxies) are the same everywhere and in all directions.

[Image: cover of the textbook "The Road to Galaxy Formation" by William C. Keel]

Astronomer William Keel put it like this in his textbook The Road to Galaxy Formation: “[The cosmological principle] amounts to the strongly philosophical statement that the part of the universe which we can see is a fair sample, and that the same physical laws apply throughout. In essence, this in a sense says that the universe is knowable and is playing fair with scientists.”


Therefore, in a universe that is playing fair with scientists – even as the number of cosmologists studying such theories grows and the universe itself expands – the number of correct theories should not grow. Statistically, most theorists must be wrong most of the time.


This is not a reason for despair; being wrong is not the same as failing. 


Every time a theory turns out to be wrong, the entire field is another step closer to being right. While the correct theory space is finite, the currently possible theory space is enormous. This means it will take a lot of scientists working together in different ways to be wrong about enough things for the right answers to become clear. 


And the more unlike the universe we are – the more inhomogeneous and anisotropic – the more headway we make. If we are all wrong in the same way and in the same direction, then we risk getting stuck, like chasing the geocentric model of the solar system or the “plum pudding” model of the atom. If instead we are wrong in all different ways and in different directions, we won’t each be less wrong – but we will be more likely to stumble onto something that’s right.


We already know how to deal with precisely this situation in statistics. When we don’t know the value of some parameter in a scenario – say, the distances to the galaxies in a certain observation – we use a process called marginalization. In this case, marginalizing would mean summing over all the possible values of the distances, weighted by their probabilities, before doing further calculations or modelling.


This is the science equivalent of taking a poll of the mathematical models and observational techniques and assuming the real truth lies somewhere in the crowd. It’s not as powerful as knowing exactly what the distances are, but if we are convinced the correct answer is in the sum, we preserve some of that correctness. 
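To make that concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of marginalizing over an uncertain galaxy distance. The Gaussian distance distribution and the flux-to-luminosity model below are illustrative assumptions, not anything from a real survey or analysis.

```python
import numpy as np

# Grid of possible distances to a galaxy, in megaparsecs (Mpc).
distances = np.linspace(80.0, 120.0, 401)

# An assumed probability distribution over those distances: a Gaussian
# centred on 100 Mpc with a 5 Mpc uncertainty, normalised to sum to 1.
weights = np.exp(-0.5 * ((distances - 100.0) / 5.0) ** 2)
weights /= weights.sum()

# A quantity that depends on the unknown distance: the luminosity
# inferred from a measured flux, L = 4 * pi * d^2 * F.
flux = 2.0e-17           # W / m^2, an assumed measurement
mpc_to_m = 3.0857e22     # metres per megaparsec
luminosity = 4.0 * np.pi * (distances * mpc_to_m) ** 2 * flux

# Marginalization: sum the distance-dependent prediction weighted by the
# probability of each distance, rather than picking one "best" distance.
marginalized_luminosity = np.sum(weights * luminosity)
print(f"Marginalized luminosity estimate: {marginalized_luminosity:.3e} W")
```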


However, if a lot of the predicted distances were the same – and that prediction happened to be wrong – marginalizing over them would just preserve that wrongness. It would yield a probability distribution that very strongly favours one answer for the distance (a majority of models polled agree!), but if that answer happens to be wrong, the apparent certainty of the result may be a figment of self-propagating errors.
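Here is a toy version of that failure mode, again with invented numbers: if nearly every model in the poll agrees on the same biased distance, the averaged answer inherits the bias while looking reassuringly precise.

```python
import numpy as np

true_distance = 100.0  # Mpc, the (unknown) truth in this toy example

# Fifty hypothetical models: most agree on a biased 120 Mpc prediction,
# with only a handful of dissenters near the true value.
predictions = np.full(50, 120.0)
predictions[:3] = [95.0, 102.0, 98.0]

# "Polling" the models with equal weights.
estimate = predictions.mean()
scatter = predictions.std()

print(f"Consensus distance: {estimate:.1f} +/- {scatter:.1f} Mpc")
print(f"Bias relative to truth: {estimate - true_distance:+.1f} Mpc")
```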


This is why, when a parameter is very uncertain or theoretically unconstrained, we want to explore a large number of possibilities. In essence, this is what drew me towards choosing the dark matter project as my PhD topic: there is so little certainty in what dark matter actually is, and I felt called to dig into a small but fascinating corner of its possibilities, even as it strays a bit afield from the preferred description. When no clear solution exists, there may be statistical power in the lack of consensus. 


It’s worth remembering that sometimes we set out to make progress on one question and end up answering another. The methods, algorithms, ideas, and relationships we develop along the way may not yield results in our particular direction, but they can be vital in making a breakthrough elsewhere.


One scientist’s disappointment may be another’s breakthrough. The era of the lone genius (if ever it truly existed) is over. It is only together, in the bulk motion of our fields, that we can solve the toughest problems – or not solve them, but in ways that advance knowledge. This is the “useful” and the “communal” my advisor described once upon a time.

