The only true wisdom is in knowing you know nothing. - Socrates
The first principle is that you must not fool yourself—and you are the easiest person to fool. - Richard Feynman
At least I know that I don't know, question is, are you bozos smart enough to feel stupid? Hope so - Eminem
By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's - Paul Krugman
Humans are really bad at predicting the future. Even worse, we don't realise how bad we are.
That's a pretty potent combination. It's bad enough to be a bad forecaster, but at least a bad forecaster who knows it can be appropriately cautious. A bad forecaster who thinks they're good is in real trouble. As the line often attributed to Mark Twain goes: 'It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.'
But help is at hand. There's now lots and lots of evidence about how bad humans are at forecasting. Even better, it's been neatly summarised in the first few chapters of Philip Tetlock's 'Superforecasting'. If you want to go a step further and prove to yourself that you are bad at forecasting, all you have to do is keep track of your own forecasts. Every time you find yourself predicting something, write down your prediction and your percentage confidence, set a reminder for the date when you'll be able to check whether you were right, and wait to be amazed at your previous self's lack of foresight. [1]
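If you want something slightly more structured than a diary, here's a minimal sketch in Python of what that tracking might look like (the forecasts in it are made up for illustration). It scores resolved predictions with the Brier score: the average squared gap between your stated probability and what actually happened, where 0 is perfect and 1 is maximally wrong.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Forecast:
    question: str                   # what you predicted
    probability: float              # your stated confidence, 0.0 to 1.0
    outcome: Optional[bool] = None  # fill this in when the date arrives

def brier_score(forecasts):
    """Average squared gap between stated probability and outcome.
    0.0 is perfect; 0.25 is what always saying 50% gets you; 1.0 is worst."""
    resolved = [f for f in forecasts if f.outcome is not None]
    return sum((f.probability - float(f.outcome)) ** 2 for f in resolved) / len(resolved)

# Made-up example forecasts, resolved a few months later:
log = [
    Forecast("I'll finish the project by March", 0.9, outcome=False),
    Forecast("Candidate X wins the election", 0.7, outcome=True),
    Forecast("We'll hire two engineers this quarter", 0.8, outcome=False),
]
print(f"Brier score: {brier_score(log):.2f}")  # 0.51 - lower is better
```

A handful of resolved forecasts like this is usually all it takes to be humbled.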
This is good news. Realising how bad we are at predicting the future moves us level with Socrates, Feynman and Eminem. It means we're less likely to rely on bad forecasts and therefore less likely to be surprised by the future. And it means we can start to think seriously about how to get better at forecasting. [2]
But there's also something more interesting here. Why are we so bad at forecasting? Surely a slight edge in predicting the future would give a massive advantage in all kinds of areas of life? So why hasn't evolution solved this problem? Or why isn't getting better at forecasting something that we consider as important as learning to read or write?
1 - An Unancestral Problem
I think the first reason is that the kind of forecasting we now care about is a very modern task that we simply haven't evolved to be good at.
While the complexity of the human environment has exploded over the past five millennia (a blink of an eye in evolutionary terms), human brains have remained almost unchanged, and we are left running hugely outdated software.
In the ancestral environment there was a survival advantage for anyone who could accurately forecast the things that mattered there. Evolution worked its magic and we were the lucky beneficiaries, left with, for example, a set of incredibly precise intuitions about the behaviour of humans in small groups. From the smallest non-verbal cues we have an incredible ability to predict how someone is going to behave and to forecast how they'll react to our actions.
But modern forecasting problems often revolve around things that simply didn't exist in that ancestral environment. We should expect to be bad at forecasting the behaviour of nation states, companies and bureaucracies because, until a few hundred years ago, these things didn't exist: there was no survival advantage for any human who could predict their behaviour more accurately. Evolution has had no time to select for this ability, so we shouldn't expect to be good at it.
Exponential growth is a particularly interesting example. While it's been present in the ancestral environment for as long as humanity has existed, it has only recently become relevant on human timescales. [3] So it's only recently become an advantage to successfully predict and model this sort of growth. And it shows: we can predict the flight of a thrown object almost perfectly, but from intuition alone we're useless at predicting the growth of a company or the spread of a pandemic. That probably explains a lot: from how terrible most forecasting around COVID was, to how bad we are at reckoning with compound interest. [4]
2 - A Very Long Feedback Loop
But that can't be all of it. There are lots of unnatural things that we can do very well. Skiing is a fairly modern invention, but people can get good at it.
Sure, when you start out you'll be pretty bad, but you can work it out very quickly. The intuitive thing is to lean backwards, which sends you flying off down the slope out of control; it's the inevitable first mistake. Eventually you'll try leaning forwards, find you get more control, and start doing more of it.
It's a simple loop: you try something new, see what effect it has, and then do more of the things that work and less of the things that don't. A few quick trips around this loop will have you skiing well in no time. [5]
Sure, you can speed things up by having an instructor point you in the right direction. This will accelerate you through some of the early trial and error, helping you to avoid common mistakes. But importantly you don't need to: by just tinkering and experimenting yourself you can make a huge amount of progress. Despite skiing being a very unnatural activity that evolution hasn't given us any skill at, the best skiers are staggeringly impressive.
The difference with forecasting is the length of the feedback loop.
Imagine that when you're learning to ski you try shifting your weight backwards. But this time, instead of immediately learning what effect that has, you get lifted off the slope and get on with your life. Then, six months later, you're shown a video of yourself sliding off down the slope out of control and have to try to remember exactly what you did that caused that.
Maybe you've got a vague memory of what you did to cause it. Maybe you completely misremember and think you leaned forward. Or maybe you've lost interest in skiing after not being able to see any progress in your first six months. In any case, you probably won't be appearing at the Winter Olympics anytime soon.
That's the challenge you face when trying to get better at the unnatural task of forecasting. Given the very nature of prediction there's an inevitable lag between making a forecast and seeing whether you were right or not. And, as with many other kinds of high performance, as the distance between action and result grows, the difficulty of learning from it climbs exponentially. A delay of even a few days, for the shortest of forecasts, will still make it far harder to learn to forecast than to learn to ski. [6]
3 - Misaligned Incentives
But even that's still not enough to explain our terrible collective record at forecasting. As we see from superforecasting, with enough effort you can still set up a feedback loop, track your forecasts over time and use the feedback to get really, really good. The superforecasters who diligently practise like this are the Olympic skiers compared to the rest of us novices on the nursery slopes.
You don't even need to build a feedback loop to get started. Simply reading about the best techniques for forecasting and doing some simple exercises (less than an hour in total) can improve forecasting accuracy by about 10% [7]. So why haven't we all grabbed such low-hanging fruit?
I think the main reason is that most people aren't actually incentivised to forecast accurately.
‘Never, ever, think about something else when you should be thinking about the power of incentives.’ - Charlie Munger
Setting incentives is a superpower: 5 words that changed how I think about almost everything. - Sam Altman
The most obvious examples are where people are pretending to forecast but are actually cheerleading or campaigning. Sure, with hindsight Nancy Pelosi looks a bit silly for saying in a 2016 interview that she was certain Trump would never be president. It certainly wouldn't make it into her top political forecasts.
But being accurate probably wasn't her main goal. She might well have been worried about Trump's chances and thought that throwing doubt on him as a serious candidate could be a good move.
Similarly when Tom Brady sends his famous "We will win" text to his teammates the night before each game, he's probably not thinking about his Brier score!
There are more subtle examples, and they're more interesting. In reality, lots of people who we might reasonably think are trying to forecast well are actually incentivised to do other things.
Pundits, columnists and news anchors regularly talk about how likely it is that Russia will invade Ukraine and 'what Putin is thinking'. But whether they keep their job, get their annual bonus or get given the front page depends on how many viewers/readers they have rather than the accuracy of their forecasts. This in turn depends on how convincing they sound, how interesting they are to listen to and how plausible their story about Russia seems. [8]
When groups of friends talk about their predictions on the impact of climate change or how long the PM will remain in post, this looks like forecasting. But in reality forecasts on climate change are often a lot less about accurately predicting the future and a lot more about signalling either 'I'm a conscientious person who cares about the future of the planet' or 'I'm a smart realist who is unpersuaded by this panicked mob'.
Or there might just be a strong dose of denial. Predicting that Russia will never invade Ukraine and that NATO will deter it as it has for the last few decades is a reasonable thing to do if you don't care that much about accuracy but do care about persuading yourself there's no imminent threat of war in Europe.
Ideas like this have made it into common wisdom. 'Don't shoot the messenger' could apply equally to 'Don't shoot the prophet'. We don't like to be the predictor of bad news any more than the bearer of it.
And they aren't even new ideas: they stretch all the way back to mythology. The Trojan princess Cassandra was cursed to accurately predict the future but to be disbelieved by everyone around her. Among other highlights she forecast the fall of Troy, saw the Trojan horse for what it was and was fairly unpopular within Troy as a defeatist doom-monger because of it.
These examples are important because they involve forecasts where we really do want to be accurate, but where other incentives can creep in and throw us off. Unlike Tom Brady, who doesn't really care about accuracy, we should care deeply about exactly how likely war in Ukraine or a meaningful degree of climate change is.
Recognising that poorly aligned incentives are a big blocker to accurate forecasting allows us to be more sceptical when assessing other people's forecasts, and to spot the bad incentives that might be creeping into our own forecasting and stopping us from getting better.
Thinking clearly about this also allows us to be more intentional about where we want to forecast well and where we don't. There are times when we want to be like Tom Brady and that's fine, as long as we don't build bad habits that prevent us from being detached and accurate forecasters when we need to be.
There might even be examples where you want to have different attitudes towards forecasting accuracy in different parts of the same organisation. For example in a political campaign you probably want your candidates to go into every debate, speech and public appearance utterly confident that they're going to win. But at the same time you might want a back office with a precise understanding of how likely you are to win each constituency, so you can plan appropriately. Just as you don't want nervous candidates turning up to debates thinking they're 60% likely to lose, you don't want bullish analysts planning your campaign strategy with an inaccurate picture of your chances in each constituency. The first step in making effective decisions is to have an accurate grasp on reality. [9] [10] [11]
Beyond Forecasting
So where do we end up? Forecasting accurately over long time horizons is an evolutionarily modern task that, as a starting point, we shouldn't expect to be good at. It's hard to improve at it because there's such a long lag in the feedback loop between forecast and result. And most people aren't trying that hard to get good at it because they have misaligned incentives that mean they get rewarded for things other than forecasting accurately.
There's not much we can do about forecasting being an evolutionarily modern task. But it's no coincidence that if we take the other two problems and address them, we get superforecasting. The core idea in superforecasting is the obsession with the feedback loop despite the long lag. And by carefully choosing forecasting questions where accuracy matters and ranking forecasters based on their accuracy, it establishes a set of incentives that push forecasters towards accuracy over anything else.
What’s really interesting is that none of this is specific to forecasting.
We should expect to be bad at tasks that are evolutionarily recent, where there is a non-existent or long feedback loop and where we're not strongly incentivised to get better. We should expect to be good at the inverse.
That's very useful. If we can predict areas where we should expect to be good, we can save ourselves a lot of trial and error. It's sometimes more fun to work on things that you're already good at.
But I think the inverse is more interesting. Every area we spot where we should expect to be bad at something is, by implication, a potential source of progress. These areas can be hard to spot but the above gives us a test that might be useful in doing so. And once we spot them the success of superforecasting should make us optimistic - anyone who builds a tight feedback loop and incentive structure around it is likely to see impressive results.
Notes
1 - I’m taking this as a starting point for the rest of this essay: i.e. I’m going to hold as true the idea that people are really bad at forecasting.
2 - I wonder if the fact we don't realise how bad we are at forecasting is one of the biggest challenges superforecasting has had in getting widespread adoption. That would make sense. It's hard to get interested in a solution to a problem that you don't realise you have.
3 - For example, even when the human population was growing exponentially, this wouldn't have been visible to the average human during their lifetime. Or when the number of bacteria in a dead animal grew exponentially, this wasn't directly visible. Instead we learned intuitions about how long we could leave meat before eating it and how to spot when it had gone off.
4 - Try it out: if a company has £100 of monthly revenue and is growing at 10% each month, how much revenue will it have after 3 years? Or if 10 people are infected with a disease and each will infect 3 other people and this process takes a week, how many people will have been infected after a year? You’ve probably got no idea. I certainly don’t.
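If you'd like to check those guesses, the naive arithmetic is only a few lines (a quick sketch that deliberately ignores real-world limits like market size or population):

```python
# Revenue: £100/month growing 10% per month, compounded over 36 months.
revenue = 100 * 1.1 ** 36
print(f"£{revenue:,.0f} per month")  # ≈ £3,091 - roughly a 30x increase

# Infections: 10 initial cases, each generation infecting 3x as many,
# one generation per week, run for 52 weeks.
infected = total = 10
for week in range(52):
    infected *= 3
    total += infected
print(f"{total:.1e} cumulative infections")  # ≈ 9.7e+25
```

The second number is obviously absurd - a real epidemic runs out of people to infect long before then - but that's rather the point: the unchecked exponential leaves intuition behind within a few weeks.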
5 - As in superforecasting there's a broader point here about the incredible results that can emerge from simple feedback loops like this. See examples on science, high performance and forecasting in this essay.
6 - I stole the idea of skiing as an unnatural activity from this Paul Graham essay, which is fantastic.
7 - This is analogous to having a skiing coach. You don't need them but they can massively speed up your early learning by helping you avoid common mistakes without you having to do all the trial and error yourself.
8 - This problem is made worse because good forecasts often sound boring and bad forecasts often sound entertaining. If you watch a superforecaster at work they go through an odd-looking process, starting with a 'base rate' of how often a given event tends to happen (e.g. how often war breaks out in Europe) and then tweaking their estimate from that baseline. In the most boring case you just look at the output of a prediction market or aggregate forecast, which returns a single percentage figure. These methods often sound very weird and unconvincing, and are certainly less interesting to most people than a forecast that starts with an 'explanation' of how Putin's KGB history has shaped his worldview. In a fight for viewers between a show that has experts tell convincing stories about Putin and one that simply shows the numerical output of a prediction market, there's only going to be one winner.
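To make the 'boring' process concrete, here's a toy sketch of that base-rate-plus-adjustment reasoning. Every number in it is invented for illustration, and real superforecasters adjust far more carefully than a crude additive nudge:

```python
# Outside view first: how often do events like this happen at all?
base_rate = 0.05  # invented: "war breaks out in ~5% of comparable years"

# Inside view second: nudge the estimate for case-specific evidence,
# rather than replacing it with a compelling story.
adjustments = {
    "troop buildup on the border": +0.10,
    "active diplomatic talks": -0.03,
}
estimate = base_rate + sum(adjustments.values())
estimate = min(max(estimate, 0.0), 1.0)  # clamp to a valid probability
print(f"Forecast: {estimate:.0%}")  # 12%
```

A single anchored number like that will never beat a vivid story about Putin's KGB years for airtime, which is rather the problem.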
9 - Julia Galef has an interesting take on this in her book The Scout Mindset: contrasting situations where you want to see the world exactly as it is, where you're better off with what she calls 'Scout Mindset', with situations where you don't, where 'Soldier Mindset' is more useful. This is a very similar idea to Bryan Caplan's 'rational irrationality'.
10 - I wonder if this is also a benefit of the almost universal division of armies into a soldier class and an officer class. A very cold analysis would say you want people doing the fighting who are sure they're going to win, alongside people who know exactly what percentage chance you have of winning each engagement and who plan accordingly.
11 - This obsession with seeing the world as it is, rather than as you would like it to be, is the first step in Colonel Boyd's famous OODA loop. It's also the step, at least according to a lot of the best people I've worked with and spoken to within government, that is most broken across government and politics. And it's something Jeff Bezos was obsessed with by the age of 10, when interviewed by a reporter for a piece on gifted school children. It later became a core tenet of Amazon culture. When an idea pops up in completely different contexts like this it's usually a sign that it's worth taking seriously.
Thanks to Jamie Strachan, Rohit Krishnan, Idil Cakmur, Lawrence Newport and Lydia Field for comments.
If you're interested in reading more essays like this I post new writing to my mailing list here