Warren Hatch is CEO of Good Judgment Inc, a global network of superforecasters that helps companies address complex problems, developed following research by Philip Tetlock and Barbara Mellers. Hatch explains what makes a superforecaster different and why teamwork delivers more robust outcomes.

Superforecasting emphasises the importance of using a variety of sources, teamwork, humility and accountability. How do they come together?
One thing the researchers tested was: do you do better as a forecaster working on your own, in a prediction market, or in a team?
They found forecasters working on their own – some of them quite brilliant – posted performance significantly below that of the other two conditions. Being able to interact with others and benefit from other views made a big difference. That was conclusion one: teamwork matters.
What kind of teamwork? In a prediction market, you express your view by buying and selling positions on a question. You will have a view, but you trade based on where the crowd is at the moment: you may believe something quite different about the question itself, yet what you act on is a perceived mismatch between the price and your probability, or an inefficiency where the market is thinly traded.
The researchers found prediction markets tend to have an edge on questions that are close to resolving – a week or a month out – and will tend to converge with other kinds of forecasting sources.
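To make the mechanics concrete, here is a minimal Python sketch of acting on such a mismatch. The prices and beliefs are hypothetical, and this describes a generic binary contract (paying 1.0 if the event occurs) rather than any particular market.

# Sketch: trading on a mismatch between a market price and your own probability.
# A binary contract pays 1.0 if the event occurs, 0.0 otherwise.

def expected_value_per_contract(my_probability: float, market_price: float) -> float:
    """Expected profit from buying one contract at the current price."""
    return my_probability * 1.0 - market_price

market_price = 0.30    # the crowd's implied probability: 30%
my_probability = 0.45  # my own belief after doing the research

ev = expected_value_per_contract(my_probability, market_price)
if ev > 0:
    print(f"Buy: expected profit {ev:.2f} per contract")   # 0.15 here
else:
    print(f"No edge at this price (expected profit {ev:.2f})")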
Teams have the edge most of the time because they can bring to bear a diversity of views
That leaves the third condition, teams, and it turns out teams have the edge most of the time. That is because they can bring to bear a diversity of views in a setting where there is an incentive to share information. In prediction markets, there is arguably an incentive not only to withhold information but to distort it to help your position.
Not so with teams. We are working together, competing against other teams, so we want our team to succeed. And because there is an incentive to share information, that accelerates learning. If you find a piece of information and share it with the team, that means I don’t have to go and find it myself, and I can go look for something else.
Where it really kicks in is when we have cognitive diversity. If you come from a different background, your perception and cognitive approach might be different. That means you will find pieces of the mosaic we are trying to fill out that I might never have recognised. It is a very efficient process, and it outperforms the other ways of making predictions.
How do you ensure all voices are heard?
Groups tend to anchor, with everyone conforming their views to what the high-status individual might be thinking, so a good way to provide a level playing field is anonymity. We will go onto a forecasting platform and have no idea who the other team members are. All I know is the information they are bringing, so every voice is heard.
Not requiring consensus is important too, because it means we are all free to express our views. If some people have crazier-looking forecasts, that’s ok, because it is not going to affect the median. If it turns out they are right, we are all going to learn and pay more attention. If they are not, we will go the other way.
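As a toy illustration of why the median tolerates outlying forecasts (the probabilities below are made up), compare it with the mean:

# The median barely moves when one team member posts an extreme forecast;
# the mean, by contrast, is pulled toward the outlier.
from statistics import median

team = [0.20, 0.25, 0.30, 0.35]   # hypothetical probability forecasts
print(round(median(team), 3))      # 0.275

with_outlier = team + [0.95]       # one crazier-looking forecast
print(round(median(with_outlier), 3))                    # 0.3 – hardly changed
print(round(sum(with_outlier) / len(with_outlier), 3))   # 0.41 – the mean is dragged upward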
What are some of the difficulties of superforecasting?
We are seeking to find a balance. If we have a diverse range of perspectives, we can analyse ourselves into paralysis and not get anything done. That is one of the advantages of having a framework: tying a specific forecast question to something meaningful – what is it we need to know? – prevents us getting paralysed by our own thinking.
This idea of balance shows up in different ways too. One of the key things to do in forecasting is to take what Daniel Kahneman calls an outside view: that is, to rely on external research – for example, historical data or comparisons with other countries – and synthesise it into an initial forecast.
We start with the outside view to mitigate the risk that we become anchored and only move incrementally from our initial forecast; it is much better for that anchor to be an informed outside view. Then I want to bring in the inside view – the specifics of the case at hand – so there is balance.
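One simple way to operationalise that balance – a sketch with illustrative numbers, not Good Judgment's prescribed method – is to start at the base rate and move toward the inside view only as far as the case-specific evidence warrants:

# Sketch: blend an outside-view base rate with an inside-view estimate.
# The weight is a judgment call; it grows as case-specific evidence strengthens.

def blended_forecast(base_rate: float, inside_view: float, weight_inside: float) -> float:
    """Linear blend of outside and inside views."""
    return (1 - weight_inside) * base_rate + weight_inside * inside_view

base_rate = 0.15     # e.g. how often comparable events happened historically
inside_view = 0.40   # judgment from the specifics of this case

print(round(blended_forecast(base_rate, inside_view, weight_inside=0.3), 3))  # 0.225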
What makes the difference between superforecasters and ‘average’ forecasters?
Forecasting is a process, and some steps are demonstrably effective and should be included. The other thing is simply to do it: make forecasts, get feedback and improve.
Good forecasters will want to change their mind when new information comes in
Some characteristics tend to make people better forecasters. One of these is being good at pattern recognition – that mosaic we were talking about. Another big part is being actively open-minded. If you have a belief about the world, is it something to be tested, or something to be protected? Good forecasters will want to change their mind when new information comes in.
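Changing your mind in proportion to the evidence is often framed as a Bayesian update; here is a minimal sketch with hypothetical probabilities:

# Sketch: Bayes' rule for updating a forecast when new information arrives.
# All numbers are hypothetical.

def bayes_update(prior: float, p_news_if_yes: float, p_news_if_no: float) -> float:
    """Posterior probability of the event, given that the news was observed."""
    numerator = prior * p_news_if_yes
    return numerator / (numerator + (1 - prior) * p_news_if_no)

prior = 0.30  # belief before the news broke
posterior = bayes_update(prior, p_news_if_yes=0.80, p_news_if_no=0.20)
print(round(posterior, 2))  # 0.63 – the evidence justifies a sizeable move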
Good Judgment Inc has been experimenting with longer-term crowd-sourced forecasts. Can you tell us more?
We know from the data that forecasting of the sort we have been talking about can be very effective on horizons of up to one or two years. As you look further out it becomes hazier, and there will be a point at which you cannot improve the focus at all. But we have been finding approaches that perhaps can.
One is to break a complex issue down into smaller pieces, with a set of forecasts for each; putting them together into a cluster can tell you something useful (there is a sketch of this below). That is what we might do, for example, with the energy transition.
The other option is to create a rough indicator. We may still develop a cluster of questions, but we will identify one that seems to be indicative of how a trend might unfold.
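As a toy version of the first option (the sub-questions and probabilities are invented, not an actual Good Judgment cluster), a complex outcome can be approximated as a chain of conditional sub-forecasts:

# Sketch: decompose a complex question into a cluster of sub-questions,
# forecast each, then recombine. Names and numbers are hypothetical.
from math import prod

# P(a renewables target is met) as a chain of conditional probabilities:
cluster = {
    "policy support stays in place":             0.80,
    "storage costs keep falling, given policy":  0.70,
    "grid build-out keeps pace, given both":     0.60,
}

print(round(prod(cluster.values()), 2))  # 0.34 – the combined forecast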
We have also started to look at another approach with capital market assumptions. Typical methodologies are model-driven, or they make a series of projections on economic growth, population, productivity and inflation and wrap them up into capital market assumptions.
What they are missing is what we know to be very effective: using the wisdom of the crowd
In all cases, what they are missing is what we know to be very effective: using the wisdom of the crowd. This is what our approach does, asking a large number of people for their assumptions over three- and ten-year horizons and taking the median. It is also an opportunity to discuss best practice and the relative value of different information. All comments are shared and voted on, and everyone then has the opportunity to make a final update to their assumptions. It goes really fast, it is really effective, and it is something more firms are beginning to adopt.
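A minimal sketch of that loop – a generic two-round, Delphi-style process with made-up numbers, not the firm's actual platform:

# Sketch: collect assumptions, share discussion, allow a final update,
# then take the median as the crowd's answer. Numbers are hypothetical.
from statistics import median

# Round 1: initial 10-year equity return assumptions (% per year)
round_one = {"ana": 6.5, "ben": 7.2, "caro": 5.8, "dev": 8.0, "eli": 6.9}

# Round 2: after comments are shared and voted on, some participants update
round_two = dict(round_one, ben=6.8, dev=7.1)

print(median(round_one.values()))  # 6.9
print(median(round_two.values()))  # 6.8 – the crowd's final assumption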