The views expressed in this post are those of the author and not necessarily those of the Open Nuclear Network or any other agency, institution or partner.
In his book Sapiens, Yuval Noah Harari imagines a situation where the best mathematicians and modelers in Egypt approach Mubarak in 2010, claiming they have developed the most accurate algorithm for predicting revolutions. Their forecast? A revolution will happen next year. Mubarak, in response, orders stringent security measures to prevent any uprising. When 2011 arrives and no revolution occurs, Mubarak scolds the forecasters for their faulty prediction. But they argue that because they predicted it, steps were taken to prevent it. In their minds, they should be rewarded, not reprimanded.
Harari offers this hypothetical to illustrate two types of predictions. The first type is immune to human intervention – the weather, for example. No matter how accurate our models become, the weather itself will not change because of our predictions. Climate would be a different matter. The second type, however, is reflexive; it changes because we predict it. Stock markets, political movements and, importantly for our work, nuclear risk, fall into this category. Predictions in these domains are not just observations; they shape outcomes.
At Open Nuclear Network – a programme of PAX sapiens – this is the paradox we navigate. Our foresight and prediction work is not about identifying a precise date when nuclear war will occur. In fact, our goal is that we never be exactly right about such a forecast. Instead, we illuminate potentialities, trace milestones and identify the warning signs that indicate rising risk. If we succeed, our insights will prompt action and by doing so, we will have helped avert a crisis rather than witnessed it unfold.
To explore this, we have worked with three organisations that focus on prediction, each bringing a different orientation to the challenge of applying structured forecasting to the nuclear space.
The first is Swift Centre. They are a nimble, tech-savvy team that developed an application connecting superforecasters with subject matter experts to tackle complex issues. The platform is sleek and user-friendly and during exercises, participants can even have cool-looking avatars, so that individual forecasts are kept anonymous. But the real power of Swift is in their deployment model: a solid cohort of vetted superforecasters who can be quickly assembled for forecasting exercises. We have conducted several engagements with them and the experience has been highly productive. As my colleague – Sarah Laderman – put it in a recent article, “While forecasting is not a methodological panacea, it is an extremely effective tool to force some of these discussions that experts do not typically have regarding the assumptions and logic behind their analyses.” The Swift model – putting superforecasters and subject matter experts in the same room – is especially useful for practitioners like me, who tend to operate within the strict mental lanes of our disciplines. It forces us to stretch, to entertain adjacent possibilities we might otherwise dismiss.
Image generated by AI, courtesy of OpenAI's DALL-E
The second organisation, the Forecasting Research Institute (FRI), took a more systematic, research-driven approach. Our collaboration resulted in the largest nuclear prediction study ever conducted, engaging both traditional nuclear experts and superforecasters to quantify the probability of a large-scale nuclear catastrophe. The study also utilised “conditional forecasting,” essentially an “if-then” approach. It was not about a singular forecast but rather about understanding the conditions that might shift probabilities. The study highlighted key divergence points: experts tended to emphasise geopolitical tensions and proliferation risks, while superforecasters placed greater weight on deterrence stability and historical precedent. The most actionable takeaway? Forecasting is not just about numbers – it’s about structured thinking. FRI’s process rigor clarified the different ways risk is perceived and where interventions might be most effective. The most popular policies were to establish a crisis communications network and to conduct failsafe reviews.
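The if-then logic behind conditional forecasting can be sketched with a toy calculation. The probabilities below are invented purely for illustration – they are not figures from the FRI study – but they show how forecasts conditioned on an intervention combine into a single unconditional estimate via the law of total probability.

```python
# Toy sketch of conditional ("if-then") forecasting.
# All numbers are hypothetical illustrations, not study results.

def expected_risk(p_condition: float,
                  risk_if_true: float,
                  risk_if_false: float) -> float:
    """Combine two conditional forecasts into an unconditional
    estimate using the law of total probability:
    P(risk) = P(cond) * P(risk|cond) + P(not cond) * P(risk|not cond)."""
    return p_condition * risk_if_true + (1 - p_condition) * risk_if_false

# "If a crisis communications network is established, risk is lower."
p_network = 0.6      # hypothetical chance the intervention happens
risk_with = 0.02     # hypothetical risk if it does
risk_without = 0.05  # hypothetical risk if it does not

print(f"{expected_risk(p_network, risk_with, risk_without):.3f}")
```

The value of the exercise is less the final number than the decomposition itself: forecasters must state how much an intervention would move the risk, which surfaces the assumptions behind their headline estimates.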
Most recently, we partnered with Confido Institute, a group that understands that forecasting, when done right, is as much about human interaction as it is about probability estimates. Confido gets the gamification aspect of forecasting. Unlike our workshops with Swift, with Confido, we did not bring in superforecasters. Instead, the focus was on group dynamics – how people interact, how ideas are surfaced and most importantly, how cognitive pacing can shape the effectiveness of the exercise. For example, we engaged participants in baselining assumptions through real-time word cloud generation and used the Drivers 2057 card deck – developed for the IAEA 2022 Safeguards Symposium – to explore intervening variables. Forecasting is an exhausting mental process and Confido breaks it down into structured activities that ensure the collective intelligence of the group can be distilled. What I found particularly refreshing about working with them was their commitment to democratising interaction. They do not just focus on making sure everyone is heard – they also structure sessions in a way that prevents mental fatigue from setting in too early. As anyone who has been in this space long enough knows, the quality of a forecasting session is often determined not by the best individual insights but by how well the group can sustain meaningful engagement over time.
Each of these approaches – Swift’s rapid-response forecasting, FRI’s research-driven rigor and Confido’s attention to group interaction – offers something distinct. And each reinforces a crucial truth: in nuclear forecasting, the act of making a prediction is itself a form of intervention. Our work does not exist in a vacuum. We are not meteorologists, observing distant storms from a detached vantage point. We are inside the storm, shaping the conditions with every insight we generate. The challenge, then, is in the details: how do we create spaces that capture the collective wisdom needed to illuminate potentialities and identify the most impactful risk reduction measures? If we do it well, then the prediction paradox – the kind that frustrated Mubarak – becomes not a failure, but a success. As in Harari’s Sapiens scenario, success in nuclear forecasting means the crisis never comes. So, if we get it “wrong,” we did something right.
Karim Kamel leads foresight and prediction initiatives at Open Nuclear Network (ONN), a programme of PAX sapiens. Karim manages projects in collaboration with partners such as the Swift Centre and the Forecasting Research Institute, focusing on prediction and forecasting in the context of nuclear risk reduction. Working alongside expert forecasters and subject matter experts, Karim helps design forecasting studies on critical issues in this field. Additionally, Karim is a junior researcher at the Peace Research Center Prague, a center of excellence at Charles University, where he is pursuing his Ph.D. His research focuses on the epistemic community surrounding the use of Delphi methods in technology policy. Before joining ONN, Karim served as an external relations consultant at the Comprehensive Nuclear-Test-Ban Treaty Organization, a program associate at the Social Science Research Council, and a program analyst at Carnegie Corporation of New York.
Contact: kkamel@paxsapiens.org