What is the goal of this project?
The goal of this project is to build, run, and continually improve a public early warning system for mass atrocities. For governments, policymakers, advocates, and others attempting to prevent mass violence, there is currently no effective, publicly available mechanism for identifying where mass atrocities are likely to occur. By using the best available methods to routinely assess the risk of mass atrocities in countries worldwide, we seek to expand opportunities for preventive action before violence breaks out and to serve as a tool for generating pressure for early and effective response.
What exactly does the Early Warning Project do?
The Early Warning Project produces risk assessments of the potential for mass atrocities in countries around the world. Our early warning system is made up of two elements: state-of-the-art statistical analysis used to rank countries at risk for mass atrocities, and feedback and analysis from those with regional or subject matter expertise. The combination of these two factors produces early warnings of the potential for mass atrocities that can then be acted on by governments, policymakers, advocacy organizations, scholars and the general public.
The Early Warning Project additionally provides regular analysis of at-risk countries and evolving situations via blog posts written by the Project team as well as guest bloggers with expertise in areas related to genocide prevention or specific countries and regions.
What effect does the Early Warning Project hope to have?
In creating the first publicly available early warning system for mass atrocities, the Early Warning Project seeks to make prevention a key part of the policy agenda. Furthermore, where governments, advocacy organizations, and individuals working to prevent mass violence and genocide have limited resources at their disposal, the system is a tool that will help them determine where they should concentrate their efforts to achieve the greatest impact.
The Early Warning Project also aims to advance scholarship in the field of genocide prevention by fostering discussion, debate, and innovation around methods of early warning and prevention.
How do you assess the risks of something as complex as mass atrocities?
While forecasting events such as mass atrocities is difficult, we meet this challenge in two ways. First, through our statistical risk assessments, we look at previous instances of mass atrocities to identify patterns that will help us spot cases at high risk of mass killing now and in the future. Second, where our statistical risk assessments may be imprecise, we utilize expert opinions to further address the complexities of assessing the risk of mass atrocities.
Specifically, how do you assess risks of mass atrocities?
Our early warning system uses two complementary methods: (1) statistical modeling and (2) a “wisdom of crowds” process called an opinion pool. By comparing and combining the statistical risk assessments with forecasts from our opinion pool, we are able to generate warnings that take into account structural risk factors identified through statistical analysis as well as the beliefs of regional and subject-matter experts.
Who can use the early warning system?
The system can be used by anyone with an interest in preventing mass atrocities. Advocates, governments, policymakers, and others can use our risk assessments and analysis to prioritize prevention efforts; journalists can use our analysis to evaluate emerging situations; scholars, students, and experts can engage with and further improve our risk assessments; those on the ground in at-risk countries can use our assessments in prevention efforts; and, finally, the public can use the outcomes of the project to help drive a policy agenda focused on early and effective prevention.
What makes your system different from previous efforts?
Our early warning system is different from previous efforts because it combines two ways of assessing which countries are at risk of mass atrocities: statistical risk assessments and expert opinions. Our system treats these methods as complementary, and by using them in combination we are able to produce reliable and specific assessments of the risk of mass atrocities. We don’t aim to replace existing risk-assessment efforts, though. Instead, we’re looking to complement and inform them, and to be informed by them.
So which countries are at the highest risk for a potential mass atrocity?
Our statistical risk assessment ranks countries by risk. You can also search our blog for analysis of specific countries.
Where do your statistical risk assessments come from?
Our statistical risk assessments are actually an average of forecasts from three models representing different ideas about the origins of mass atrocities. All of these models are developed from and applied to publicly available data from widely used sources. In essence, we look to the past to identify patterns that will help us spot cases at high risk of mass killing now and in the future.
Drawing on work by Barbara Harff and the Political Instability Task Force, the first model emphasizes characteristics of countries’ national politics that hint at a predilection to commit genocide or “politicide,” especially in the context of political instability. Key risk factors in Harff’s model include authoritarian rule, the political salience of elite ethnicity, evidence of an exclusionary elite ideology, and international isolation as measured by trade openness. We refer to this model as the "bad regime" model.
The second model takes a more instrumental view of mass killing. It uses statistical forecasts of future coup attempts and new civil wars as proxy measures of factors that could either spur incumbent rulers to lash out against threats to their power or usher in an insecure new regime that might do the same. We refer to this model as the "elite threat" model.
The third model is really not a model but a machine-learning process called Random Forests applied to the risk factors identified by the other two. The resulting algorithm is an amalgamation of theory and induction that takes experts’ beliefs about the origins of mass killing as its jumping-off point but also leaves more room for inductive discovery of contingent effects.
All of these models are developed from historical data that compare cases where state-led mass killings occurred to ones where they didn’t.
To get our single-best risk assessment, we simply average the forecasts from these three models. We prefer the average to a single model’s output because we know from work in many fields—including meteorology and elections forecasting—that this “ensemble” approach generally produces more accurate assessments than we could expect to get from any one model alone. By averaging the forecasts, we learn from all three perspectives while hedging against the biases of any one of them.
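The averaging step described above can be sketched in a few lines. This is an illustrative example only, not the project's actual code: the model names match the three models described in this FAQ, but the countries and probabilities are hypothetical.

```python
# Hypothetical per-country annual onset probabilities from three models.
bad_regime = {"Country A": 0.12, "Country B": 0.03}
elite_threat = {"Country A": 0.18, "Country B": 0.05}
random_forest = {"Country A": 0.15, "Country B": 0.04}

def ensemble_average(models):
    """Combine several models' forecasts with a simple unweighted average."""
    countries = models[0].keys()
    return {c: sum(m[c] for m in models) / len(models) for c in countries}

combined = ensemble_average([bad_regime, elite_threat, random_forest])
for country, risk in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {risk:.3f}")
```

Because each model errs in different ways, the unweighted average hedges against any single model's biases, which is the rationale for the ensemble approach described above.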
Another advantage of this ensemble approach is that it allows us to add new models as the field grows and advances; we’re happy to hear from others who are developing their own models.
To read our in-depth academic paper on this, see here.
What about things like climate change or unemployment or hate speech? Why aren’t they included in your statistical analysis?
Our statistical modeling can only consider factors for which we have reliable measures covering nearly all countries of the world for a long period of time. That’s because these models have to be trained on historical data, and onsets of state-led mass killing are so rare that you need data from many years and many countries to see enough examples to do that training well.
Unfortunately, there are many things that might help us spot changes in risks of mass killing—including changes in political rhetoric and unemployment—for which we simply don’t have reliable historical data. As that changes, though, we expect to expand and update our statistical modeling, too. We’re happy to hear from you if you are aware of specific data sets that might be useful for this work now or in the future.
Does your system predict genocides?
No. The Early Warning Project provides an assessment of the risks for potential mass atrocities. Our early warning system does not focus specifically on genocide, nor do we claim to be able to anticipate exactly when and where crises will occur. Instead, our system is designed to assess countries’ risks for onsets of mass killing, some of which could evolve into genocides.
What is the difference between “genocide” and “mass killing”?
Mass killing is a broader concept than genocide; genocide has a very specific meaning under international law. Most genocides are also mass killings, but not all mass killings meet the international legal definition of genocide.
Under the 1948 UN Convention on the Prevention and Punishment of the Crime of Genocide, genocide means “any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such:
(a) Killing members of the group;
(b) Causing serious bodily or mental harm to members of the group;
(c) Deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
(d) Imposing measures intended to prevent births within the group;
(e) Forcibly transferring children of the group to another group.”
For the purposes of our project, we consider a mass killing to have occurred when the deliberate actions of state agents, or other groups acting at their behest, result in the deaths of at least 1,000 noncombatant civilians in a relatively short period of time, usually a year or less. A full explanation of the terms used in this definition is below.
A “noncombatant civilian” is any person who is not a current member of a formal or irregular military organization and who does not apparently pose an immediate threat to the life, physical safety, or property of other people.
The reference to “deliberate actions” distinguishes mass killing from deaths caused by natural disasters, infectious diseases, the accidental killing of civilians during war, or the unanticipated consequences of other government policies. We consider fatalities intentional if they result from actions designed to compel or coerce civilian populations to change their behavior against their will, as long as the perpetrators could have reasonably expected that these actions would result in widespread death among the affected populations. Note that this definition also covers deaths caused by other state actions if, in our judgment, the perpetrators enacted policies or actions designed to coerce civilian populations and could have expected that those policies or actions would lead to large numbers of civilian fatalities. Examples of such actions include, but are not limited to, mass starvation or disease-related deaths resulting from the intentional confiscation or destruction of medicines or other healthcare supplies, and deaths occurring during forced relocation or forced labor.
To distinguish “mass killing” from large numbers of unrelated civilian fatalities, the victims of mass killing must appear to be perceived by the perpetrators as belonging to a discrete group. That group may be defined communally (e.g., ethnic or religious), politically (e.g., partisan or ideological), socioeconomically (e.g., class or professional), or geographically (e.g., residents of specific villages or regions). In this way, apparently unrelated executions by police or other state agents would not qualify as mass killing, but capital punishment directed against members of a specific political or communal group would.
Our statistical risk assessments and some of the forecasts from our “wisdom of crowds” system focus specifically on state-led mass killing. Here, “state-led” refers to cases in which the relevant violence is carried out by uniformed troops, police, or other agents of state security, or by other groups acting at the behest of government officials.
Why do the statistical assessments only consider state-led mass killings? Don’t non-state groups kill lots of civilians, too?
Yes, unfortunately, rebel groups and other non-state actors also kill many civilians. Our statistical risk assessments only consider state-led mass killing, however, because they are produced by models that have to be “trained” on historical data, and at present we only have deep and reliable data on mass killings carried out by states. If and when we are able to produce or obtain comparable data on mass killings perpetrated by non-state groups, we will expand our statistical modeling to incorporate them as well. We also use our opinion pool to ask experts about cases and topics not covered by our statistical risk assessments.
How accurate are your statistical risk assessments?
We can’t say yet how accurate our statistical risk assessments are because we only started producing them in 2013, and the only way to see for sure how well a forecasting process works is to use it in real time. Additionally, the events that we are forecasting—new episodes of state-led mass killing—are very rare. Since the 1940s, an average of just one or two of these events has occurred worldwide each year. The rarity of these events is good news for humanity, but it also makes it harder to assess the accuracy of a system designed to forecast them. It will take years to accumulate a track record from which we can produce meaningful statistics about the system’s forecasting power.
That doesn’t mean we’re flying blind, though. To confirm that our models have forecasting power, we used a process called stratified k-fold cross-validation to check their accuracy when applied to incidences of mass killing that the models hadn’t already “seen.” The results of this exercise suggest that our statistical risk assessments should prove about as accurate as other models currently being used to forecast similarly rare and complex political events in countries worldwide.
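The idea behind stratified k-fold cross-validation can be sketched as follows. This is a toy illustration in pure Python, not the project's validation code: the point is that each fold preserves the share of rare onset cases, so every held-out fold contains positive cases the model hasn't "seen" during training.

```python
import random

def stratified_kfold(labels, k, seed=0):
    """Split example indices into k folds, preserving the ratio of positives."""
    rng = random.Random(seed)
    positives = [i for i, y in enumerate(labels) if y == 1]
    negatives = [i for i, y in enumerate(labels) if y == 0]
    rng.shuffle(positives)
    rng.shuffle(negatives)
    folds = [[] for _ in range(k)]
    # Deal positives and negatives round-robin so each fold gets its share.
    for j, i in enumerate(positives + negatives):
        folds[j % k].append(i)
    return folds

# 100 hypothetical country-years, 5 of which saw a mass-killing onset.
labels = [1] * 5 + [0] * 95
folds = stratified_kfold(labels, k=5)
print([sum(labels[i] for i in fold) for fold in folds])  # → [1, 1, 1, 1, 1]
```

With ordinary (unstratified) random folds, some folds could easily contain no onsets at all, making the accuracy estimate for rare events unreliable; stratification avoids that.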
How can your forecasts be trusted?
One of the core principles of our project is transparency. As a matter of course, all of the data we use in our statistical modeling comes from reputable public sources. We also post all of the code and data we use to generate these risk assessments on a public website so that other researchers can replicate our work and help us think about how to do it even better.
Is this a “big data” thing?
No. The statistical models we’re using now rely on relatively small data sets, most of which are produced by subject-matter experts and are only updated once a year. We are exploring ways to use much larger data sets derived from social media, news stories, and other online content to monitor atrocities and to improve our statistical models, but these are challenging research problems at the cutting edge of contemporary social science and the field is not there yet.
Where does your data come from?
You can find a list of the sources for our data, as well as the measures that appear in our models, here.
What is an opinion pool, and how does it work?
An opinion pool is a structured process for collecting and combining forecasts (opinions) from a collection (pool) of people. The principle behind it is the “wisdom of crowds.” We know from prior research that individuals usually aren’t very good at forecasting political events, but it turns out that even small crowds can usually do the job much better than the individuals that comprise those crowds.
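The combination step can be sketched with the simplest form of pooling, an unweighted linear opinion pool (the participant probabilities below are hypothetical; the project's actual aggregation may differ):

```python
def pool_opinions(probabilities):
    """Linear opinion pool: average individual probability forecasts."""
    return sum(probabilities) / len(probabilities)

# Five participants' hypothetical probabilities that a mass-killing
# episode begins in a given country within the next 12 months.
crowd = [0.10, 0.25, 0.05, 0.20, 0.15]
pooled = pool_opinions(crowd)
print(round(pooled, 2))  # → 0.15
```

Real opinion pools often refine this basic scheme, for example by weighting participants by their past accuracy, but the crowd-averaging principle is the same.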
Who participates in your opinion pool?
Our opinion pool is open to the public. Participation in “wisdom of crowds” forecasting systems can be restricted to a narrowly defined set of people, or it can be thrown open to the general public. From prior research, we know that formal markers of expertise (e.g., academic degrees and professional titles) are not very reliable predictors of forecasting skill, so we see no value in restricting participation in our opinion pool on the basis of hard rules tied to those kinds of markers. We also know that the larger and more diverse the set of participants we have, the more accurate the forecasts will be.
We previously conducted an expert-only opinion pool, but studies have shown that expert forecasts are no more accurate than public ones, so our current opinion pool includes experts while remaining open to anyone who wishes to participate.
How accurate are the forecasts this opinion pool produces?
We can’t really say yet because we only started running our opinion pool in late 2013, and most of the events we’re trying to forecast with it occur very rarely. That said, we do know from similar projects that these “wisdom of crowds” systems can produce sound forecasts, and we expect that pattern to hold here.
Are your forecasters paid for their contributions?
No, all of our forecasters generously volunteer the time they spend forecasting for this project.
How do you decide what to ask the forecasters?
Most of our opinion pool questions focus on cases identified by our statistical risk assessments as being at relatively high risk, but we also ask about other situations of current concern based on changing events.
We’re also open to suggestions of topics for the opinion pool to cover.
If your statistical models are good, why ask people what they think, too?
The forecasts generated by our opinion pool complement our statistical risk assessments in a couple of ways.
First and most important is their dynamism. Our statistical models can only be updated occasionally, because the risk factors they consider are primarily structural and don’t change very often. By contrast, forecasters participating in our opinion pool can update their beliefs about risks of mass atrocities at any time, as they read or see or hear new information that they believe to be pertinent. The opinion pool allows those people to record changes in their beliefs whenever they happen and even to record and share with other participants information about why their beliefs changed.
The second big advantage of the opinion pool forecasts is their flexibility. Where the statistical models we have now can only assess risks at the level of whole countries, the opinion pool can also cover specific at-risk groups and parts of countries. Where available data currently limit our statistical assessments to mass killings perpetrated by states, our opinion pool allows us to assess prospects for atrocities committed by rebel groups and other non-state actors. And, because questions can be added at any time, we can also use the opinion pool to quickly generate forecasts in response to fast-developing concerns, like coups or snap elections.