Oil and conflict: Not so related after all?

A new paper from Cotet and Tsui:

This paper examines the effect of oil wealth on political violence. …Using a unique historical panel dataset of oil discoveries and extractions, we show that simply controlling for country fixed effects removes the statistical association between the value of oil reserves and civil war onset. This non-result is robust to using natural disasters in oil-producing nations as an instrument for oil price. Other macro-political violence measures, such as coup attempts and irregular leadership transitions, are also not significantly correlated with oil wealth.

To further address endogeneity concerns, we exploit changes in oil reserves over time due to randomness in the success or failure of oil explorations. We find little robust evidence that oil discoveries increase the likelihood of violent challenges to the state in the sample of country-years in which at least one exploratory well is drilled. Rather, oil discoveries increase military spending in the subsample of nondemocratic countries.

…We suggest a possible explanation of our findings based on the idea that oil-rich nondemocratic regimes effectively expend resources to deter potential challengers.

Older ungated version here.
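For intuition on the fixed-effects point in the abstract, here is a minimal sketch of how a pooled cross-country association can disappear once country fixed effects are included. Everything below is simulated and hypothetical (the variable names, effect sizes, and model are my own illustration, not the authors' data or code):

```python
# Toy simulation (not the authors' data or code): why a pooled cross-country
# correlation between oil wealth and conflict can vanish under country fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_years = 50, 30

# A time-invariant country trait drives BOTH oil wealth and conflict propensity.
trait = rng.normal(size=n_countries)

rows = []
for c in range(n_countries):
    for t in range(n_years):
        oil_value = 2.0 * trait[c] + rng.normal()                 # oil wealth loads on the trait
        p_onset = 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * trait[c])))  # onset depends on the trait, not oil
        rows.append({"country": c, "year": t, "oil_value": oil_value,
                     "onset": rng.binomial(1, p_onset)})
df = pd.DataFrame(rows)

# Pooled linear probability model: picks up a spurious positive association.
pooled = smf.ols("onset ~ oil_value", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})

# Country fixed effects absorb the trait: the association largely disappears.
fe = smf.ols("onset ~ oil_value + C(country)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})

print(f"pooled coefficient on oil_value: {pooled.params['oil_value']:.3f}")
print(f"within-country coefficient:      {fe.params['oil_value']:.3f}")
```

In the simulation the country trait drives both oil wealth and conflict, so the pooled coefficient is positive while the within-country coefficient is roughly zero, which is the flavor of result the abstract describes.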

Michael Ross has argued otherwise. Regardless of what you believe, however, there’s a striking shortage of null-results papers published in top journals, even when those null results overturn earlier published findings. I’m more and more sickened by the lack of interest in null results. All part of my growing disillusionment with economics and political science publishing (and most of our so-called findings). Sigh…

18 Responses

  1. I hear three related gripes w.r.t. null results in social science. I’d be curious which (if any) you consider to be the real problem…
    1) Null results aren’t “interesting” or “exciting” enough.
    2) The emphasis on space-limited “Top” journals, without a corresponding repository of rejected papers, forces us to select for surprising results and bury the nulls, preventing real literature review or meta-analysis.
    3) Null results get ignored because the field’s standard tools can’t tell a genuinely small effect apart from a lack of statistical power, making insignificant results particularly hard to interpret (a toy power calculation after this list illustrates the point).
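    Point (3) is, at bottom, about statistical power. A minimal sketch, assuming a hypothetical two-sample comparison and a made-up effect size (nothing here comes from the paper):

```python
# Toy power calculation for point (3): the effect size, sample size, and test
# are hypothetical illustrations, not numbers from the paper.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Chance of detecting a modest effect (Cohen's d = 0.2) with 40 observations per group.
power_small_n = analysis.power(effect_size=0.2, nobs1=40, alpha=0.05)

# Sample size per group needed to detect that same effect with 80% power.
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)

print(f"power with n=40 per group: {power_small_n:.2f}")   # roughly 0.14
print(f"n per group for 80% power: {n_needed:.0f}")        # roughly 393
```

    An insignificant estimate from a design with 14% power says very little; the same null from a well-powered design is genuinely informative, which is part of why published nulls are hard to interpret without power calculations.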

  2. Maybe you ought to focus more on influencing policymakers rather than academics? After all, how often has an academic paper changed the world?