# tidbits #10

- I added a new calendar to the blog (on the right sidebar). I call it the
**Environmental Economics calendar** and will use it to inform interested readers about environmental economics seminars around Paris, as well as environmental economics conferences and workshops. If you have an interesting workshop/conference/seminar that you would like to see advertised/announced, please let me know.

- I also take the opportunity to provide some preliminary information on a
**workshop that I co-organize on 6 July 2015 at IPAG, Paris: The changing role of economics and economists in nuclear policy and politics**. This workshop will be a side event to the huge Our Common Future under Climate Change conference that will be held in Paris, 7-10 July. We have very interesting speakers so far: Tom Burke (E3G, Imperial College London), Dominique Finon (CNRS), Jan-Horst Keppler (OECD/NEA), Patrick Momal (IRSN), Gordon MacKerron (University of Sussex), Steve Thomas (University of Greenwich), William Nuttall (Cambridge University), and the BBC journalist Rob Broomby, who is going to chair the panel discussion. More information to follow soon. **Registration:** Attendance is free but registration is required by 19 June 2015. Please send an email to ingmar.schumacher@ipag.fr with the subject line "Nuclear workshop registration" in order to confirm.

- **Wednesday** (13/05): SEFD Seminar at Ecole Polytechnique Paris (11:00-12:30): **Carolyn Fischer** (Resources for the Future, http://www.rff.org/rff/Fischer.cfm), "Strategic Subsidies for Green Goods".

- The psychology journal Basic and Applied Social Psychology has
**banned the use of p-values** for empirical articles, see HERE. Is this a useful change, and will other journals follow? The p-value is used in statistical tests to tell you something about how statistically significant your results are. For example, suppose we try to understand the impact of x on y using the regression y = alpha*x + e, where alpha is the coefficient to be estimated and e the error term. We would test the null hypothesis that alpha = 0 (H0). A p-value below 0.001 is then interpreted as meaning that the coefficient alpha is highly statistically significantly different from zero. In other words, if alpha were truly zero, we would expect to see an estimate at least this far from zero less than 1 time out of 1000.

Thus, if the p-value we obtain is, e.g., below 0.001, then we reject H0. The problem is that this only indicates that we would expect the coefficient not to be zero; it does not tell us which other value the coefficient takes. If the p-value is above 0.001, then we cannot reject H0 at that significance level. However, we can never accept H0.
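The test described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the banned journal's articles: the true alpha of 0.2, the sample size, and the seed are my own arbitrary choices.

```python
import numpy as np
from scipy import stats

# Simulate the simple model y = alpha*x + e from the text,
# with an illustrative true alpha of 0.2.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
e = rng.normal(size=n)
y = 0.2 * x + e

# linregress tests H0: slope = 0 against a two-sided alternative.
res = stats.linregress(x, y)
print(f"estimated alpha = {res.slope:.3f}, p-value = {res.pvalue:.2e}")
```

With a true alpha of 0.2 and 1000 observations the p-value comes out far below 0.001, so H0 is rejected; but, as argued above, the p-value alone still does not tell us what value alpha actually takes — for that we look at the estimate and its confidence interval.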

I guess it is for these reasons that the editors of Basic and Applied Social Psychology decided to ban the use of p-values. In my opinion, a regression result should always be interpreted as the best case that a researcher can make for a hypothesis, or for a model. I think it is quite clear that one can always make a worst case, too. Including or dropping some variables, applying a different time-series filter, choosing another group of countries, another time period, another treatment of spatial or cross-observation correlation, or another regression method is always likely to lead to a different statistical result. I hypothesize that there is no statistical result so robust that it holds for any (statistically reasonable) change in the modeling assumptions. For if there were, then we would not need statistical inference! But if we abandon statistical inference, what is the use of statistics, given that we could then no longer study hypotheses or at least know that there exists a best-case result for our model?
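To make the point about modeling assumptions concrete, here is a small simulated example of the "including or dropping some variables" case: a coefficient that looks highly significant becomes insignificant once a confounding control is added. Everything here (the data-generating process, variable names, seed) is my own construction for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)           # a confounder driving both x and y
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)       # x has NO direct effect on y

def t_stat_on_x(y, regressors):
    """Plain OLS; returns the t-statistic on the first regressor."""
    X = np.column_stack([np.ones(len(y))] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta[1] / se[1]

t_naive = t_stat_on_x(y, [x])    # z omitted: x looks highly significant
t_full = t_stat_on_x(y, [x, z])  # z included: the apparent effect vanishes
print(f"t without control: {t_naive:.1f}, t with control: {t_full:.1f}")
```

The first specification would let a researcher report a "highly significant" effect of x on y at conventional p-value thresholds, even though x has no direct effect at all; the second specification reverses that conclusion. Which one gets reported is exactly the best-case/worst-case choice discussed above.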

Thus, in my opinion, most journals are not going to follow this ban. Moreover, if one understands the limitations of p-values and how they should be used, then there is nothing really wrong with them, and they add a little bit of information to the results.

Furthermore, and maybe most importantly, I think it should continue to be best practice not to just throw an econometric regression at an audience, but to start by making a convincing case based on a model that captures the most important relationships between the variables in question, and then provide a best-case scenario for this model. This best-case scenario should be complemented with robustness exercises that show under which conditions the best case continues to hold, but also when it may no longer hold. This gives some understanding of the robustness, of where the model may go wrong, and of where the data do not fit.

Furthermore, if statistical analysis does not support a model, then this should not directly invalidate the model. As a friend once told me: "If the data does not fit the model, then too bad for the data!" Clearly, the questions then are where the problems with the model lie, or why the data do not support it. I guess the point I am trying to make is that we should simply be much more careful with our statistical analysis in general: not just run one regression or another, but think clearly about what we want to know, what model we have in mind, how robust our results are, when they stop being robust, what the limitations are, and where they come from.
