from Lars Syll
Debating econometrics and its shortcomings, yours truly often gets the response from econometricians that “ok, maybe econometrics isn’t perfect, but you have to admit that it is a great technique for empirical testing of economic hypotheses.”
But is econometrics — really — such a great testing instrument?
Econometrics is supposed to be able to test economic theories, but to serve as a testing device it requires many assumptions, many of which themselves cannot be tested or verified. To make matters worse, there are only rarely strong and reliable ways of telling us which set of assumptions is to be preferred. Trying to test and infer causality from (non-experimental) data, you have to rely on assumptions such as disturbance terms being ‘independent and identically distributed’; functions being additive, linear, and with constant coefficients; parameters being ‘invariant under intervention’; variables being ‘exogenous’, ‘identifiable’, ‘structural’, and so on. Unfortunately, we are seldom or never informed of where that kind of ‘knowledge’ comes from, beyond references to the economic theory one is supposed to be testing. Performing technical tests is of course needed, but perhaps even more important is to know — as David Colander recently put it — “how to deal with situations where the assumptions of the tests do not fit the data.”
That leaves us in the awkward position of having to admit that if the assumptions made do not hold, the inferences, conclusions, and testing outcomes econometricians come up with simply do not follow from the data and statistics they use.
The central question is “how do we learn from empirical data?” Testing statistical/econometric models is one way, but we have to remember that the value of testing hinges on our ability to validate the — often unarticulated technical — basic assumptions on which the testing models build. If the model is wrong, the test apparatus simply gives us fictional values. There is always a strong risk that one turns a blind eye to some of those unfulfilled technical assumptions that actually make the testing results — and the inferences we build on them — unwarranted.
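A small simulation can make the point concrete. In this sketch (a hypothetical illustration, not drawn from any particular econometric study), the true data-generating process is nonlinear, but the analyst fits the standard linear model with a constant coefficient. The estimated “slope” then is not a structural parameter at all: it shifts with the range of the regressor sampled, so any test built on the constant-coefficient assumption is testing a fiction.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Ordinary least squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def slope_on_interval(lo, hi, n=10_000):
    """Fit a linear model to data whose true relation is y = x^2 + noise."""
    x = rng.uniform(lo, hi, n)
    y = x**2 + rng.normal(0.0, 0.1, n)
    return ols_slope(x, y)

# The same "constant" coefficient, estimated on two different samples:
b1 = slope_on_interval(0.0, 1.0)  # close to 1 on this range
b2 = slope_on_interval(1.0, 2.0)  # close to 3 on this range
print(round(b1, 2), round(b2, 2))
```

Both regressions pass every mechanical step of estimation without complaint; nothing in the output warns that the ‘constant coefficient’ being tested does not exist. Only checking the model against the data-generating process — which in non-experimental settings we usually cannot observe — reveals that the inference is unwarranted.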
Haavelmo’s probabilistic revolution gave econometricians their basic framework for testing economic hypotheses. It still builds on the assumption that the hypotheses can be treated as hypotheses about (joint) probability distributions and that economic variables can be treated as if drawn from an urn as a random sample. But as far as I can see, economic variables are nothing of the kind.
I still have not found any hard evidence that econometric testing has ever been able to uniquely “exclude a theory”. As Renzo Orsi once put it: “If one judges the success of the discipline on the basis of its capability of eliminating invalid theories, econometrics has not been very successful.”
Most econometricians today … believe that the main objective of applied econometrics is the confrontation of economic theories with observable phenomena. This involves theory testing, for example testing monetarism or rational consumer behaviour. The econometrician’s task would be to find out whether a particular economic theory is true or not, using economic data and statistical tools. Nobody would say that this is easy. But is it possible? This question is discussed in Keuzenkamp and Magnus 1995. At the end of our paper we invited the readers to name a published paper that contains a test which, in their opinion, significantly changed the way economists think about some economic proposition. Such a paper, if it existed, would be an example of a successful theory test. The most convincing contribution, we promised, would be awarded with a one week visit to CentER for Economic Research, all expenses paid. What happened? One Dutch colleague called me up and asked whether he could participate without having to accept the prize. I replied that he could, but he did not participate. Nobody else responded. Such is the state of current econometrics.