Wednesday, December 29, 2010

The decline effect and the scientific method

Two weeks ago, the New Yorker published an interesting article about confirmation bias, publication bias and significance chasing. In case you have not read it, here is the link. Although most of the discussion is confined to the natural sciences, the methodological concerns apply to empirical social science as well.

7 comments:

  1. Thanks a lot for this link, Guo.

    I remember that you were very fond of empirical research and econometrics when we were attending "Development Economics" together at HU. How is it now? I find that the vast majority of empirical articles become less and less convincing the closer I look (with the limit quite often being zero, I think). Coupled with the reluctance of economists to discuss alternative explanations of any given correlation, and all the other problems of significance testing, publication bias etc., I am less than sure whether the ever-increasing emphasis on empirical methods in economics (and other social sciences) is such a good idea.

    ReplyDelete
  2. Dear Ole,

    The pitfalls of empirical research are well known. Take, for example, cross-country growth regressions: more than two decades and millions of regressions later, researchers have still failed to isolate a robust driver of growth. We have run kitchen-sink regressions (Sala-i-Martin 1997) and tried fancy Bayesian averaging (Doppelhofer and Sala-i-Martin 2004) - but in the end, measurement error in GDP seems to confound even these results (Jarocinski and Ciccone 2009). If the arguably most robust explanatory variable for future prosperity is current prosperity, growth empirics appears to be a failure.

    How do we go about it? Even if rigorous empirical results are inherently difficult to produce and "less convincing", as you assert, I see no way around them. Just because empirics is hard, would you dismiss it altogether? Theory without empirics is idle conjecture: the greatest scientific achievements would not have been made had there been no empirical study. Without tedious data collection and pencil-and-paper statistical analyses, we would probably still think that the Earth lies at the centre of the universe, with planets rotating in concentric circles around it. The Earth-centred model simply did not match the observed data, while a heliocentric model with elliptical orbits turned out to be a much better (and in that regard perhaps "truer") model. In short, empirics and the ability to falsify theory are the very foundation of scientific research. Without empirics, we are no better than medieval doctors believing in all sorts of obscure theories.

    Besides, I do not agree that the vast majority of empirical articles have become less and less convincing. In recent years, we have seen the rise of randomized controlled trials (the "gold standard" of empirical research) applied to development policy. Of course, RCTs are problematic too: they are expensive, fail most of the time (causing publication bias) and usually do not possess a high degree of external validity. But researchers are well aware of these problems, and I would argue that empirical research has become increasingly sophisticated and precise: if you go back only two decades, for example, you will find almost no discussion of endogeneity and reverse causality. Today, no one is interested in (biased) correlations - if you cannot establish a clean causal story, your result is just not that interesting. Econometricians have developed a wide range of tools (instrumental variables, regression discontinuity) to address causal questions - methods that were rarely used even a few decades ago.
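    To make the instrumental-variables idea concrete, here is a minimal two-stage least squares sketch on simulated data. Everything here is made up for illustration - the coefficients, variable names and data are hypothetical, not from any real study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)                        # instrument: shifts x, affects y only through x
    u = rng.normal(size=n)                        # unobserved confounder
    x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
    y = 1.0 * x + u + rng.normal(size=n)          # true causal effect of x on y is 1.0

    def slope(a, b):
        """OLS slope of b on a (with intercept)."""
        X = np.column_stack([np.ones(len(a)), a])
        return np.linalg.lstsq(X, b, rcond=None)[0][1]

    beta_ols = slope(x, y)     # biased upward, since x is correlated with u
    x_hat = slope(z, x) * z    # first stage: exogenous part of x predicted by z
    beta_iv = slope(x_hat, y)  # second stage: recovers roughly the true 1.0
    ```

    With real observational data the hard part is, of course, finding an instrument that plausibly affects the outcome only through the regressor - the mechanics above are the easy bit.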

    As science goes, you will always have results that are not convincing - the idea (and, I believe, the point of all the research) is to produce a pile of studies that ultimately sums up to a clearer understanding. If you go through the pile of existing empirical literature, you will find many contradictory results: for example, Doucouliagos and Ulubasoglu (2006) survey 81 studies on the democracy-growth relationship and find that 16% of the estimates are negative and significant, 20% negative and insignificant, 38% positive and insignificant and 26% positive and significant. While each study might not convey much information per se, combining the pile of studies can yield insightful results - today, we have accumulated enough data to run meta-regressions, combining all results to form a more precise estimate. In fact, meta-regressions are also used to explicitly uncover publication bias.
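    As a sketch of how such pooling works - with made-up estimates and standard errors, not the actual Doucouliagos and Ulubasoglu data - a simple fixed-effect meta-analysis takes an inverse-variance weighted average of the individual study estimates:

    ```python
    import numpy as np

    # Hypothetical per-study democracy-growth estimates and standard errors
    # (illustrative numbers only, not taken from any real survey).
    betas = np.array([0.12, -0.05, 0.30, 0.08, 0.18, -0.02])
    ses   = np.array([0.10,  0.15, 0.12, 0.05, 0.20,  0.09])

    # Fixed-effect pooling: weight each estimate by its precision (1/se^2),
    # so noisy studies count for less.
    w = 1.0 / ses**2
    beta_pooled = (w * betas).sum() / w.sum()
    se_pooled = np.sqrt(1.0 / w.sum())  # tighter than any single study's se

    # Egger-style intuition for publication bias: if reported effects grow
    # with their standard errors, small noisy studies may be reporting
    # systematically inflated results.
    bias_slope = np.polyfit(ses, betas, 1)[0]
    ```

    A meta-regression then goes one step further and regresses the estimates on study characteristics (sample, method, period) to explain why results differ.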

    Ultimately, there is no way to dismiss empirical study. In fact, I would welcome an even more evidence-based approach in economics: I would prefer stylized facts and robust correlations over a bunch of overly complex mathematical models that might be elegant and beautiful per se but cannot be tested empirically. It is no surprise that you find no robust answer to the question of whether trade accelerates economic growth - that is just too general a question - but if you ask the right questions, you will surely be able to contribute to a deeper understanding of how the world works.

    ReplyDelete
  3. PS: Empirics without theory is, of course, merely descriptive and as useless as a purely theoretical approach. That said, theory of course remains paramount in science.

    ReplyDelete
  4. Dear Guo,
    Many thanks for your elaborate answer. I totally agree concerning the indispensable connection between theoretical and empirical work. But I disagree with some of your points.

    You wrote that "the pitfalls of empirical research are well known". I am not so sure. Some of them may be well known, but often in the way that it was "well known" in 2007 that the subprime and derivatives business in the US was anything but healthy: most specialized economists thought so, others agreed without giving it much thought, and people outside the universities couldn't care less.

    I also take issue with the implied claim that there is a consensus about those pitfalls. Take, for example, McCloskey's criticism of the "cult of significance" (published in numerous papers and her 2008 book with Ziliak). I think she has a point. Do you agree? Does the statistics department at HU agree? Honestly: how many economists employed by the average university have even heard of that critique? So: are the pitfalls of empirical research well known?

    You also claim that "no one is interested in (biased) correlations". Sure. There are few recent empirical works that do not come with a causal story. But that is precisely the point: the problem is not that there is no plausible causal interpretation but that there are many, sometimes dozens.
    These can all be mentioned. Good scientists can explain why they lean towards one interpretation of the data and not another. Science is all about persuasion (McCloskey 1998). But only a fraction of articles thoroughly and honestly examine alternative explanations - many just take their preferred story and run with it.* Going from correlation to causality is not only the hardest part, it is also by far the weakest part of most articles (they usually get the calculations right ...).

    Thanks for mentioning the paper by Doucouliagos and Ulubasoglu, which brings me to another point. I see the study as showing mostly one thing: the question that is asked is just not a good one! (Tom Lehrer: "You ask a silly question / And you get a silly answer") All we have gotten out of a lot of effort (and money) is a huge Rorschach inkblot, in which everyone can see what (s)he wants. The same goes for many other questions - you yourself mentioned the "drivers of growth" discussion. So should we really spend still more time in pursuit of The One Method, which will finally give robust results, be accepted by all economists and settle those questions once and for all with a simple answer? Because that is all we can get if we ask questions like "What is the relationship of democracy [whatever that may be] and growth, ceteris paribus?": silly answers.

    Of course, scientific work without any empirics is futile. And more empirics is generally a good thing, just like more education or higher pensions. But we are economists, so we care about the marginal benefit - because resources are finite, sometimes decidedly so. So how large are the benefits of the ever-growing obsession of social scientists with empirical methods? I quite agree with Summers' (1991) article ("The Scientific Illusion in Empirical Macroeconomics") in finding them less than overwhelming.

    Best
    Ole
    *I do not read much in development economics and advanced macro. But that is the way I generally see it in political economy and political science, and also in macro the few times that I look.

    ReplyDelete
  5. This comment has been removed by the author.

    ReplyDelete
  6. This is an interesting post and I enjoyed reading both of your comments.

    Has anyone ever looked into the possible inverse of this effect - cases where, with replicated experiments, more empirical evidence accumulates for a given hypothesis (without a good explanation) where before there was very little?

    Perhaps it would do little for the problem at hand (how much faith can we put in an experiment, at what point do we know something to be a fact, etc.), but I think it would still make for a more complete analysis.

    This was a good article, and of course the general idea is an important one that should be part of the conversation in the academic community, but I don't like how loosely the writer suggests how common this 'phenomenon' is. It almost seems as if he is suggesting that all empirical research and scientific studies should be taken with a grain of salt.

    What I disliked the most was the last sentence, "When the experiments are done, we still have to choose what to believe."

    In defense of empirical research, and to relate this to economics, I wanted to add a quote from one of my past professors' notes:
    "[Stephen Hawking] claims that economics is based on “effective theory,” by which he means that we posit objectives and constraint sets that result in predictions on outcomes of interest, but where the foundation elements (objectives) are highly stylized representations of the “real” kind of interactions and behavior manifested in everyday life. He then recites the common claim that the use of effective theory is only moderately successful because people are not fully rational and/or base their choices on defective analyses of the consequences of their choices. Our position is that rationality is essentially an irrefutable assumption, given that we are able to define preferences in any way we like. Given our latitude in choosing representations of preferences, the assumption of utility maximization is an innocuous one and allows us to use quantitative methods to analyze behavior. Even the phenomenon of individuals having imperfect or incorrect knowledge of the payoffs associated with the choices available to them can be modelled within neoclassical economics, which we will call the “standard model” of economics."

    Of course, I'm biased to want to believe that this article is wrong, but I still hope that the decline effect happens to the decline effect.

    **My last comment looked messy, so I tried to fix the spacing a bit.

    ReplyDelete