The New Development Economics

The current issue of the New Yorker has a profile of Esther Duflo. In the article, the views of Angus Deaton on the limitations of randomised controlled trials are dismissed with the quip that “someone put sand in Angus’s toothpaste”. Readers will find the offending substance here.

You will undoubtedly make your own assessment of the following direct quote from Duflo in the New Yorker piece: “I want a baby goat,” she mused. “I’ll take good care of it.”

14 thoughts on “The New Development Economics”

  1. The debate on what kind of effects are identified by randomised controlled trials and tightly specified instrumental variables has been one of the most interesting I have come across since starting economics. Heckman’s thoughts are below. He gave a four-hour lecture on this topic here last year which centred around Jacob Marschak’s idea of policy-relevant treatment effects. The basic nub of the debate is that randomised controlled trials and well-controlled IV studies do identify a causal effect, but it is not clear which causal effect. For example, an RCT may identify the effect of the treatment itself but not what would happen if it were administered in other conditions or to other groups. Deaton’s paper is a defense of the view that “traditional” econometric specifications using population data are not inferior to randomised trials and very rigidly specified IV models. This debate is enormously important: practically every empirical paper published in modern micro journals rests on very strong causal identification assumptions, yet far less is being published on the limitations of this type of empirical strategy.

    http://ideas.repec.org/p/iza/izadps/dp3980.html

    I don’t know what relevance the quote about the baby goat has. For those who don’t know of Duflo, she is this year’s John Bates Clark Medal winner and one of the most influential intellectuals in economics at present. I know this is a more wonkish discussion than usual for this blog, but the debate around Duflo’s work really gets to the heart of economics and policy design.

  2. @ Liam
    There is indeed a very valuable debate taking place along the lines that Liam has outlined. The problem is that Duflo refuses to participate in it. I disagree with the assertion that she is “one of the most influential intellectuals in economics at present”. It is difficult to determine whether she is an intellectual at all, so committed is she to engaging in action rather than reflection. She has one idea – a method – which is already well established in, for example, epidemiology. The debate in economics is well behind the controversies in that and other medical fields. See the work of Miguel Hernán (http://www.hsph.harvard.edu/faculty/miguel-hernan/) for the most nuanced approach.
    Duflo’s talent is for selecting questions to address. She is also preoccupied with public relations. Her Freudian slip about the goat is a hilarious consequence of this.

  3. @Michael

    Thanks for the reference to Miguel Hernan. In terms of where economics is on the issue of causality, Angrist argues that we have come close to “taking the con out of econometrics”.

    http://econ-www.mit.edu/files/5326

    The Marschak lecture that Heckman talked about is below. I have to say I have been thinking about this in one way or another since he gave the talk. The basic idea of a policy-relevant treatment effect seems to reconcile many of these arguments.

    http://cowles.econ.yale.edu/P/cm/m14/m14-01.pdf

    As well as the medics, there are some very good philosophers who look at the issue of causality. Nancy Cartwright (not the Simpsons actress) should be read by any PhD students reading this blog who feel vaguely unsatisfied intellectually with the idea of “finding instruments” to “demonstrate causality”. The book below is a good overview.

    Hunting Causes and Using Them: Approaches in Philosophy and Economics, Cambridge University Press (June 2007), ISBN 0-521-86081-4.

  4. @ Liam

    I am a bit confused. When do econometricians try to ‘demonstrate causality’?

    When studying statistics and econometrics, it was drilled into us that correlation does not imply causation and that we cannot demonstrate that something is correct; we can only fail to reject a hypothesis.

    Perhaps I’m missing something however.

  5. @Rory,

    It means identifying true causal effects rather than simple correlations. An OLS regression of wages on education is not causal because other factors (e.g. ability) co-determine both. Loosely speaking, over the past two decades or so, techniques have come to the fore in econometrics whose estimates admit a more credible causal interpretation than simple OLS coefficients. They still rest on assumptions holding, but on more plausible assumptions. For example, if you “instrument” education with quarter of birth in a wage regression, the estimate becomes causal on the basis of your assumption that quarter of birth affects education but is unrelated to inherent ability.

    But as noted above by Liam, the debate has moved on to question these strategies.
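    A minimal simulation can make the OLS-versus-IV contrast concrete. This is a sketch of my own; all numbers are invented for illustration, and `quarter` is just a stand-in binary instrument in the spirit of the quarter-of-birth example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

ability = rng.normal(size=n)           # unobserved confounder
quarter = rng.integers(0, 2, size=n)   # instrument: shifts schooling, unrelated to ability
educ = 12 + 2 * quarter + ability + rng.normal(size=n)
wage = 1.0 * educ + 2 * ability + rng.normal(size=n)   # true causal effect of educ is 1.0

# OLS of wage on educ is biased upward because ability drives both variables
beta_ols = np.cov(educ, wage)[0, 1] / np.var(educ)

# IV (Wald) estimate: uses only the instrument-driven variation in educ
beta_iv = np.cov(quarter, wage)[0, 1] / np.cov(quarter, educ)[0, 1]

print(beta_ols, beta_iv)  # OLS well above 1.0; IV close to 1.0
```

    The IV estimate recovers the true coefficient only because, by construction, the instrument is independent of ability; that is exactly the assumption that the debate above is about.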

    @Michael Moore
    “It is difficult to determine whether she is an intellectual at all, so committed is she to engaging in action rather than reflection.”
    I’m not a big follower of her work but, to be perfectly honest, I think that’s just plain silly. See, for example, Duflo and Mullainathan (QJE, 2004).

  6. @ Enda H

    So is it just about the definition of causality?

    For example lightning would be Granger-causal for thunder, but we know from science that neither causes the other.
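    A toy simulation (my own construction, with made-up numbers) of that worry: a hidden common driver makes one series Granger-“cause” another that it does not cause at all.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50_000

storm = rng.normal(size=T)                    # hidden common cause
lightning = storm + 0.1 * rng.normal(size=T)  # responds to the storm immediately
thunder = np.empty(T)
thunder[1:] = storm[:-1]                      # responds one period later
thunder[0] = 0.0
thunder += 0.1 * rng.normal(size=T)

# Regress thunder_t on lightning_{t-1}: past lightning strongly "predicts"
# thunder, so lightning Granger-causes thunder, even though the storm drives both.
x, y = lightning[:-1], thunder[1:]
slope = np.cov(x, y)[0, 1] / np.var(x)
print(slope)  # close to 1: a strong predictive (Granger) relationship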

  7. @ Rory, no, I think that would be quite a poor example. The topic is far broader than Granger causality. Typically the causality gang exploit some random shock (or set one up) that affects one variable but not the other.

    It is hard to give you a reasonable sense of it in only a paragraph or two. Fortunately the JEP has a special on this general topic this quarter – http://bit.ly/afqYqV.

  8. I would strongly echo Liam’s comments on the importance of Nancy Cartwright for those interested in the role of RCTs in economics. Her recent paper on RCTs in the journal Philosophical Studies probably will have passed economists by. The reference is as follows:

    Nancy Cartwright, ‘What are randomised controlled trials good for?’, Philosophical Studies, vol. 147, no.1 (2010), pp.59-70.

    Well worth a look in my humble opinion.

  9. @Eoin

    I believe that you may have missed my point. Duflo has refused to engage with serious intellectuals on the relative merits of RCTs.

    Her work on differences in differences is interesting but not relevant to the big picture.

  10. @Rory

    Your point is a fair one, and I am sure a number of the econometricians who read this blog would have different answers to this question. In general, most of the testing frameworks commonly in use are of the form you describe: you are testing a null hypothesis, so you are not fully verifying a causal relationship but rather rejecting the absence of one. There are, though, plenty of people working on subtle aspects of that distinction. One of the main working definitions of causality is explained clearly in the Angrist paper I linked to above. Loosely speaking (very loosely!), these models try to figure out whether there is a true causal relationship between y and x, as opposed to a correlational one, by exploiting factors that affect x directly but not y. Enda gives a classic example. Many (I hesitate to say most) empirical papers in micro nowadays contain some argument of this form.

    Colm Harmon’s work on education in the UK is one example from the people who post on this blog. Colm’s work exploited school reforms made in the UK in the 1950s to examine the return to schooling. Because the reform was completely outside the control of the individuals, you could examine the effect of the extra school years it generated on their wages. Paul Devereux has also conducted a large number of studies utilising various causal estimation strategies.
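    A stripped-down difference-in-differences sketch of this reform logic (my own invented setup and numbers, not taken from any of the papers above): compare affected and unaffected groups before and after a reform, and difference out the common trend.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# group = 1: individuals covered by the (hypothetical) reform; post = 1: observed after it
group = rng.integers(0, 2, size=n)
post = rng.integers(0, 2, size=n)

true_effect = 0.5  # invented log-wage gain from the reform-induced extra schooling
# Everyone shares a 0.2 time trend; only the covered group gains the effect post-reform
wage = 0.2 * post + true_effect * group * post + rng.normal(size=n)

did = ((wage[(group == 1) & (post == 1)].mean() - wage[(group == 1) & (post == 0)].mean())
       - (wage[(group == 0) & (post == 1)].mean() - wage[(group == 0) & (post == 0)].mean()))
print(did)  # close to 0.5 when the common-trend assumption holds
```

    The identifying assumption here is the common trend: absent the reform, both groups would have moved in parallel. That assumption, like the instrument-validity assumption, is exactly the kind of thing the Deaton and Heckman critiques probe.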

  11. Another paper worth reading is Imbens’ reply to the Deaton and Heckman–Urzua papers.

    http://www.economics.harvard.edu/faculty/imbens/files/bltn_09apr10.pdf

    Abstract
    Two recent papers, Deaton (2009) and Heckman and Urzua (2009), argue against what they see as an excessive and inappropriate use of experimental and quasi-experimental methods in empirical work in economics in the last decade. They specifically question the increased use of instrumental variables and natural experiments in labor economics, and of randomized experiments in development economics. In these comments I will make the case that this move towards shoring up the internal validity of estimates, and towards clarifying the description of the population these estimates are relevant for, has been important and beneficial in increasing the credibility of empirical work in economics. I also address some other concerns raised by the Deaton and Heckman-Urzua papers.

  12. @Graham Brownlow

    Thanks for that suggestion. Abstract below. A somewhat more eloquent description of causality than I attempted above!

    Abstract: Randomized controlled trials (RCTs) are widely taken as the gold standard for establishing causal conclusions. Ideally conducted they ensure that the treatment ‘causes’ the outcome – in the experiment. But where else? This is the venerable question of external validity. I point out that the question comes in two importantly different forms: Is the specific causal conclusion warranted by the experiment true in a target situation? What will be the result of implementing the treatment there? This paper explains how the probabilistic theory of causality implies that RCTs can establish causal conclusions and thereby provides an account of what exactly that causal conclusion is. Clarifying the exact form of the conclusion shows just what is necessary for it to hold in a new setting and also how much more is needed to see what the actual outcome would be there were the treatment implemented.

    Keywords: Randomized controlled trials (RCTs) · External validity · Probabilistic theory of causality · Causal inference · Capacities · Contributions
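    Her external-validity point is easy to illustrate with a toy simulation (my own invented numbers, nothing from the paper itself): an RCT recovers the average treatment effect in its own trial population, but that average need not carry over to a target population with a different mix of people.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def run_rct(share_a):
    """Randomised trial in a population where treatment helps type A (effect 1.0)
    and does nothing for type B (effect 0.0); share_a is the fraction of type A."""
    is_a = rng.random(n) < share_a
    treat = rng.integers(0, 2, size=n)
    effect = np.where(is_a, 1.0, 0.0)
    y = effect * treat + rng.normal(size=n)
    # Difference in means: unbiased for the average effect IN THIS population
    return y[treat == 1].mean() - y[treat == 0].mean()

ate_trial = run_rct(share_a=0.8)   # trial population: 80% type A
ate_target = run_rct(share_a=0.2)  # target population: 20% type A

print(ate_trial, ate_target)  # roughly 0.8 versus 0.2: same treatment, different answer
```

    Both numbers are correct causal conclusions for their own populations; the mistake is transporting one to the other, which is precisely the distinction Cartwright draws.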

  13. One can only imagine how Esther is currently coping with these stinging criticisms of her work on the IE blog. OK, so the Clark Medal is definitely a positive, but being slagged off on the world’s greatest Irish economics blog – I mean, how could any academic get over that?

    😆

  14. If she is listening in, Esther Duflo might take heart from the fact that Michael’s previous post here was an attempt to demonstrate that all Irish economics departments are a complete waste of space, so she is not being singled out by him.

    Sorry to do my earnest-young-man thing again, Karl, but there is a really serious point here. Is there anything we actually know as economists, or is it all just informed punditry and opinion? What is the nature of the claims we make as economists? There is an anti-intellectualism in a lot of economics in Ireland that was propped up by the boom, and it would be no harm now for people to do a bit of soul-searching to figure out what methodologies we should be relying on to make claims.

Comments are closed.