The Geary Institute at UCD has initiated a Policy Peer Review Series. This involves members of the Geary Institute reviewing research/evaluation reports which have significant implications for public policy. The authors of the evaluations/reports are then invited to reply to the review. The first two such reviews (and responses) are on (a) an evaluation of the School Support System under DEIS and (b) an evaluation of FAS training programmes where the participants exited in 2012. The reviews and responses are available here:
3 replies on “Geary Policy Peer Review Series”
The main criticism in both reviews is the absence of a control group, which I would have thought was essential if one is trying to establish whether the respective programmes have been effective. I particularly like the response to the DEIS critique: that it would not have been fair to set up a control group, since it would have denied that group the benefits of the programme, which assumes, of course, that the programme did carry benefits.
A useful exercise and one that perhaps should be extended to a whole range of public policies.
p.s. there is a problem with the link
I agree, Dan. For programmes that exceed a certain spending threshold, there is a lot to be said for requiring a control group, where at all possible, so that a proper evaluation can be carried out.
As regards the specific issue with respect to the DEIS programme, one possibility would be to have two interventions, varying in intensity. That way both groups benefit (assuming the programme has benefits), but an evaluation of the programme can still be carried out. As far as I know, this approach has been used in evaluating some programmes similar to DEIS.
The use of control groups in medical interventions is the norm. A drug trial without one would not be taken seriously. In the area of child development there are numerous randomized controlled trials, and these have been hugely influential. In the case of education it is less common, but that is changing, especially in the US and the UK.
This is a side issue in this case: the DEIS programme has been up and running for years, so there is no question of randomizing treatment and thereby creating a control group (nor did I suggest that).
There is a whole slew of techniques for evaluating non-experimental interventions: different types of regression, control functions, matching methods, and propensity-score-based methods. I explained one, regression discontinuity design (first used in education research, as it happens), which essentially identifies a valid comparison group amongst those who were not treated. No ethical issues arise here.
Some of these methods could probably have been used with the GUI data, which would have been cheaper and would have provided a better comparison.
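To make the regression discontinuity idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the disadvantage score, the cutoff, and the effect size are simulated for illustration and are not DEIS figures. The logic is the standard one, though: units just below the cutoff serve as the comparison group for units just above it, and the treatment effect is read off as the jump in the outcome at the cutoff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sharp RDD: schools receive the programme when a simulated
# "disadvantage score" crosses a cutoff. All numbers are made up.
n = 2000
score = rng.uniform(0, 100, n)            # running variable
cutoff = 50.0
treated = (score >= cutoff).astype(float)
true_effect = 5.0                          # simulated programme effect
outcome = 20 + 0.3 * score + true_effect * treated + rng.normal(0, 2, n)

def rdd_estimate(score, outcome, cutoff, bandwidth):
    """Local linear RDD: fit a line on each side of the cutoff, within
    the bandwidth, and return the jump in predicted outcome at the cutoff."""
    left = (score >= cutoff - bandwidth) & (score < cutoff)
    right = (score >= cutoff) & (score <= cutoff + bandwidth)
    fit_left = np.polyfit(score[left], outcome[left], 1)
    fit_right = np.polyfit(score[right], outcome[right], 1)
    return np.polyval(fit_right, cutoff) - np.polyval(fit_left, cutoff)

effect = rdd_estimate(score, outcome, cutoff, bandwidth=10.0)
print(f"estimated jump at cutoff: {effect:.2f}")  # close to the simulated effect of 5
```

The comparison group here is precisely the untreated schools just below the cutoff, which is why no one has to be denied the programme for the sake of the evaluation; the main judgment call in practice is the choice of bandwidth.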