Background
While the randomised controlled trial (RCT) is generally regarded as the design of choice for assessing the effects of health care, within the social sciences there is considerable debate about the relative suitability of RCTs and non-randomised studies (NRSs) for evaluating public policy interventions.
Objectives
To determine whether RCTs lead to the same effect size and variance as NRSs of similar policy interventions; and whether these findings can be explained by other factors associated with the interventions or their evaluation.
Methods
Analyses of methodological studies, empirical reviews, and individual health and social services studies investigated the relationship between randomisation and effect size of policy interventions by:
- Comparing controlled trials that are identical in all respects other than the use of randomisation, by 'breaking' the randomisation in a trial to create non-randomised trials (re-sampling studies; see the first sketch after this list).
- Comparing randomised and non-randomised arms of controlled trials mounted simultaneously in the field (replication studies).
- Comparing similar controlled trials drawn from systematic reviews that include both randomised and non-randomised studies (structured narrative reviews and sensitivity analyses within meta-analyses).
- Investigating associations between randomisation and effect size using a pool of more diverse RCTs and NRSs within broadly similar areas (meta-epidemiology; see the meta-regression sketch after this list).
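To make the re-sampling approach concrete, the following is a minimal Python sketch of 'breaking' randomisation: a hypothetical trial's randomised control arm is replaced with comparators drawn from an external pool, and the resulting standardised mean differences are contrasted with the randomised estimate. All arm sizes, means and pool values are invented for illustration; they are not data from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial data: a randomised treatment arm, a randomised
# control arm, and a pool of eligible but non-randomised comparators.
treated  = rng.normal(0.30, 1.0, 200)   # randomised treatment arm
controls = rng.normal(0.00, 1.0, 200)   # randomised control arm
ext_pool = rng.normal(0.10, 1.0, 1000)  # external comparator pool

def smd(a, b):
    """Standardised mean difference (Cohen's d) with pooled SD."""
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) /
                        (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Effect size with the randomisation intact.
d_rct = smd(treated, controls)

# 'Break' the randomisation: swap the randomised controls for draws
# from the external pool, repeating to see how the non-randomised
# effect size behaves across re-samples.
d_nrs = [smd(treated, rng.choice(ext_pool, 200, replace=False))
         for _ in range(500)]

print(f"RCT effect size: {d_rct:.3f}")
print(f"Non-randomised: mean {np.mean(d_nrs):.3f}, SD {np.std(d_nrs):.3f}")
```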
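The meta-epidemiological comparison can be sketched in the same spirit as a meta-regression of study-level effect sizes on a randomisation indicator. The version below is a simplified fixed-effect analysis using the statsmodels library and invented study-level values; a fuller analysis would also model between-study heterogeneity (for example, with a random-effects term).

```python
import numpy as np
import statsmodels.api as sm

# Invented study-level results: effect size, its variance, and whether
# the study was randomised (1 = RCT, 0 = NRS).
effect     = np.array([0.21, 0.35, 0.10, 0.45, 0.05, 0.30, 0.50, 0.15])
variance   = np.array([0.02, 0.04, 0.01, 0.05, 0.02, 0.03, 0.06, 0.01])
randomised = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Inverse-variance weighted regression: the coefficient on the
# randomisation indicator estimates the average difference in effect
# size between RCTs and NRSs across the pool of studies.
X = sm.add_constant(randomised)
fit = sm.WLS(effect, X, weights=1.0 / variance).fit()

print(fit.params)  # [NRS baseline, RCT-minus-NRS difference]
print(fit.bse)     # standard errors for both estimates
```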
Results
Prior methodological reviews and meta-analyses of existing reviews comparing effects from RCTs and NRSs suggested that effect sizes from the two designs may differ in some circumstances, and that these differences may be associated with factors confounded with design.
Re-sampling studies offered no evidence that the absence of randomisation directly and systematically influences the effect size of policy interventions. No consistent explanation was found for why randomisation was associated with changes in the effect sizes of policy interventions in field trials.
Recommendations for research
We recommend that:
- Policy evaluations adopt randomised designs wherever possible.
- Policy evaluations also adopt other standard procedures for minimising bias and for high-quality assessment of intervention effects, particularly blinded allocation of individuals or groups, and the avoidance of small sample sizes.
- Feasibility studies of randomising geographical areas, communities and regions be conducted for evaluating policy interventions in a range of sectors, implemented within institutions, within communities and across regions.
- Feasibility studies of blinded allocation be conducted for policy interventions in a range of sectors, implemented within institutions, within communities and across regions.
- Research about the reasons for choosing randomisation or not, particularly in the presence and absence of an explicit collective plan of action.
This report should be cited as:
Oliver S, Bagnall AM, Thomas J, Shepherd J, Sowden A, White I, Dinnes J, Rees R, Colquitt J, Oliver K, Garrett Z (2008) RCTs for policy interventions? A review of reviews and meta-regression. Birmingham: University of Birmingham.