RCTs to Scale: Comprehensive Evidence from Two Nudge Units
Series: Special Guest Series
Speaker: Stefano DellaVigna (University of California, Berkeley, United States)
Field: Behavioral Economics, Empirical Microeconomics, Organizations and Markets
Date and time: October 29, 2020, 16:00 - 17:00
Attendance is free, but registration is required. To attend, send an email with 'DellaVigna' in the subject line to firstname.lastname@example.org before Wednesday, October 28, 2 PM (Amsterdam time).
Nudge interventions – behaviorally motivated design changes with no financial incentives – have quickly expanded from academic studies to larger implementation in so-called Nudge Units in governments. This provides an opportunity to compare interventions in research studies versus at scale. We assemble a unique data set of 126 RCTs covering over 23 million individuals, including all trials run by two of the largest Nudge Units in the United States. We compare these trials to a separate sample of nudge trials published in academic journals, drawn from two recent meta-analyses. In papers published in academic journals, the average impact of a nudge is very large – an 8.7 percentage point take-up effect, a 33.5% increase over the average control. In the Nudge Unit trials, the average impact is still sizable and highly statistically significant, but smaller: 1.4 percentage points, an 8.1% increase. We consider five potential channels for this gap: statistical power, selective publication, academic involvement, differences in trial features, and differences in nudge features. Publication bias in the academic journals, exacerbated by low statistical power, can account for the full difference in effect sizes. Academic involvement does not account for the difference. Different features of the nudges, such as in-person versus letter-based communication, likely reflecting institutional constraints, can partially explain the different effect sizes. We conjecture that larger sample sizes and institutional constraints, which play an important role in our setting, are relevant in other at-scale implementations. Finally, we compare these results to the predictions of academics and practitioners. Most forecasters overestimate the impact of the Nudge Unit interventions, though nudge practitioners are almost perfectly calibrated. Joint paper with Elizabeth Linos (University of California, Berkeley, United States).
View full paper here: https://eml.berkeley.edu/~sdellavi/wp/NudgeToScale2020-07-06.pdf
Stefano DellaVigna (2002 Ph.D., Harvard) is the Daniel Koshland, Sr. Distinguished Professor of Economics and Professor of Business Administration at the University of California, Berkeley. He specializes in Behavioral Economics (a.k.a. Psychology and Economics) and is a co-director of the Initiative for Behavioral Economics and Finance. He has published in international journals such as the American Economic Review, the Quarterly Journal of Economics, the Journal of Finance, and the Journal of Labor Economics.
He has been a Principal Investigator for an NSF Grant (2004-07), an Alfred P. Sloan Fellow for 2008-10, and a Distinguished Teaching Award winner (2008). He was also a co-editor of the Journal of the European Economic Association (JEEA) from 2009 to 2013. His recent work has focused on (i) the economics of the media, in particular the impact on voting (through persuasion) and the study of conflicts of interest; (ii) the design of model-based field experiments, including the role of social pressure in charitable giving and voting; (iii) the analysis of scientific journals, in particular editorial choices; and (iv) the study of reference-dependence for unemployed workers.