The Experimentation Myth

Video: “Innovation Growth Lab: Making innovation and growth policy work,” from Nesta UK on Vimeo.

The world of social innovation is filled with wonderful things and, like any creative space, has its share of myths. One of these concerns the role of experimentation. One view is that social innovation operates in such a space of complexity that the use of methods such as randomized controlled trials is not only inappropriate, but harmful.

The alternative perspective is one held by folks like Geoff Mulgan, the Chief Executive of Nesta UK. According to Mulgan (discussed in the video embedded above):

the only way you figure out what works is by doing experiments using control groups and seeing who benefits from an intervention.

This latter position is also a myth. Like any myth, it makes complexity much easier to handle and accept because it takes away the need to invest energy in determining ‘what is going on’.

Alas, that ‘what’s going on’ is at the crux of understanding impact in the realm of social enterprises, because we humans tend to resist standardization in how we behave. This is not to suggest that there is no place for experiments; as Nesta UK and others have shown, they can be done. But to claim that experimentation is the only way to find out whether something works is taking things too far.

To be fair, Mulgan was speaking on matters of public policy, which carry enormous consequences, and his comments after the quote above about understanding potential impact before scaling are spot-on. Still, there are paths to understanding what impact is and how it is achieved that go beyond experiments.

Disrupting experiments

A true controlled experiment, as the name suggests, requires control. One way to get around some of the constraints imposed by controls is to have large numbers of participants. The problem facing controlled experiments is that we often have neither: not enough control and not enough participants. Further, the amount of control needed depends on the amount of social complexity inherent in the problem. For example, imagine comparing two options for renewing a required document like your driver’s licence: online or in person. This process might involve a lot of effort (depending on where the nearest motor vehicle registration office is), but it is not complex. It is something that is amenable to experimentation.

However, a policy or program designed to help individuals manage chronic disease conditions involves enormous complexity: each participant’s condition will combine shared qualities with unique manifestations, all of which will be mediated by different social, biological, economic, geographic and situational variables that, depending on the chronic condition, might play a significant role in how a program is received and what impact it has.

This is a far more challenging task, but one worth doing lest we, in the words of systems scholar Russell Ackoff, do “the wrong things, righter” by imposing an experimental design on a problem that warrants a different approach. Or, put another way, perhaps we need to redesign the experiment itself to suit the conditions.
