Our principal, Cameron Norman, recently joined Keita Demming for a Disruptive Conversation as part of his ongoing podcast series. Listen in and learn how mindfulness, design, psychology, and paying attention to our change efforts can improve what we do and how effectively we do it.
Innovators are a hopeful lot. They are looking to find an advantage by creating something new that will improve their current condition or at the very least prevent their current situation from deteriorating. Sometimes you need to change to keep things the same.
Evaluation is one of the hidden (maybe secret) advantages for those innovators looking to sustain their work over the long haul. For those looking to innovate in human systems, evaluation might be the only thing that allows an idea to scale beyond its current implementation.
Why? How? And what does this look like? We begin a series looking at the Innovator's Secret Advantage and how evaluation can and does play a role in supporting good design, measuring impact, guiding strategy, and informing the development of new ideas. Over the next few weeks we'll introduce ten points to illustrate how evaluation can take your ideas forward and create impact for your clients, the public, your market, and the world.
Sound good? Stay tuned and the secrets will be revealed. Through case studies, practical examples, evidence, and theory we'll illustrate how to use evaluation to support and amplify your innovation with humans, products, services, and policies. It's time that evaluation's advantages accrued to everyone, not just those 'in the know'.
And, if you’re in the area, our Principal, Cameron Norman, will be speaking on this topic at the upcoming Service Convention Sweden on November 28th, 2018. Get a jump on what’s to come!
In this third post in a series on Developmental Evaluation (DE) traps, we look at the trap of the pivot. You've probably heard someone talk about innovation and 'pivoting', or changing a plan's direction.
The term pivot comes from the Lean Startup methodology and is often found in Agile and other product development systems that rely on short-burst, iterative cycles that include data collection processes used for rapid decision-making. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. That sounds pretty familiar to those looking to innovate, so where’s the trap?
The trap is that the decisions made aren’t grounded in data or the decision-making is flawed in how it uses data. In both cases, the decision to change direction is more arbitrary than evidence-based.
What do we mean by this?
The data problem
When innovating, it's important to have the kind of data collection system in place to gather the information required to make a useful decision, such as whether to continue with the process, make a change, or abandon the activity. James Dyson famously trialled his products hundreds of times with 'tweaks' both large and small to get to the right product design. A hallmark feature of this process is the emphasis on collecting the right data at the right time (which they call testing).
Dyson has since expanded its product offerings to include lighting, personal hair products, industrial drying tools, and an array of vacuum models. While the data needs for each product might differ, the implementation of a design-driven strategy that incorporates data throughout the decision-making process remains the same. It’s different data used for the right purpose.
Alas, DE has given cover to organizations for making arbitrary decisions based on the idea of pivoting when they really haven’t executed well or given things enough time to determine if a change of direction is warranted. Here are three things that one needs to heed when considering DE data.
- Process data. Without a clear indication that a program has been implemented appropriately and the constraints accounted for, how do you know that something ‘failed’ (or ‘succeeded’) based on what you did? Understanding what happened, under what conditions, and documenting the implementation behind the innovation is critical to knowing whether that innovation is really doing what we expect it to (and ensuring we capture the things we might not have expected it to do).
- Organizational mindfulness. The biggest challenge might be internal, at the organization's heart: its mindset. Organizational mindfulness is about paying attention to the activities, motivations, actions, and intentions of the entire enterprise and being willing (and able) to spot biases, identify blind spots, and recognize that, as much as we say we want to change and innovate, change is disruptive for many people and something they often unconsciously thwart.
- Evaluability assessment. A real challenge with innovation work is knowing whether you've applied the right 'dosage' and given it the right amount of time to work. This means doing your homework, paying attention, and having patience. Homework comes in the form of background research into other, similar innovations, connecting to the wisdom of the innovators themselves (i.e., drawing on experience), and tying it all together. Paying attention ensures you have a plausible means to connect intervention to effect (or product to outcome). This is like Kenny Rogers' 'The Gambler':
you got to know when to hold ’em
know when to fold ’em
know when to walk away
know when to run
An evaluability assessment can help spot the problems with your innovation data early by determining whether your program is ready to be evaluated in the first place and what methods might be best suited to determining its value.
The decision-making problem
Sometimes you have good data, but do you have the decision-making capabilities to act on it? With innovation, data rarely tells a straightforward story: it requires sensemaking. Sensemaking requires time and a socialization of the content to determine the value and meaning of data within the context it’s being used.
Decision-making can be impeded by a few things:
- Time. Straight up: if you don't give this time and focus, no amount of good data, visualizations, summaries, or quotes will help you. You need time to reflect substantively on what you're doing and to discuss it with those for whom it matters.
- Talent. Diverse perspectives around the table are an important part of sensemaking, as is some expertise in the process of decision-making and implementing decisions, particularly in design. An outside consultant can assist you in working with your data to see possibilities and navigate through blind spots in the process, as well as support your team in making important, sometimes difficult, decisions.
- Will. You can give time and have talent, but are you willing to make the most of both? For the reasons raised above about being mindful of your intentions and biases, having the right people in place will not help if you're unwilling to change what you do, follow when led, and lead when asked.
Developmental evaluation is powerful and useful, but it is not often easy (although it can be enormously worthwhile). Like most of the important things in life, you get out what you put in. Put a little energy into DE, be mindful of the traps, and you can make this approach to evaluation your key to innovation success.
Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.
Is the above tree alive and growing or dead and ready to be made into furniture? How does something like a tree connect to providing a swing, becoming a coffee table, or supporting the structure of a home? That is based partly on a theory of change about how a tree does what it does. That might sound strange, but for more sophisticated things like human service programs, linking what something does to what it achieves often requires a tool for explanation and a Theory of Change can serve this need well if used appropriately.
Theory of Change is described as “a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.” It has taken hold in the non-profit and philanthropic sectors in recent years as a means of providing guidance for program developers, funders, and staff in articulating the value of a program and its varied purposes by linking activities to specific behavioural theory.
Matthew Forti, writing in the Stanford Social Innovation Review (SSIR), suggests a Theory of Change (ToC) should contain the following:
To start, a good theory of change should answer six big questions:
1. Who are you seeking to influence or benefit (target population)?
2. What benefits are you seeking to achieve (results)?
3. When will you achieve them (time period)?
4. How will you and others make this happen (activities, strategies, resources, etc.)?
5. Where and under what circumstances will you do your work (context)?
6. Why do you believe your theory will bear out (assumptions)?
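Forti's six questions describe a structured record more than a narrative, so they can be sketched as a simple data structure. A minimal sketch follows; the field names and the example program are our own invention for illustration, not part of Forti's framework or any standard ToC template.

```python
from dataclasses import dataclass


@dataclass
class TheoryOfChange:
    """Illustrative record of the six ToC questions (field names are invented)."""

    target_population: str   # 1. Who are you seeking to influence or benefit?
    results: list            # 2. What benefits are you seeking to achieve?
    time_period: str         # 3. When will you achieve them?
    activities: list         # 4. How will you and others make this happen?
    context: str             # 5. Where and under what circumstances?
    assumptions: list        # 6. Why do you believe your theory will bear out?


# A hypothetical human-service program, purely for illustration:
toc = TheoryOfChange(
    target_population="Adults newly diagnosed with type 2 diabetes",
    results=["Improved self-management of blood glucose"],
    time_period="Within 12 months of enrolment",
    activities=["Peer coaching", "Weekly group workshops"],
    context="Urban community health centres",
    assumptions=["Peer support increases adherence to care plans"],
)
```

Writing a ToC down in a form like this makes the gaps obvious: an empty `assumptions` list is a prompt that the "why" of the theory has not yet been articulated.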
Unlike a program logic model, which articulates program components, expected outputs, and outcomes, a ToC explains how and why a particular set of actions is expected to produce a change, along with the assumptions that underpin it all. A ToC can be used with a program logic model or be developed independently.
What a ToC is meant to do is allow you to explain in simple language the connection between a program’s purpose, design, and execution and what it produces in terms of benefit and impact. While it may draw on theories that have been published or tested, it may also be unique to the program itself, but in all cases, it is meant to be understandable to a variety of stakeholders and audiences.
Creating a Theory of Change
A strong ToC requires some understanding of behaviour change theory: what do we know about how change happens? It can't simply end with "and then change happens"; it must have some kind of logic that can be simply expressed and, wherever possible, tied to what we know about change at the individual, group, organizational, or system level, or some combination of these. It's for this reason that bringing in expertise in behaviour change is an important part of the process.
That is one of the points that Kathleen Kelly Janus, also writing in the SSIR, recently made as part of her recommendations for those looking to better the impact of creating a ToC. She suggests organizations do the following:
- Engage outside stakeholders
- Include your board and staff
- Bring in an outside facilitator
- Clearly define the outcomes that will spell success
- Track your results rigorously
Inclusion, consultation, and collaboration are all part of the process of developing a ToC. The engagement with diverse stakeholders — particularly those who sit apart from the program — is critical because they will see your program differently. Outsiders will not get caught up in jargon, internal language, or be beholden to current program structures as explanations for change.
Defining the outcomes is important because change requires an explanation of the current state and of what the changed state(s) will look like. The more articulate you can be about these outcomes, the better the ToC will reflect what you're trying to do. By defining the outcomes well, a ToC can help a program develop the appropriate metrics and methods to determine how (or whether) it is manifesting those outcomes through its operations.
A ToC is best used as an active reference source for program managers, staff, and stakeholders. It can continually be referred to as a means of avoiding strategy ‘drift’ by connecting the programs that are in place to outcomes and reminding management that if the programs change, so too might the outcomes.
A ToC can be used as a developmental evaluation tool, allowing programs to see what they can do and how different adaptations might fit within the same framework for behaviour change to achieve the same outcomes. Alternatively, it can also be used to call into question whether the outcomes themselves are still appropriate.
The key to making a ToC accessible, easy to read, and easy to understand is to make it visual. Employing someone with graphic design skills to bring the concepts to life in a visual representation can clarify key ideas and get people beyond words. It's easy to get hung up on theoretical language and specific terms when using words alone; where possible, use visuals, narrative, and representations. Metaphors, colour, and texture can bring a ToC to life.
A ToC, when developed appropriately, can provide enormous dividends for strategy, performance, and evaluation and help all members of an organization (and its supporters and partners) understand what it is all about and how what it does is linked to what it aims to achieve. The ToC can serve your communications, strategy development, and evaluation plans if done well and appropriately facilitated, particularly for complex programs. It doesn't solve all your problems, but few things will help you understand what problems you're trying to solve, and how you might solve them, better than a good Theory of Change.
If you need help building a Theory of Change, contact us and we can help you develop one and show you how it can support the strategy, innovation, and evaluation needs of your programs and your organization as a whole.
The world of social innovation is filled with wonderful things and, like any creative space, has its share of myths. One of these concerns the role of experimentation. One view is that social innovation operates in such a space of complexity that the use of methods such as randomized controlled trials is not only inappropriate, but harmful.
The alternative perspective is one held by folks like Geoff Mulgan, the Chief Executive of Nesta UK. According to Mulgan (discussed in the video embedded above):
the only way you figure out what works is by doing experiments using control groups and seeing who benefits from an intervention.
This latter position is also a myth. Like any myth, it makes complexity much easier to handle and accept because it takes away the need to invest energy in determining ‘what is going on’.
Alas, that 'what's going on' is at the crux of understanding impact in the realm of social enterprises, because we humans have a tendency to resist standardization in how we behave. This is not to suggest that there is no place for experiments, because as Nesta UK and others have shown, they can be done. But to suggest that experimentation is the only way to find out whether something works is taking things too far.
To be fair, Mulgan was speaking on matters of public policy, which have enormous consequences, and his comments after the quote above, about understanding potential impact before scaling, are spot on. But there are different paths to understanding how impact is achieved, and what that impact is, that go beyond experiments.
A true controlled experiment, as the name suggests, requires control. One way to get around some of the constraints imposed by controls is to have large numbers of participants. The problem facing controlled experiments is that we often lack one or both: control and large numbers of participants. Further, the amount of control needed depends on the amount of social complexity inherent in the problem. For example, imagine comparing two options: renewing a required document like your driver's licence online or in person. This is a process that might involve a lot of effort (depending on where the nearest motor vehicle registration office is), but it's not complex. This is something that is amenable to experimentation.
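For a simple case like this, the logic of a two-arm trial really is short: randomize, observe, compare arms. The sketch below simulates that logic end to end; all numbers are simulated and purely illustrative, not data from any real program.

```python
import random
import statistics

# Toy two-arm randomized experiment on simulated data.
random.seed(42)
participants = list(range(200))
random.shuffle(participants)  # random assignment guards against selection bias
treatment, control = participants[:100], participants[100:]


def outcome(pid: int, treated: bool) -> float:
    """Simulated outcome: a small true effect for the treated arm, plus noise."""
    return random.gauss(0.5 if treated else 0.0, 1.0)


treated_scores = [outcome(p, True) for p in treatment]
control_scores = [outcome(p, False) for p in control]

# With randomization, the difference in arm means estimates the treatment effect.
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {effect:.2f}")
```

Note what the sketch quietly assumes: every participant gets the same intervention and the outcome is one comparable number. It is exactly those assumptions that break down in the complex case that follows.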
However, a policy or program designed to help individuals manage chronic disease conditions involves enormous complexity, given that each participant's condition will combine shared qualities with unique manifestations, all mediated by different social, biological, economic, geographic, and situational variables that, depending on the chronic condition, might play a significant role in how a program is received and its impact.
This is a far more challenging task, but one that is worth doing lest we, in the words of systems scholar Russell Ackoff, do “the wrong things, righter” by imposing an experimental design on something that warrants something different. Or, put another way, perhaps we need to redesign the experiment itself to suit the conditions.