Developmental Evaluation Trap #2: The Pivot Problem

In this third post in our series on Developmental Evaluation traps, we look at the trap of the pivot. You’ve probably heard someone talk about innovation and ‘pivoting’, or changing direction from the original plan.

The term pivot comes from the Lean Startup methodology and is often found in Agile and other product development approaches that rely on short, iterative cycles with built-in data collection to support rapid decision-making. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. That sounds pretty familiar to those looking to innovate, so where’s the trap?

The trap is that the decision to pivot isn’t grounded in data, or the decision-making process uses data poorly. In either case, the change of direction is more arbitrary than evidence-based.

What do we mean by this?

The data problem

When innovating, it’s important to have a data collection system in place that gathers the information needed to make useful decisions: whether to continue the process, make a change, or abandon the activity. James Dyson famously built thousands of prototypes, with ‘tweaks’ both large and small, before arriving at the right product design. A hallmark of this process is the emphasis on collecting the right data at the right time; in other words, testing.

Dyson has since expanded its product offerings to include lighting, hair care products, hand dryers, and an array of vacuum models. While the data needs for each product differ, the design-driven strategy that incorporates data throughout the decision-making process remains the same: different data, used for the same purpose.

Alas, DE has given organizations cover for making arbitrary decisions in the name of pivoting, when in fact they haven’t executed well or given things enough time to determine whether a change of direction is warranted. Here are three things to heed when considering DE data.

  1. Process data. Without a clear indication that a program has been implemented appropriately, and that its constraints have been accounted for, how do you know that something ‘failed’ (or ‘succeeded’) because of what you did? Understanding what happened and under what conditions, and documenting how the innovation was implemented, is critical to knowing whether the innovation is really doing what we expect it to do (and to capturing the things we didn’t expect it to do).
  2. Organizational mindfulness. The biggest challenge may lie at the heart of the organization: its mindset. Organizational mindfulness is about paying attention to the activities, motivations, actions, and intentions of the entire enterprise, and being willing (and able) to spot biases, identify blind spots, and recognize that, as much as we say we want to change and innovate, change is disruptive for many people and is often unconsciously thwarted.
  3. Evaluability assessment. A real challenge with innovation work is knowing whether you’ve applied the right ‘dosage’ and given it the right amount of time to work. This means doing your homework, paying attention, and having patience. Homework comes in the form of background research into other, similar innovations, connecting to the wisdom of the innovators themselves (i.e., drawing on experience), and pulling it all together. Paying attention ensures you have a plausible means of connecting intervention to effect (or product to outcome). Patience is like Kenny Rogers’ ‘The Gambler’:

You’ve got to know when to hold ’em

know when to fold ’em

know when to walk away

and know when to run

An evaluability assessment can help you spot problems with your innovation data early by determining whether your program is ready to be evaluated in the first place and which methods are best suited to determining its value.

The decision-making problem

Sometimes you have good data, but do you have the decision-making capacity to act on it? With innovation, data rarely tells a straightforward story: it requires sensemaking. Sensemaking takes time and requires socializing the findings to determine the value and meaning of the data within the context in which it’s being used.

Decision-making can be impeded by a few things:

  1. Time. Straight up: if you don’t give this work time and focus, no amount of good data, visualizations, summaries, or quotes will help you. You need time to reflect substantively on what you’re doing and to discuss it with the people who need to make sense of it.
  2. Talent. Diverse perspectives around the table are an important part of sensemaking, as is expertise in the process of making and implementing decisions, particularly in design. An outside consultant can help you work with your data to see possibilities and navigate blind spots, and can support your team in making important, sometimes difficult, decisions.
  3. Will. You can commit the time and have the talent, but are you willing to make the most of both? For the reasons raised above about being mindful of your intentions and biases, having the right people in place will not help if you’re unwilling to change what you do, follow when led, and lead when asked.

Developmental evaluation is powerful and useful, but it is not often easy (although it can be enormously worthwhile). Like most important things in life, you get out what you put in. Put some energy into DE, be mindful of the traps, and you can make this approach to evaluation your key to innovation success.

Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.