Cense

Innovation to Impact


Developmental Evaluation Trap #3: Fearing Success

2018-05-23 by cense

What if you tried to innovate and succeeded? This fourth in a series of posts looking at Developmental Evaluation traps explores that question and the implications that come from being good at innovating.

A truly successful innovation changes things — mindsets, workflows, systems, and outcomes. Some of these changes are foreseeable, some are not, and they are rarely uniformly positive. Take the strange situation that many non-profits face: most will put themselves out of business if they truly succeed in their mission.

Or consider the corporate manager looking to get her or his staff engaged. The benefits of staff engagement are many: engaged staff contribute more to the organization. But they also ask more of the organization and require more from it.

The trap with Developmental Evaluation (DE) is that it provides a window into the organization’s innovation strategy and disrupts the way in which it learns, adapts, and designs its products. It tests assumptions, meaning that some myths the organization holds might be challenged. DE implores organizations to use evidence to support decisions, not just precedent.

DE also provides a means to account for the work performed in the quest to innovate, not just what is produced. That sounds like an enormous perk for those used to being judged solely on what they produce, unless you really didn’t want to put the work in and are content to produce the most marginal of products.

What fear looks like

The most problematic form of fear is self-sabotage: the behaviours we engage in — individually or as a group or organization — that keep us from taking the actions we believe will lead to success. The reasons often have to do with social norms: if what you’re doing is normal, then it probably isn’t innovation. Innovating means deviating from what others are doing.

Risk is another fear-factor. Doing something different requires risk-taking, and that means opening yourself up to possible criticism, disapproval from others, or financial costs, among other things.

Further complicating things, risk-taking and social behaviour are often linked.

Fear also manifests in actions not taken. It may manifest in not pursuing opportunities, or in pursuing only those that are safe. It may be ignoring problems, or asking the kinds of questions that focus attention away from what is important and toward what is easy. Often we see organizations collect lots of data on something unimportant because it’s safer than collecting data on something that matters but is much riskier.

Fear can further manifest in things like over-researching a topic (never having ‘enough’ data), deferring decisions, blaming the timing (“It’s just not the right time to innovate”), or blaming complexity (“it’s too complex”).

Practical fear-fighting

How do you guard against your own fear of success? There are some things we can do to better use the lessons gained through approaches like DE to assist innovation efforts.

  1. Innovation therapy. Mental health professionals often go through and remain in some form of professional therapy or guided reflection to enable them to attune to their biases, work through their personal issues, and hone their skills with another professional. The same process of ongoing, guided, reflective practice and ‘treatment’ is beneficial to organizations as well because it creates that same social norm to talk about fears, challenges, and ambiguities. Create space for reflection, discussion, and problem-solving in your regular meetings. Make the time for innovation.
  2. Create outcomes from processes. DE is a great tool for showcasing the work that goes into an innovation, not just the innovation itself. Regularly collect the data that goes into the process — ideas generated (quality and quantity), rough sketches, prototypes, and hours worked, to name a few. These can help show what goes into a product, which is an end in itself, and they make something that seems abstract more tangible (see the sketch after this list).
  3. Make change visible. Make work visible through evaluation and visual thinking, including the ups, downs, and sideways moves, and showcase where you are along the journey (e.g., use a timeline).
  4. Create better systems, not just different behaviour. Complex systems have path-dependencies — those ruts that shape our actions, often unconsciously and out of habit. Consider the ways you organize yourself, your organization’s jobs and roles, its income streams, its system of rewards and recognition, the feedback and learning you engage with, and the composition of your team. This rethinking and reorganization is what changes your organization’s DNA; otherwise, that DNA will continue to express itself in the same way.
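To illustrate point 2, here is a minimal sketch, in Python, of what a simple innovation process log might look like. The field names and figures are hypothetical, not a prescribed schema; the point is simply that effort and iteration leave an evaluable trail.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProcessEntry:
    """One record of innovation work performed, not just what was produced."""
    when: date
    activity: str                # e.g., "ideation session", "prototype test"
    ideas_generated: int = 0     # quantity; quality could be rated separately
    artifacts: list = field(default_factory=list)  # sketches, prototypes, etc.
    hours_worked: float = 0.0

log = [
    ProcessEntry(date(2018, 5, 1), "ideation session", ideas_generated=14,
                 artifacts=["rough sketch A", "storyboard v1"], hours_worked=6.5),
    ProcessEntry(date(2018, 5, 8), "prototype test", ideas_generated=3,
                 artifacts=["prototype v0.2"], hours_worked=9.0),
]

# A simple roll-up makes otherwise invisible work tangible for reporting.
print(f"{len(log)} entries, "
      f"{sum(e.ideas_generated for e in log)} ideas, "
      f"{sum(e.hours_worked for e in log):.1f} hours logged")
```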

These are all part of the developmental evaluator’s toolkit, which is a core part of innovation itself. DE can be a means to help you confront your fears, not deny them. Try these out and see how you can avoid the trap of self-sabotage and become a better innovator and learner.

Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.

Filed Under: Psychology, Research + Evaluation Tagged With: developmental evaluation, evaluation, fear, innovation, psychology, risk-taking, self-sabotage

Developmental Evaluation Trap #2: The Pivot Problem

2018-05-17 by cense

In this third in a series on Developmental Evaluation traps, we look at the trap of the pivot. You’ve probably heard someone talk about innovation and ‘pivoting’ or changing the plan’s direction.

The term pivot comes from the Lean Startup methodology and is often found in Agile and other product development systems that rely on short-burst, iterative cycles that include data collection processes used for rapid decision-making. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. That sounds pretty familiar to those looking to innovate, so where’s the trap?
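As a loose illustration of that cycle (a sketch, not a prescribed method), here is what the build-measure-decide loop might look like in Python; `build`, `measure`, `adapt`, and the `target` threshold are hypothetical placeholders for your own process steps:

```python
def pivot_loop(build, measure, adapt, target, max_cycles=10):
    """Short-burst iterative cycle: build, collect data, decide, adapt."""
    product = build()
    for _ in range(max_cycles):
        result = measure(product)         # collect the data, see the results
        if result >= target:              # the results yield what you want:
            return product                # persevere
        product = adapt(product, result)  # otherwise, change direction (pivot)
    return product  # out of cycles: time for a harder strategic decision

# Toy stand-ins: nudge a number toward a target score.
final = pivot_loop(build=lambda: 0.2,
                   measure=lambda p: p,
                   adapt=lambda p, r: p + 0.1,
                   target=0.5)
```

The traps below arise when the data feeding `measure`, or the decision to call `adapt`, is weaker than it looks.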

The trap is that the decisions made aren’t grounded in data or the decision-making is flawed in how it uses data. In both cases, the decision to change direction is more arbitrary than evidence-based.

What do we mean by this?

The data problem

When innovating, it’s important to have a data collection system in place to gather the information required to make a useful decision, such as whether to continue with the process, make a change, or abandon the activity. James Dyson famously trialed his products hundreds of times with ‘tweaks’ both large and small to get to the right product design. A hallmark of this process is the emphasis on collecting the right data at the right time (what Dyson calls testing).

Dyson has since expanded its product offerings to include lighting, personal hair-care products, industrial drying tools, and an array of vacuum models. While the data needs for each product might differ, the design-driven strategy that incorporates data throughout the decision-making process remains the same: different data, used for the right purpose.

Alas, DE has given organizations cover for making arbitrary decisions in the name of pivoting when they haven’t really executed well or given things enough time to determine whether a change of direction is warranted. Here are three things to heed when considering DE data.

  1. Process data. Without a clear indication that a program has been implemented appropriately and the constraints accounted for, how do you know that something ‘failed’ (or ‘succeeded’) based on what you did? Understanding what happened, under what conditions, and documenting the implementation behind the innovation is critical to knowing whether that innovation is really doing what we expect it to (and ensuring we capture the things we might not have expected it to do).
  2. Organizational mindfulness. The biggest challenge might be internal to the organization’s heart: its mindset. Organizational mindfulness is about paying attention to the activities, motivations, actions, and intentions of the entire enterprise and being willing (and able) to spot biases, identify blind spots, and recognize that, as much as we say we want to change and innovate, the reality is that change is disruptive for many people and is often unconsciously thwarted.
  3. Evaluability assessment. A real challenge with innovation work is knowing whether you’ve applied the right ‘dosage’ and given it the right amount of time to work. This means doing your homework, paying attention, and having patience. Homework comes in the form of background research on other, similar innovations, connecting to the wisdom of the innovators themselves (i.e., drawing on experience), and tying it all together. Paying attention ensures you have a plausible means to connect intervention to effect (or product to outcome). This is like Kenny Rogers’ ‘The Gambler’:

you got to know when to hold ’em

know when to fold ’em

know when to walk away

know when to run

An evaluability assessment can help spot the problems with your innovation data early by determining whether your program is ready to be evaluated in the first place and what methods might be best suited to determining its value.
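To make that concrete, here is a rough sketch of a readiness check; the items are hypothetical distillations of the homework, attention, and patience points above, not a formal evaluability protocol:

```python
# Hypothetical readiness checks; a real evaluability assessment is a more
# thorough, facilitated process conducted before the evaluation begins.
readiness = {
    "background research on similar innovations done": True,
    "innovators' own experience consulted": True,
    "plausible link from intervention to effect articulated": False,
    "adequate 'dosage' and time allowed before judging": False,
}

if all(readiness.values()):
    print("Plausibly ready to evaluate; pivot decisions can lean on the data.")
else:
    print("Not yet evaluable. Address first:")
    for check, done in readiness.items():
        if not done:
            print(f"  - {check}")
```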

The decision-making problem

Sometimes you have good data, but do you have the decision-making capabilities to act on it? With innovation, data rarely tells a straightforward story: it requires sensemaking. Sensemaking requires time and a socialization of the content to determine the value and meaning of data within the context in which it’s being used.

Decision-making can be impeded by a few things:

  1. Time. Straight up: if you don’t give this time and focus, no amount of good data, visualizations, summaries, or quotes will help you. You need time to reflect substantively on what you’re doing and to discuss it with the people who can make sense of it.
  2. Talent. Diverse perspectives around the table are an important part of sensemaking, but so is expertise in the process of making and implementing decisions — particularly in design. An outside consultant can help you work with your data to see possibilities and navigate blind spots in the process, as well as support your team in making important, sometimes difficult, decisions.
  3. Will. You can give the time and have the talent, but are you willing to make the most of both? For the reasons raised above about being mindful of your intentions and biases, having the right people in place will not help if you’re unwilling to change what you do, follow when led, and lead when asked.

Developmental evaluation is powerful and useful, but it is not often easy (although it can be enormously worthwhile). Like most of the important things in life, you get out what you put in. Put a little energy into DE, be mindful of the traps, and you can make this approach to evaluation your key to innovation success.

Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.

Filed Under: Research + Evaluation, Social Innovation Tagged With: data, data collection, decision-making, developmental evaluation, evaluation, innovation, social innovation, strategy

Resourcing Developmental Evaluation: The First Trap

2018-05-14 by cense

In a recent post, we introduced some of the traps organizations can fall into when using Developmental Evaluation (DE) in their work. In this second post in our series on DE and its traps, we look at the issue of resourcing.

Resourcing looks relatively simple at a distance, viewed through a frame that constrains what you see in front of you. Most evaluations are designed from this kind of perspective. Developmental Evaluation requires a different kind of frame, and the resourcing to support its deployment.

We see four keys to resourcing a DE.

Money

Let’s get the money issue out of the way first.

Evaluation budgets are commonly recommended to be set at 5–10% of a programmatic budget, although it’s widely acknowledged that real budgets often fall far short of this in practice. Indeed, some suggest the percentage should be higher, while The Hewlett Foundation recently published a benchmarking study in which it found it was able to spend much less on its evaluations to achieve impact (largely due to its size and internal capacity).

Whatever percentage an organization seeks to devote to its evaluation budget, that budget almost certainly must increase for a DE. The reason has to do with the additional human and time resources, and the related coordination costs, associated with doing DE work. The methods and tools used in a DE may not differ much from those of a conventional evaluation, but their deployment, and the kind of sensemaking that comes from the data they generate, is what sets DE apart and requires resources.

We are reluctant to recommend a firm percentage; rather, we encourage prospective evaluands (i.e., clients, partners) to consider the question:

What are you hiring a Developmental Evaluation to do for you?

This gets at the matter of purpose. DE is about supporting strategic decision-making for innovation, so the true budget for implementing a DE must include the time, personnel, and associated resources needed to integrate what emerges through the DE into the program, service, or product development stream. That requires energy and focus, and it adds to the total cost.

We suggest you consider doubling your budget estimate at a minimum, although keep in mind that some of that cost might come from budget lines previously earmarked for strategy development, program development, and design, so the overall value of a DE is likely far higher than that of a standard program evaluation.
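As a rough worked example of that arithmetic (the figures here are hypothetical, not a recommendation):

```python
program_budget = 500_000  # hypothetical programmatic budget

# Conventional guidance: evaluation at roughly 5-10% of the programmatic budget.
low, high = program_budget * 0.05, program_budget * 0.10

# The suggestion above: consider at least doubling that range for a DE,
# remembering that some of it may come from existing strategy/design lines.
de_low, de_high = low * 2, high * 2

print(f"Conventional evaluation: ${low:,.0f} - ${high:,.0f}")
print(f"DE starting estimate:    ${de_low:,.0f} - ${de_high:,.0f}")
```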

Time

While the cost of a DE might be higher, the biggest ‘expense’ might be time. Some programs like the idea of a ‘plug-and-play’ evaluation, where an evaluator designs an evaluation and then largely disappears once it’s been approved and initiated. That doesn’t work for DE.

DE requires participation from program staff, management, and potentially other stakeholders to work. An external consultant cannot, no matter how skilled or knowledgeable, provide all the answers to the questions a DE will generate, nor make all the decisions necessary to develop a program or process. Questions will come from the data collected — usually in some mixed-methods form — that illuminate emergent, dynamic properties of a program which, because of their very complexity, require diverse perspectives to understand. Those perspectives must be gathered, facilitated, and integrated into a set of decisions that can be acted on.

This time requirement will need to match the type of data collected in the evaluation design, the regularity of the meetings, and the complexity of the context over and above what is necessary to gather, analyze, and otherwise manage the data.

The biggest reason organizations fail in their DE efforts, and at innovating more broadly, is that they do not invest the time required to truly learn.

Expertise

Who is best suited to conduct your evaluation? Do you bring in an external consultant or stick with an internal evaluator? We recommend both, together. External evaluators bring the emotional and perceptual distance needed to see patterns and activities within the organization that may be ‘hidden in plain sight’ to those inside it. However, this distance also means that an external evaluator will likely miss some of the nuance and context that is absolutely critical to supporting a DE — something a well-placed internal evaluator might find accessible.

DE is relatively new, and the processes, tools, and outcomes associated with sustained innovation are also relatively unfamiliar to many organizations. Add to this the need for ongoing learning and organizational development through the process of DE, and it becomes easier to see why bringing in a skilled consultant matters. DE requires expertise, whether held by one person or spread across a group, in evaluation methods and tools, group facilitation, behavioural science, practical knowledge of complexity, design, and organizational development.

However, what makes DE most effective at supporting innovation is a sustained commitment to nurturing these skills and abilities within the organization. For that reason, we recommend bringing on consultants who can not only do DE but help their clients to learn DE and build their capacity for learning through evaluation and for evidence-informed design.

If an organization can build those capacities, it is set up for innovation success (which includes learning from failure).

Focus

Time, care (expertise), and attention are the three non-financial resources required for DE and innovation. Focus — the application of attention to something — cannot be marshaled periodically for DE to be a success; it has to be ongoing. While DE does involve specific milestones and benchmarks, it is principally about viewing a program as a living enterprise. Treat it like a pet that requires continued engagement, not a fence that needs a coat of paint every few years.

Ongoing learning happens by design. As an organization, you need to provide the means for your staff and leadership to truly learn. We have consistently encountered ‘knowledge organizations’ (organizations whose primary product is new knowledge) where staff and senior management spend less than an hour per week on activities such as reviewing literature, training, or synthesizing educational material. That is an absence of praxis (learning while doing) and a lack of focus on what the organization is all about.

To do DE requires a focus on learning, a commitment to using data to inform decisions, and a willingness to deal with uncertainty. These requirements are not one-time events; they must be managed as part of an ongoing process of building a culture of evaluation.

Resources for resourcing DE

Are you resourced for Developmental Evaluation?

If you are, or if you need some help preparing for one, contact us and we can show you new ways to build your capacity for innovation while expanding your possibilities to learn and grow.

Filed Under: Research + Evaluation Tagged With: culture of evaluation, design, developmental evaluation, evaluation, innovation, innovation design, resources, strategy, time

Developmental Evaluation: A Short Introduction

2018-04-30 by cense

Developmental Evaluation (DE) was first proposed by Michael Quinn Patton, with the support of colleagues, who wrestled with the problem of dealing with complexity in human systems and the need to provide structured, useful, actionable information for decisions that support innovation.

DE has been described as akin to taking a classic ‘road trip’: you have a destination and a planned route, but also a spirit of adventure and a willingness to deviate when needed. DE is an approach to evaluation, not a specific method or tool, designed to support decision-making for innovation. Innovation, in this case, is about the activities and decisions that allow an organization and its members to create value by design. The design may not turn out as expected, and it may produce surprises, but it is part of an intentional act to create value through new thinking and action.

Ten things about what DE is and is not

Developmental evaluation (often referred to simply as “DE”), when used to support innovation, is about weaving design together with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing and what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

There are a lot of misconceptions about what a DE is and what it is not, and we thought it would be worth addressing ten of them to provide a brief introduction to DE.

  1. DE is an approach to evaluation, not a method. Most standard methods and tools for evaluation can be used as part of a DE. Qualitative, quantitative, administrative, and ‘big’ data can all contribute to an understanding of a program when used appropriately. DE is not something you simply apply to a situation; rather, it is an engaged process of refining how you think about the data you have, what data you collect, and how you make sense of it all and apply lessons from it in practice.
  2. DE is about evaluation for strategic decision-making. If the evaluation is not useful in making decisions about a program or service, then it is not a DE. What counts as useful in decision-making is context-dependent, meaning that a DE must be tailored to the specific situational needs of a program or service.
  3. DE is not about product or service improvement; it’s about product and service development. It involves a shift in mindset from growth and ‘best practices’ to one of mindful, adaptive strategy and developmental design.
  4. DE is not separate from strategy, but a critical part of it. There must be close ties between those developing and implementing strategy and the evaluation team or evaluator. A bi-directional flow of information is required through regular, ongoing communications so that strategy informs the DE and the DE informs the strategy simultaneously.
  5. DE does not make things easier, but it can make things better. DE helps programs innovate, learn, and adapt more fully, but that isn’t always easy. A strong DE involves deep engagement with data, a commitment to learning, and a willingness to embrace (or at least accept) volatility, uncertainty, complexity, and ambiguity (VUCA). This requires changing the way organizations work and interact with their programs, which takes time, energy, and sustained attention. However, the promise is that with systematic attention and a methodology designed for VUCA, program leaders can place greater confidence in what DE generates than in standard approaches that assume more linear, stable conditions.
  6. DE can help document the innovation process. Through creating tools, processes, and decision-making structures to support innovation, DE also helps document the decisions and outcomes of those decisions. When people ask: “how did you get here?” DE provides some answers.
  7. DE does not eliminate the risks associated with VUCA. The adaptive strategy that DE is a part of can be gamed, and can become a cop-out for those who do not want to make hard decisions. Strategy is not planning; it is about “an integrated set of choices that determine where the firm should play and how it should win there” (Martin, 2014), and DE provides a means of building the data set and decision tools to support strategy.
  8. DE is not a panacea. Even with the mindset, appropriate decision-making structures, and a good design, DE is not going to solve the problems of innovation. It will give more systematic means to understand the process, outcomes, outputs, and impacts associated with an innovation, but it still means trials, errors, starts and stops, and the usual explorations that innovators need to experience. DE also requires sensemaking — a structured process of ‘making sense’ of the data that emerges from complex conditions. In these conditions, you can’t expect the data will yield obvious interpretations or conclusions, which is why a sensemaking process is necessary.
  9. Not everyone can do DE well. The popularity of DE in recent years has led to a surge in people claiming to be developmental evaluators. If you are looking for a DE specialist, consider their experience working with VUCA and complexity in particular. You will also want someone who understands strategy, group process, design principles, organizational behaviour, and organizational sensemaking, and who is an experienced evaluator. This last point means adhering to evaluation standards, and potentially recruiting someone who holds the Credentialed Evaluator (CE) designation to undertake the evaluation. Keep in mind, too, that there are many ways to be adaptive and utilization-focused in your evaluation that aren’t considered a DE.
  10. DE does not have to be complicated. While DE involves more aspects of an organization in its planning and execution, it doesn’t have to be an elaborate, consuming, or complicated endeavour. DE can be done simply on a small scale, just as it can be a large, highly involved, participatory process. DE will scale to nearly any program situation. What makes a DE are the things stated above: mindset, organizational support, and sensemaking.

Developmental Evaluation is a powerful way to help innovators learn, to demonstrate and showcase the efforts that go into making change happen, and to increase the capacity of your organization to evolve its mindsets, skillsets, and toolsets for innovation.

Are you interested in using DE and learning more about how it can support your innovation — big or small — in your services, products, or systems? Contact us and we can show you what can be done to bring DE and its potential to your organization.

Filed Under: Research + Evaluation, Toolkit Tagged With: complexity, design, developmental design, developmental evaluation, evaluation, living systems, strategy, systems thinking

Developmental Evaluation Traps

2018-04-19 by cense

Developmental evaluation (DE) is a powerful tool for supporting innovation in complex systems. 

Developmental evaluation (DE), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. For innovators, this is all connected through design.

DE helps to identify potentially promising practices or products, guide the strategic decision-making process that comes with innovation, and provide evidence to support the innovation process from ideation through to business model execution and product delivery.

As introduced over on Censemaking, DE is also filled with traps. 

Over the next couple of weeks, we’ll be providing guidance on how to navigate these traps and what your organization can do to steer clear of them and harness the great potential of this important approach to innovation strategy and evaluation.

Image credit: 200 pair telephone cable model of corpus callosum by J Brew used under Creative Commons license

Filed Under: Complexity, Research + Evaluation Tagged With: censemaking, developmental evaluation, innovation, organizational change, sensemaking
