Cense

Innovation to Impact

Innovation’s Single Biggest Question

2018-09-18 by cense

There is no bigger question for innovators — social, product, service, policy — than: What are you hiring this [ ] to do for you?

Let’s break this question down a little and explain why we ask it at the beginning of any engagement and throughout, from the first meeting to the final run of the evaluation data. The question gets us to shape what and how we might design our innovation, while the answer tells us what kind of value it generates and the impact it produces.

What’s in a question?

The start of the question is about you, the aspiring innovator. This highlights the role of the creator and reminds us that we are generating this ‘thing’ by procurement, by design, or by simply encouraging something to be made. Without us (i.e., you), nothing changes.

The active use of hire is about the reality that we are paying for innovation through our time, our energy, our focus, our social (and often political) capital, and our money. All of these could be spent elsewhere. Design is an investment and it’s purposeful. Asking this big question gets us to pause and think deeply about what we’re putting into innovation and what we’re looking to get out of it.

The [ ] is the thing you’re hiring — the proposed service, experience, product, policy, or ecosystem — and is what you’re purposefully bringing about. This is your idea manifest into something real.

The last part, ‘to do for you’, is active: it’s about ensuring that you’re clear about what purposes the innovation serves. No matter how beneficial your planned innovation might be for others, you are ultimately asking it to serve a role and fill a need for you. It is you who wants to solve a problem, build a market, or prevent something from happening, and this requires some clarity to innovate well.

What’s in an answer?

Innovation is not just about creating things; it’s also about evaluating the things we create. If our novel products, services, experiences, and policies don’t generate value for people, they aren’t really innovations. They’re just stuff.

An evaluation perspective on the question asked above might look at things like:

  • The roles people played in the innovation process, including the skills they used, the experiences they had, and the insights they gained along the way. This learning feeds into our understanding of how an innovation develops, along with the people and organization it is a part of. All of that is part of the innovation dividend, or ROI.

  • The resources used as part of the ‘hiring’ process — money, time, human resources, and other capital — all of which help us look at the value of the initiative to see whether the costs and benefits make sense.

  • The ‘things’ produced — the prototypes, their functioning, their benefits, and their weaknesses. Evaluation also provides a means to document the iterations, the steps taken, the new ‘offshoots’ that might emerge, and the resulting products, services, experiences, and policies.

  • Lastly, the innovation needs to fulfill some requirements or expectations, and evaluation looks at what it does in the world, for whom, under what conditions, and what other impacts might emerge unintentionally. This helps assess risks and benefits and find new opportunities for further development and innovation.

Better questions, better answers

Innovation is what will drive much of the future value of your organization. It’s what allows you to build, grow, adapt, or sustain what you’re doing because even if you don’t feel a need to change, everything is changing around you and sometimes you need to change just to stay where you are.

By asking this one simple question, you might find answers that lead you to much better innovations with which to shape and create that future.

We help our clients ask this question. If you want our help, contact us and we’ll gladly help you ask better questions for better answers.

Filed Under: Design, Research + Evaluation Tagged With: design, design thinking, evaluation, impact, value

Evaluation As Part of An Innovation Value ROI

2018-09-11 by cense

Follow us for a moment. We’re going to talk about design, innovation, evaluation, and how they all go together.

Design is really the discipline — the theory, science, and practice — of innovation. That means that if you are innovating, you’re designing. Innovating is about adding value by introducing something new to a situation — it might be entirely new, a twist on an existing idea, or an old idea placed in a new context. Innovation and design are about taking ideas and purposefully transforming them into something real in the form of services, products, or experiences.

Design and innovation are all about creating value.

Thus, understanding the value of design is partly about understanding the valuation of innovation. At the root of evaluation is the concept of value. One of the most widely used definitions of evaluation is that it is about merit, worth, and significance — with worth being a stand-in for value.

Understanding value

Value can only be understood by asking the right questions, because value is relative: different people will see the worth of the same thing differently.

One of the big questions professional designers wrestle with at the start of any engagement with a client is:

“What are you hiring [your product, service, or experience] to do?”

What evaluators ask is: “Did your [product, service, or experience (PSE)] do what you hired it to do?”

AND

“To what extent did your PSE do what you hired it to do?”

“Did your PSE operate as it was expected to?”

“What else did your PSE do that was unexpected?”

“What lessons can we learn from your PSE development that can inform other initiatives and build your capacity for innovation as an organization?”

In short, evaluation is about asking:

“What value does your PSE provide and for whom and under what context?”

Value creation, redefined

Without asking the questions above, how do we know value was created at all? Without evaluation, there is no way to claim that value was generated by a PSE, that expectations were met, or that what was designed was implemented at all.

By asking these questions about value and how we can know more about it, innovators are better positioned to design PSEs that generate value for their users, customers, clients, and communities as well as their organizations, shareholders, funders, and leaders.

This redefinition of value as an active concept gives us the opportunity to see what the return on investment — time, money, energy, commitment — can yield an organization in real time. This means that value is fluid and dynamic, and that it can be generated on an ongoing basis. It’s not just what you report at the end of the fiscal year or project.

Imagine reporting real-time value at your next stakeholder, staff, or shareholder meeting. Imagine knowing what you’re creating now and having that focus your efforts on what you could create in the near- and long-term future.

Evaluation is how you do it. It’s in the name itself.

If you’re looking to hire an evaluation to build your innovation capacity, contact us at Cense. That’s what we do.


Filed Under: Research + Evaluation Tagged With: design, evaluation, innovation, investment, ROI, value

Developmental Evaluation Trap #4: Organizational culture

2018-05-29 by cense

In this latest post in a series on Developmental Evaluation (DE) and its traps, we look at the innovator’s trap of culture. In a quote widely attributed to Peter Drucker:

Culture eats strategy for lunch

Culture is basically “the way things are done around here”, which includes the policies, practices, and people that make what happens, happen. Culture can be a trap when we fail to prepare for the change in how we do things that comes with adopting a developmental approach.

Just as we might fear success (discussed in an earlier post), we may also fail to prepare for it (or tolerate it) when it comes. Success with one goal means having to set new goals. It moves the goal posts. It also means that one needs to reframe what success means going forward.

Many successful sports teams face the problem of reframing their mission after winning a championship: what it takes to win the first time is different when the motivation, context, and opportunities shift because of that victory. The same thing is true for organizations.

Building a culture of innovation

We’re not naïve enough to think that we can simply offer you a few bullet points and expect you to change your culture overnight. It is a big, important job. However, there is much research on behaviour change and organizational development that we’ve drawn on to help get you started.

  1. Manage deadlines differently. Deadlines drive much of our organizational behaviour and strategy, yet research has shown that deadline-driven work is less satisfying, that people’s decision-making gets impaired under the stress of urgency, and that priority isn’t always given to the more important tasks. Consider developmental milestones, which are similar to what we use in other areas of life. It’s a mindset that looks at tasks, accomplishments, and expectations in an evolving context with adaptive targets.
  2. Plan your stops as well as your starts. Developmental Evaluation can not only help you build out your innovation, it can also help determine what needs to stop. This means making hard choices about which activities you might continue, what needs to be stopped, and what new things you wish to build out. The Tamarack Stop Start Continue tool provides a practical means to guide an organization through the process of making these strategic decisions. Deciding what you will stop is just as important as creating innovations.
  3. Measure performance compassionately. New research has emerged showing that intensive performance measurement can actually hamper the performance of an organization. Collecting too much data or making collection onerous, not disclosing the purposes of the data, not using the data collected, failing to engage those involved in the program in the design of the data collection, or neglecting to collect context-rich data that can help explain the mediators and moderators of performance are all ways to alienate people from evaluation and limit its use. A strong DE approach seeks to engage users, making for more effective evaluation overall.
  4. Give data back. Show those involved in the program, and in the evaluations associated with it, what the data reveal and what decisions are made from them. By making evaluation data transparent, you build trust, and also a culture of innovation, by showing the purpose of the data and the ways in which it is used to make decisions. This kind of accounting to your users and stakeholders is part of developing a culture of innovation.

Getting the right data, using it respectfully, and making it visible are ways in which the culture of an organization can change through developmental evaluation, leading to the kind of system shifts that allow great ideas to emerge and flourish.

Interested in learning more about what DE is and how implementing evaluation in this manner can support your innovation efforts? Contact us to find out more.

Filed Under: Psychology, Research + Evaluation Tagged With: culture of innovation, design, developmental evaluation, evaluation, innovation, leadership, mission, psychology, sport, strategy, success, vision

Resourcing Developmental Evaluation: The First Trap

2018-05-14 by cense

In a recent post, we offered a short introduction to Developmental Evaluation (DE) and the traps organizations can fall into when using it in their work. In this second post in our series on DE and its traps, we look at the issue of resourcing.

Resourcing looks relatively simple at a distance, viewed through a frame that constrains what you see in front of you. Most evaluations are designed through this kind of perspective. Developmental Evaluation requires a different kind of frame and the resourcing to support its deployment.

We see four keys to resourcing a DE.

Money

Let’s get the money issue out of the way first.

Evaluation budgets are typically recommended to be set at 5–10% of a programmatic budget, although it’s commonly acknowledged that real budgets often fall far short of this in practice. Indeed, some suggest the percentage should be higher, while The Hewlett Foundation recently published a benchmarking study in which it found it was able to spend much less on evaluation and still achieve impact (largely due to its size and internal capacity).

Whatever percentage an organization seeks to devote to its evaluation budget, that budget almost certainly must increase for a DE. The reason has to do with the additional human and time resources, and the related coordination costs, associated with doing DE work. The methods and tools used in a DE may not differ much from those of a conventional evaluation, but their deployment, and the kind of sense-making that comes from the data they generate, is what sets DE apart and requires resources.

We are reluctant to recommend a firm percentage; rather, we encourage prospective clients and partners to consider the question:

What are you hiring a Developmental Evaluation to do for you?

This gets at the matter of purpose. DE is about supporting strategic decision-making for innovation; thus, the true budget for implementing a DE must include the time, personnel, and associated resources needed to integrate what emerges through a DE into the program, service, or product development stream. That requires energy and focus, and it adds to the total cost.

We suggest you consider doubling your evaluation budget estimate at a minimum, although keep in mind that some of that cost might come from budget lines previously earmarked for strategy development, program development, and design, so the overall value of DE is likely far higher than that of a standard program evaluation.
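To make the budgeting arithmetic concrete, here is a minimal sketch (in Python) of the heuristic described above: the conventional 5–10% evaluation share, doubled for a DE. The function name, defaults, and the example program budget are ours for illustration only; treat the output as a starting estimate, not a formula Cense prescribes.

```python
# A minimal sketch of the DE budgeting heuristic described above.
# Assumptions (from the post): evaluation budgets conventionally run
# 5-10% of the program budget, and a DE budget should be roughly
# double that at a minimum. Names and defaults are illustrative only.

def de_budget_range(program_budget: float,
                    conventional_share: tuple[float, float] = (0.05, 0.10),
                    de_multiplier: float = 2.0) -> tuple[float, float]:
    """Return a (low, high) starting estimate for a DE budget."""
    low_share, high_share = conventional_share
    return (program_budget * low_share * de_multiplier,
            program_budget * high_share * de_multiplier)

if __name__ == "__main__":
    low, high = de_budget_range(500_000)  # e.g., a $500k program
    print(f"DE budget estimate: ${low:,.0f} to ${high:,.0f}")
    # -> DE budget estimate: $50,000 to $100,000
```

Remember that some of this cost may be offset by existing strategy and program development budget lines, as noted above.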

Time

While the cost of a DE might be higher, the biggest ‘expense’ might be time. Some programs like the idea of a ‘plug-and-play’ evaluation, where an evaluator designs an evaluation and then largely disappears once it’s been approved and initiated. That doesn’t work for DE.

DE requires participation from program staff, management, and potentially other stakeholders to work. An external consultant cannot, no matter how skilled or knowledgeable, provide all the answers to the questions that a DE will generate, nor make the decisions necessary to develop a program or process. Questions will come from the data collected — usually in some mixed-methods form — that illuminate emergent, dynamic properties of a program which, because of their very complexity, require diverse perspectives to understand. This requires that those perspectives be gathered, facilitated, and integrated into a set of decisions that can be taken forward.

The time required will need to match the type of data collected in the evaluation design, the regularity of the meetings, and the complexity of the context — over and above what is necessary to gather, analyze, and otherwise manage the data.

The biggest reason organizations fail at their DE efforts — and at innovating more broadly — is that they do not invest the time required to truly learn.

Expertise

Who is best suited to conduct your evaluation? Do you bring in an external consultant or stick with an internal evaluator? We recommend both, together. External evaluators bring the emotional and perceptual distance to see patterns and activities within the organization that may be ‘hidden in plain sight’ to those inside it. However, this distance also means that the external evaluator likely misses some of the nuance and context that is absolutely critical to supporting a DE — something that a well-placed internal evaluator can access.

DE is relatively new, and the processes, tools, and outcomes associated with sustained innovation are also relatively unfamiliar to many organizations. Add to this the need for ongoing learning and organizational development through the process of DE, and it becomes easier to see why bringing in a skilled consultant matters. DE requires expertise in evaluation methods and tools, group facilitation, behavioural science, practical knowledge of complexity, design, and organizational development — whether found in one person or spread across a group.

However, what makes DE most effective at supporting innovation is a sustained commitment to nurturing these skills and abilities within the organization. For that reason, we recommend bringing on consultants who can not only do DE but also help their clients learn DE and build their capacity for learning through evaluation and for evidence-informed design.

If an organization can build those capacities, it is set up for innovation success (which includes learning from failure).

Focus

Time, care (expertise), and attention are the three non-financial resources required for DE and innovation. Focus — the application of attention to something — cannot be marshaled periodically for DE to be a success; it has to be ongoing. While DE does involve specific milestones and benchmarks, it is principally about viewing a program as a living enterprise. Thus, one has to treat it like having a pet that requires continued engagement, not as a fence that requires a coat of paint every few years.

Ongoing learning happens by design. As an organization, you need to provide the means for your staff and leadership to truly learn. We have consistently encountered ‘knowledge organizations’ — whose primary product is new knowledge — where staff and senior management spend less than an hour per week on activities such as reviewing literature, training, or synthesizing educational material. That is an absence of praxis — learning while doing — and a lack of focus on what the organization is all about.

To do DE requires a focus on learning, a commitment to using data to inform decisions, and a willingness to deal with uncertainty. These requirements are not one-time events but must be managed as part of an ongoing process of building a culture of evaluation.

Resources for resourcing DE

Are you resourced for Developmental Evaluation?

If you are — or if you need help preparing for one — contact us and we can show you new ways to build your capacity for innovation while expanding your possibilities to learn and grow.


Filed Under: Research + Evaluation Tagged With: culture of evaluation, design, developmental evaluation, evaluation, innovation, innovation design, resources, strategy, time

Developmental Evaluation: A Short Introduction

2018-04-30 by cense

Developmental Evaluation (DE) was first proposed by Michael Quinn Patton, with the support of colleagues, to wrestle with the problem of dealing with complexity in human systems and the need to provide structured, useful, actionable information for making decisions that support innovation.

DE has been described as akin to taking a classic ‘road trip’: you have a destination and a planned route, but also a spirit of adventure and a willingness to deviate when needed. DE is an approach to evaluation, not a specific method or tool, designed to support decision-making for innovation. Innovation, in this case, is about the activities and decisions that allow an organization and its members to create value by design. The design may not turn out as expected, or it may produce surprises, but it is part of an intentional act to create value through new thinking and action.

Ten things about what DE is and is not

Developmental evaluation (often referred to simply as “DE”), when used to support innovation, is about weaving design together with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing and what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guides the strategic decision-making that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

There are a lot of misconceptions about what a DE is and what it is not, and we thought it might be worth addressing ten of them as a brief introduction to DE.

  1. DE is an approach to evaluation, not a method. Most standard methods and tools for evaluation can be used as part of a DE. Qualitative, quantitative, administrative, and ‘big’ data can all contribute to an understanding of a program when used appropriately. DE is not something that you simply apply to a situation; rather, it is an engaged process of refining how you think about the data you have, what data you collect, and how you make sense of it all and apply lessons from it in practice.
  2. DE is about evaluation for strategic decision-making. If the evaluation is not useful in making decisions about a program or service, then it is not a DE. What is considered useful in decision-making is context-dependent, meaning that a DE must be tailored to the specific situational needs of a program or service.
  3. DE is not about product or service improvement, it’s about product and service development. It involves a shift in mindset from growth and ‘best practices’ to one of mindful, adaptive strategy and developmental design.
  4. DE is not separate from strategy, but a critical part of it. There must be close ties between those developing and implementing strategy and the evaluation team or evaluator. A bi-directional flow of information is required through regular, ongoing communications so that strategy informs the DE and the DE informs the strategy simultaneously.
  5. DE does not make things easier, but it can make things better. DE helps programs innovate, learn, and adapt more fully, but that isn’t always easy. A strong DE involves deep engagement with data, a commitment to learning, and a willingness to embrace (or at least accept) volatility, uncertainty, complexity, and ambiguity (VUCA). This means changing the way organizations work and interact with their programs, which requires time, energy, and sustained attention. However, the promise is that, with systematic attention and a methodology designed for VUCA, program leaders can place greater confidence in what DE generates than in standard approaches that assume a more linear, stable set of conditions.
  6. DE can help document the innovation process. Through creating tools, processes, and decision-making structures to support innovation, DE also helps document the decisions and outcomes of those decisions. When people ask: “how did you get here?” DE provides some answers.
  7. DE does not eliminate the risks associated with VUCA. The adaptive strategy that DE is a part of can be gamed, and can become a cop-out for those who do not want to make hard decisions. Strategy is not planning; it’s about “an integrated set of choices that determine where the firm should play and how it should win there” (Martin, 2014), and DE provides a means of building the data set and decision tools to support strategy.
  8. DE is not a panacea. Even with the mindset, appropriate decision-making structures, and a good design, DE is not going to solve the problems of innovation. It will give more systematic means to understand the process, outcomes, outputs, and impacts associated with an innovation, but it still means trials, errors, starts and stops, and the usual explorations that innovators need to experience. DE also requires sensemaking — a structured process of ‘making sense’ of the data that emerges from complex conditions. In these conditions, you can’t expect the data will yield obvious interpretations or conclusions, which is why a sensemaking process is necessary.
  9. Not everyone can do DE well. The popularity of DE in recent years has led to a surge in those claiming to be developmental evaluators. If you are looking for a DE specialist, consider their experience working with VUCA and complexity in particular. You will also want to find someone who understands strategy, group process, design principles, organizational behaviour, and organizational sensemaking, and who is an experienced evaluator. This last point means adhering to evaluation standards and potentially recruiting someone who holds the Credentialed Evaluator (CE) designation to undertake the evaluation. There are also many ways to be adaptive and utilization-focused in your evaluation that aren’t considered DE.
  10. DE does not have to be complicated. While DE requires involving more aspects of an organization in its planning and execution, it doesn’t have to be an elaborate, consuming, or complicated endeavour. DE can be done simply on a small scale just as it can be a large, highly involved, participatory process done at a large scale. DE will scale to nearly any program situation. What makes a DE are the things stated above — things like mindset, organizational support, and sensemaking.

Developmental Evaluation is a powerful way to help innovators learn, demonstrate and showcase the efforts that go into making change happen, and to increase the capacity of your organization to evolve its mindsets, skillsets, and toolsets for innovation.

Are you interested in using DE and learning more about how it can support your innovations — big or small — in your services, products, or systems? Contact us and we can show you what can be done to bring DE and its potential to your organization.


Filed Under: Research + Evaluation, Toolkit Tagged With: complexity, design, developmental design, developmental evaluation, evaluation, living systems, strategy, systems thinking
