Cense Ltd.

Inspiration, Innovation and Impact


Context Setting

2021-01-19 by cense

In any innovation project, there is a need to set a starting point so that you know where you've come from, where you are, and where you are going.

We've often referred to this as setting a baseline. Another way to frame it is as setting the stage for what's to come: your context. One of the tools to help you do this is to prepare a Living History document, a master document that tracks your activities, decisions, and observations along the journey. This is part of a larger effort to evaluate and tell the story of your innovation.

However, context-setting is more than that. Starting a business in the middle of an economic crisis or a pandemic is not the same as doing it in the middle of a boom. Measuring the early success of an ice cream shop that opens in Canada in January is different from measuring one that opens during the summer months. Same product and service, very different context.

Starting Out

Where to begin?

The first thing we suggest you do is try to view your current situation and context through the eyes of a stranger. Imagine you are coming upon a place or situation for the first time. What are you noticing?

That beginner’s mind is something that we use in Design Thinking all the time to help us ask better questions. It helps us to be mindful of our environment and ourselves and allows us to ground whatever actions we take, strategies we create, and directions we follow in the present reality — not just possibility. As innovators, we often are primed to see what could be at the expense of what is.

We don't want to lose that; we just want to set it aside at the beginning.

This involves asking questions like:

  • What is this [ ] for?
  • Why is this [ ] done the way it is?
  • What do these [people, things, tools] do?
  • What is important to the people around me in this situation?

Simple questions like these can lead to profound insights about something you thought you knew. Add in some observations, made without judgement, simply describing what you see in front of you, and you're ready to start organizing and sensemaking.

STEEP-V

We like to use the STEEP-V framework to help you organize some of what you see. STEEP-V is a means to record and organize information based on a variety of different factors present in a context. These are:

  • Social
  • Technological
  • Economic
  • Environmental
  • Political
  • Values

By organizing and inquiring about your context using these categories, we start to see what kind of situation we are in and where the areas of focus of our clients and community lie. It helps us understand why people might be more inclined to act, think, and strive in particular ways, and how what we are doing with our innovation can meet people where they are.

By combining inquiry-based questions with STEEP-V, you will draw a picture of the current context that you can use to populate your Living History document and return to as a point of comparison down the road.
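If you keep your Living History as structured notes, even a lightweight data structure makes the STEEP-V categories easy to revisit when you repeat the exercise later. Below is a minimal sketch in Python; the field names and example entries are illustrative assumptions, not a Cense template.

```python
from dataclasses import dataclass
from datetime import date

STEEP_V = ["Social", "Technological", "Economic", "Environmental", "Political", "Values"]

@dataclass
class ContextEntry:
    """One dated observation in the Living History, tagged by STEEP-V category."""
    recorded_on: date
    category: str          # one of STEEP_V
    observation: str       # a plain, judgement-free description of what you see
    source: str = "field notes"   # where the observation came from

    def __post_init__(self):
        if self.category not in STEEP_V:
            raise ValueError(f"Unknown STEEP-V category: {self.category}")

# Illustrative baseline entries (made up for the example)
baseline = [
    ContextEntry(date(2021, 1, 19), "Economic", "Local businesses are operating under pandemic restrictions."),
    ContextEntry(date(2021, 1, 19), "Values", "Clients say reliability matters more to them than novelty right now."),
]

# Group the baseline by category so it can be compared with a later snapshot
by_category = {c: [e.observation for e in baseline if e.category == c] for c in STEEP_V}
print(by_category["Economic"])
```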

By understanding the context in which you are launching your innovation you are reminding yourself of the constraints, enablers, and situations you’ve designed your product or service for. Later — months or years later — you might find that things have changed. With this baseline, you can then do this context-setting again to see whether you are still designing for it — or for the past.

Filed Under: Research + Evaluation, Strategy Tagged With: baseline, context, developmental evaluation, innovation, innovation design

Persistence: The Innovation Process Outcome

2020-12-01 by cense

When looking to evaluate innovation, many seek numbers related to product adoption, revenue generated, or people reached, when what they ought to consider first is process outcomes.

Sustainable innovation — a process, practice, and culture of design-driven creation — is the most valuable outcome for any organization. Innovation is not about creating a single item — product, service, policy — it’s about doing it regularly, consistently, over time.

Regular innovation only comes from persistence or what Seth Godin calls The Practice.

The practice, meaning the amount of activity, persistence, and consistency of effort, is what any organization should measure and be evaluated against. This fits with what we know about design thinking, performance, and innovation: the more ideas you generate, the more prototypes you create, and the more attempts you make, the more likely you are to have better ideas, launch more successful products, and create transformation.

Coming up with a single successful innovation is mostly good if you're seeking to be bought out by a competitor, and while that can be lucrative, it's not a sustainable strategy and is contingent on having one very good idea. Having many good ideas, and having them implemented into practice, is what creates sustainable, resilient organizations. It is what allows organizations to adapt in times of crisis and create new opportunities in times of contraction within their market.

This is what a culture of innovation is all about.

Metrics of Effort

There are many metrics and methods that can help capture the effort of your team in developing that culture of innovation. These can be used to complement questions we might ask about design thinking. Here are a few:

  • Number of attempts
  • Number of ideas generated / ideation sessions engaged in
  • Number of concepts proposed and prototypes developed
  • Background research gathered (e.g., artifacts)**
  • Consistency of application (i.e., ongoing use of a process and fidelity to it)
  • Number of solicitations for feedback from internal and external sources
  • Integrations within existing processes and tools
  • Materials used
  • Evaluation designs created for products or services
  • Evaluations implemented
  • Number of products launched outside of the organization
  • Number of new innovations generated (may be products, processes, or policy improvements)
  • Persistence of effort (e.g., continuity of activity, sequencing, and time-spent)

** Note that research can be a trap. It's easy to get stuck over-researching something. While research is important as a product in its own right, it's only useful here if it converts into real process or product efforts.

These are part of an Innovation Implementation Index that can help you assess which innovation activities you are undertaking and whether they are leading to actual outputs or outcomes.
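As a rough illustration of how such effort metrics might be recorded and tallied over time, here is a small Python sketch. The metric names echo the list above, but the structure and the simple consistency score are illustrative assumptions, not the Innovation Implementation Index itself.

```python
from collections import Counter
from datetime import date

# Each record is one week of innovation activity; the figures are made up.
activity_log = [
    {"week": date(2020, 11, 2), "ideas_generated": 12, "prototypes": 1, "feedback_requests": 3},
    {"week": date(2020, 11, 9), "ideas_generated": 0, "prototypes": 0, "feedback_requests": 0},
    {"week": date(2020, 11, 16), "ideas_generated": 7, "prototypes": 2, "feedback_requests": 5},
]

def effort_summary(log):
    """Tally total activity and the share of periods with any activity at all,
    a rough proxy for persistence and consistency of effort."""
    totals = Counter()
    active_periods = 0
    for record in log:
        counts = {k: v for k, v in record.items() if k != "week"}
        totals.update(counts)
        if any(counts.values()):
            active_periods += 1
    return dict(totals), active_periods / len(log)

totals, consistency = effort_summary(activity_log)
print(totals)        # total attempts, ideas, prototypes, feedback requests
print(consistency)   # share of weeks with any recorded activity (here 2/3)
```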

By looking not only at what you do but also at how frequent and persistent your efforts are, you will later be able to assess how your organization adopts, builds, and benefits from a culture of innovation.

Are you looking to build this with your organization, unit, or team? Contact us and we can help you build, assess, and sustain a culture of innovation.

Filed Under: Design, Research + Evaluation Tagged With: culture of innovation, design thinking, evaluation, implementation, innovation design, metrics

Surfacing Invisible Rules

2020-07-07 by cense

What often can hold our change initiatives back are mental models about how or why something happens. Historically, many innovations and discoveries were held back or failed outright because people were unable to see or believe what was in front of them. By asking a set of questions at the outset and throughout your project you can avoid many mishaps.

The scene below from Men In Black illustrates what happens when our mental models about the world are upended, and it asks a simple question about what we know*. (*Just prior to this scene, Will Smith's character confronts alien life forms for the first time, something that Tommy Lee Jones' character already knows and lives with.)

One way to surface these hidden assumptions is through an exercise we might call 'Invisible Rules'. This three-part exercise can help you uncover the 'hidden' rules we live by that might be holding us back from the change we are seeking to make.

The exercise involves asking a series of questions in three stages:

1. Assumptions

  • What assumptions am I operating under?
    • Consider things like people (populations, characteristics, traits, knowledge, skills, preferences), time and timing, the likelihood of success, resources required.
  • How did these assumptions come about?
    • Is the evidence based on fact or folk knowledge?
  • What evidence is there to support that these assumptions are true?
    • Is this evidence still valid? (e.g., is it based on a historical or current position? Has something changed considerably since the evidence was first generated to prompt questions about its relevance?)

2. Design

With these answers, we move to a new set of questions tied to the design of your innovation (project, product, service, etc.).

  • Can I modify any part of the design (e.g., remove, reduce, amplify, or replace) that might make it better?
  • What can I learn (borrow, modify, adapt) from other designs addressing similar issues?

3. Future-casting

Lastly, it is useful to ask yourself three “How might” questions about your innovation.

  • How might this project fail?
    • For whom? Under what conditions?
  • How might we learn about what we’re doing while we’re doing it?
    • The evaluation and reflection metrics, measures, and processes in place to learn what works and doesn’t as you go.
  • How might things change beyond our control?
    • Possible surprises that might sidetrack your plans (e.g., pandemic, government change, policy change).

This simple set of questions can produce an enormous amount of data for you and your team. In just a few hours you might save yourself years of pain and problems and see past the fence into the pool of opportunity beyond.
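If you want to run the exercise with a team and keep the answers alongside your other project records, the three stages can be captured as a simple worksheet. Here is a minimal sketch in Python; the prompts follow the questions above, but the structure is an illustrative assumption rather than a formal Cense worksheet.

```python
# The Invisible Rules prompts, grouped by stage, as they appear above.
INVISIBLE_RULES = {
    "Assumptions": [
        "What assumptions am I operating under?",
        "How did these assumptions come about?",
        "What evidence is there to support that these assumptions are true?",
    ],
    "Design": [
        "Can I modify any part of the design (remove, reduce, amplify, replace) to make it better?",
        "What can I learn (borrow, modify, adapt) from other designs addressing similar issues?",
    ],
    "Future-casting": [
        "How might this project fail?",
        "How might we learn about what we're doing while we're doing it?",
        "How might things change beyond our control?",
    ],
}

def blank_worksheet(rules=INVISIBLE_RULES):
    """Return an empty worksheet: one answer slot per prompt, ready for a team session."""
    return {stage: {prompt: "" for prompt in prompts} for stage, prompts in rules.items()}

worksheet = blank_worksheet()
worksheet["Assumptions"]["What assumptions am I operating under?"] = "Users will adopt the tool without training."
```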

Want help in seeing things differently and asking better questions in your work? There are some simple steps that can help your team see things that others can’t. Contact us. This is what we do.

We'd be happy to help.

Filed Under: Design, Toolkit Tagged With: assumptions, design, innovation, innovation design, innovation development, toolkit

Creating Design Pathways for Learning

2019-08-29 by cense

Capturing learning requires a focus on the journey, not just the end. Thinking like a designer can shape what we learn and how.

Learning is both a journey and a destination and it’s through recognizing this that we can better facilitate intentional, deliberative learning to support innovation and development. By approaching this journey through the lens of service design — as a design-driven evaluation — we can better design the data and insights that come from it to support learning.

What is learning?

Learning comes from perception, experience, feedback, and reflection. You first encounter something and perceive it with your senses (e.g., reading, observing, hearing, feeling), then experience something (e.g., movement, action, tension, emotion), which gives you feedback about the perception and experience that is synthesized through reflection (e.g., memory, comparison with related things, contemplation).

Evaluation is principally a tool for learning because it focuses our perception on things, monitors the experience, provides the feedback, and can support reflection by offering a systematic, structured means to make sense of what's happened.

Evaluation is simply the means of answering the question “what happened?” in a systematic manner.

For those developing an innovation, looking to change, or seeking to improve the sustainability of our systems, answering ‘what happened?’ is the difference between real impact and nothing.

Mapping the journey with data

A journey map is a tool that is used in service design to help understand how service users (e.g., clients, customers, patients, students) might encounter the service and system to achieve a particular goal. These can be displayed visually with great artistry (see here for a beautiful example of the Indigenous cancer patient journey in BC) or simply with boxes and arrows.

It is one of many types of maps that can be created to illustrate the ways in which a user might navigate or approach a service, decision, or pathway to learning.

For innovators and evaluators, these tools present an opportunity to create touchpoints for data collection and deeper understanding of the service throughout. Too often, evaluation is focused on the endpoint or an overall assessment of the process without considering ways to embed opportunities to learn and support learning throughout a journey.

We feel this is a lost opportunity.

Without the opportunity to perceive, gain feedback, and reflect on what happens, we are left with experience only, which isn't a great teacher on its own and is filled with many biases that can shift focus away from some of the causes and consequences associated with what's happening. This is not to say that there isn't bias in evaluation; what makes evaluation different is that it is systematic and accounts for those biases in its design.

Service design meets evaluation

Design-driven evaluation is about integrating evaluation into the design of a program to create the means for developing systematic, structured feedback to support learning along a service journey. One of the simplest ways to do this is to build a layer of evaluation on the service journey map.

Consider a detailed service journey map like the patient journey cited above. Along this winding, lengthy journey from pre-diagnosis to the end, there are many points where we can learn from the patient, providers, system administrators, and others associated with the health-seeking person, and that learning can inform our understanding of the program or system they are in.

By embedding structured (not rigid) data collection into the system we can better learn what’s happening — in both process and effects. Taking this approach offers us the following:

  • Identifies the activities and behaviours that take place throughout the journey.
  • Provides a lens on the service through the perspective of a user. The same service could be modelled using a different perspective (e.g., caregiver, healthcare professional, health administrator).
  • Identifies the systems, processes, people, and relationships that a person encounters on the way through, by, or in spite of a service.
  • Allows us to identify how data can fit into a larger narrative of a program or service and be used to support the delivery of that service.
  • Anchors potential data collection points to service transitions and activities to help identify areas of improvement, development, or unnecessary features.
  • Provides a visual means of mapping the structural, behavioural, and social processes that underpin the program to test the theory of change or logic model (does it hold up?).
  • Offers opportunities to explore alternative futures without changing the program (what happens if we did X instead of Y, and how would that change the pathway?).

These are some of the ways in which taking a design-driven approach and using common methods from service design can improve or enhance our understanding of a program. Not a bad list, right? That’s just a start.
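To make this concrete, a journey map with an evaluation layer can be as simple as a list of stages, each carrying its touchpoints and the data you intend to collect there. The sketch below is illustrative; the stages and measures are assumed for a generic service and are not drawn from the patient journey map cited above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JourneyStage:
    """One stage of a service journey, with the evaluation layer attached."""
    name: str
    touchpoints: List[str]                                     # where the user meets the service
    data_to_collect: List[str] = field(default_factory=list)   # the embedded evaluation layer

# A deliberately generic service journey with structured (not rigid) data collection
journey = [
    JourneyStage("Awareness", ["website", "referral"], ["how the user heard about the service"]),
    JourneyStage("Intake", ["application form", "first call"], ["wait time", "questions users ask"]),
    JourneyStage("Service delivery", ["sessions", "check-ins"], ["attendance", "user-reported experience"]),
    JourneyStage("Follow-up", ["exit survey", "6-month contact"], ["outcomes", "what the user did next"]),
]

# A quick audit: which stages still have no evaluation layer?
gaps = [stage.name for stage in journey if not stage.data_to_collect]
print(gaps or "Every stage has at least one data collection point.")
```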

Try this out. Service design tools and thinking models coupled with evaluation can provide access to the enormous wealth of learning opportunities that exist within your programs. It helps you to uncover the real impact of your programs and innovation value hidden in plain sight.

To learn more about this approach to evaluation, innovation, and service design contact us. We’d love to help you improve what you do and get more value from all your hard work.

Photos by Lili Popper, Billy Pasco, and Startaê Team on Unsplash. Thank you to these artists for making their work available for use.

Filed Under: Design, Research + Evaluation, Social Innovation Tagged With: design, design-driven evaluation, developmental design, evaluation, innovation, innovation design, journey map, learning, organizational learning, service design

Resourcing Developmental Evaluation: The First Trap

2018-05-14 by cense

In a recent post, we introduced some of the traps organizations can fall into when using Developmental Evaluation (DE) in their work. In this second post in our series on DE and its traps, we look at the issue of resourcing.

Like the image above, resourcing looks relatively simple at a distance, through a frame that constrains what you see in front of you. Most evaluations are designed through this kind of perspective. Developmental Evaluation requires a different kind of frame and the resourcing to support its deployment.

We see four keys to resourcing a DE.

Money

Let's get the money issue out of the way first.

Evaluation budgets are typically recommended to be set at 5-10% of a programmatic budget, although it's commonly acknowledged that real budgeting often falls far short of this in practice. Indeed, some suggest the percentage should be higher, while The Hewlett Foundation recently published a benchmarking study in which it found it was able to spend much less on its evaluations and still achieve impact (largely due to its size and internal capacity).

Whatever percentage an organization seeks to devote to its evaluation budget, that budget almost certainly must increase for a DE. The reason has to do with the additional human and time resources, and the related coordination costs, associated with doing DE work. The methods and tools associated with a DE may not differ much from those of a conventional evaluation, but their deployment, and the kind of sense-making that comes from the data they generate, are what set it apart and require resources.

We are reluctant to recommend a firm percentage; rather, we encourage prospective evaluands (i.e., clients, partners) to consider the question:

What are you hiring a Developmental Evaluation to do for you?

This gets at the matter of purpose. DE is about supporting strategic decision-making for innovation; thus, the true budget for implementing a DE must include the time, personnel, and associated resources needed to integrate what emerges through a DE into the program, service, or product development stream. That requires energy and focus, and it adds to the total cost.

We suggest you consider at least doubling your planned evaluation budget, although keep in mind that some of that cost might come from budget lines previously earmarked for strategy development, program development, and design, so the overall value of DE is likely far higher than that of a standard program evaluation.
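To make the arithmetic concrete, here is an illustrative calculation using the 5-10% convention and the "at least double it" suggestion above. The program budget figure and the assumed offset from existing strategy and design budget lines are made up for the example.

```python
program_budget = 1_000_000  # illustrative program budget, not a recommendation

# Conventional evaluation: commonly recommended at 5-10% of the programmatic budget
conventional_low, conventional_high = 0.05 * program_budget, 0.10 * program_budget

# Developmental Evaluation: consider at least doubling that range
de_low, de_high = 2 * conventional_low, 2 * conventional_high

# Part of the DE cost may be offset by budget lines already earmarked for strategy
# development, program development, and design (assumed here at 5% for illustration)
existing_design_lines = 0.05 * program_budget
net_new_low, net_new_high = de_low - existing_design_lines, de_high - existing_design_lines

print(f"Conventional evaluation:    ${conventional_low:,.0f} - ${conventional_high:,.0f}")
print(f"DE budget (doubled):        ${de_low:,.0f} - ${de_high:,.0f}")
print(f"Net new cost after offsets: ${net_new_low:,.0f} - ${net_new_high:,.0f}")
```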

Time

While the cost of a DE might be higher, the biggest 'expense' might be time. Some programs like the idea of a 'plug-and-play' evaluation where an evaluator designs an evaluation and then largely disappears once it's been approved and initiated. That doesn't work for DE.

DE requires participation from program staff, management, and potentially other stakeholders to work. An external consultant cannot, no matter how skilled or knowledgeable, provide all the answers to the questions that a DE will generate or make the decisions necessary to develop a program or process. Questions will come from the data collected, usually in some mixed-methods form, that illuminate emergent, dynamic properties in a program which, because of their very complexity, require diverse perspectives to understand. This requires that those perspectives be gathered, facilitated, and integrated into a set of decisions that can be taken forward.

This time requirement will need to match the type of data collected in the evaluation design, the regularity of the meetings, and the complexity of the context over and above what is necessary to gather, analyze, and otherwise manage the data.

The biggest reason organizations fail at their DE efforts, and at innovation more broadly, is that they do not invest the time required to truly learn.

Expertise

Who is best suited to conduct your evaluation? Do you bring in an external consultant or stick with an internal evaluator? We recommend both, together. External evaluators bring the emotional and perceptual distance to see patterns and activities within the organization that may be 'hidden in plain sight' to those inside it. However, this distance also means that an external evaluator will likely miss some of the nuance and context that is absolutely critical to supporting a DE, something a well-placed internal evaluator is far better positioned to access.

DE is relatively new and the processes, tools, and outcomes associated with sustained innovation are also relatively unfamiliar to many organizations. Add to this the need for ongoing learning and organizational development through the process of DE and it becomes easier to see why bringing in a skilled consultant matters. DE requires expertise in evaluation methods and tools, group facilitation, behavioural science, practical knowledge of complexity, design, and organizational development — whether within a person or group.

However, what makes DE most effective at supporting innovation is a sustained commitment to nurturing these skills and abilities within the organization. For that reason, we recommend bringing on consultants who can not only do DE but help their clients to learn DE and build their capacity for learning through evaluation and for evidence-informed design.

If an organization can build those capacities, they are set up for innovation success (which includes learning from failure).

Focus

Time, care (expertise), and attention are the three non-financial resources required for DE and innovation. Focus — the application of attention to something — cannot be marshaled periodically for DE to be a success; it has to be ongoing. While DE does involve specific milestones and benchmarks, it is principally about viewing a program as a living enterprise. Thus, one has to treat it like having a pet that requires continued engagement, not as a fence that requires a coat of paint every few years.

Ongoing learning happens by design. As an organization, you need to provide the means for your staff and leadership to truly learn. We have consistently encountered 'knowledge organizations', whose primary product is new knowledge, where staff and senior management spend less than an hour per week on activities such as reviewing literature, training, or synthesizing educational material. That is an absence of praxis (learning while doing) and a lack of focus on what the organization is all about.

To do DE requires a focus on learning, commitment to using data to inform decisions, and a willingness to deal with uncertainty. These requirements are not one-time events but must be managed as part of an ongoing process as part of building a culture of evaluation.

Resources for resourcing DE

Are you resourced for Developmental Evaluation?

If you are or require some help in preparing for one, contact us and we can show you new ways to build your capacity for innovation while expanding your possibilities to learn and grow.


Filed Under: Research + Evaluation Tagged With: culture of evaluation, design, developmental evaluation, evaluation, innovation, innovation design, resources, strategy, time

