Cense

Innovation to Impact

Sensemaking in Crisis

2020-06-30 by cense

Sensemaking is a social process that helps us make sense of data, information, and knowledge in times of complexity. It’s often used in innovation contexts, where we are fitting data to a unique situation.

The RSA (the Royal Society for the encouragement of Arts, Manufactures and Commerce), a UK-based charity and think tank, has recently updated and revised its collective sense-making framework, which provides a clear example of ways to consider change-making and leading in times of crisis.

The 2×2 framework, presented below, helps to frame activities that may have stopped or started during a crisis and which of them we may wish to amplify, end, abandon, or re-start.
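As a rough illustration of how those quadrants might be recorded in practice, here is a minimal sketch in Python. It is our own illustration rather than an RSA tool: the activity names are hypothetical, and the mapping of the four labels to the quadrants is our reading of the framework.

    # Illustrative sketch only: the labels follow the article's wording; the
    # mapping of labels to quadrants is our reading of the framework, not the RSA's.
    def crisis_quadrant(started_during_crisis: bool, wanted_in_future: bool) -> str:
        """Place an activity in the 2x2 based on two sensemaking questions."""
        if started_during_crisis:
            return "amplify" if wanted_in_future else "end"
        return "re-start" if wanted_in_future else "abandon"

    # Hypothetical activities an organization might review in a sensemaking session
    activities = {
        "weekly all-staff video check-in": (True, True),    # started, keep
        "in-person workshop series":       (False, True),   # stopped, want back
        "printed quarterly reports":       (False, False),  # stopped, let go
        "daily crisis status e-mails":     (True, False),   # started, wind down
    }

    for name, (started, wanted) in activities.items():
        print(f"{crisis_quadrant(started, wanted):>8}  {name}")

In a real sensemaking session the answers to those two questions are negotiated by the group; a table like this simply records the result so it can be revisited as conditions change.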

Developmental Thinking

What the RSA framework embodies is what we call developmental thinking. This is the kind of thinking embedded in Developmental Evaluation, design, and innovation: taking information and feedback from the activities in the system and making adaptive, strategic decisions to keep the organization developing (evolving) through learning.

Learning is about taking action based on new information, and in some cases — such as complex situations with lots of change and activity — this learning must come from sensemaking. Sensemaking tells us when to stop, start, pause, and wind down activities.

The RSA proposes conducting this sensemaking over time, through a process akin to developmental design and developmental evaluation carried on throughout the crisis, which, in the case of events like COVID-19, might be protracted and continue to evolve. As noted below, it also recognizes that sensemaking is tied to systems thinking, where events (the most visible parts of a system) are built upon larger sets of behaviours, structures, and paradigms.

Making use of this requires a monitoring and evaluation system tied to an overall developmental, design-driven process. It’s not difficult, but it does require substantial mindset shifts and organizational supports. Yet the payoff is that your organization is adaptive and working with what’s happening and what’s emerging, rather than stuck trying to make what used to work come alive in an environment that has not only changed, but might be different altogether.

If your organization needs help in reshaping your work to make the most of what you have and pivot to what’s needed next, contact us. This is what we do.

Filed Under: Psychology, Toolkit Tagged With: complexity, developmental design, developmental evaluation, framework, learning, organizational learning, sensemaking, systems thinking, The RSA

Developmental Evaluation Preparedness

2019-10-08 by cense

Pilots have a pre-flight check before they fly a plane. A developmental evaluation journey requires the same kind of preparedness.

Developmental Evaluation (DE) is an approach to strategic learning designed for understanding and supporting innovation. This approach to data-driven learning requires a specific set-up to do well. While we need a mindset, skillset, and toolset to do DE, we also need a receptive system to make it all work.

Here are the things you can do to lay the groundwork for doing DE and making it a success. Think of this as your pre-flight checklist for a journey of learning together.

These are attributes that may be found within individuals, but it’s better if they are distributed widely across the organization.

Mindset

The most important aspect of a Developmental Evaluation is creating the right mindset. This mindset is focused on complexity, adaptation, and development (versus improvement and linear growth). A list of statements to examine whether you have the right mindset is available on Censemaking and can provide a start.

Some other mindset questions include:

  1. Openness to experience. A DE will likely take a journey that can’t be fully predicted, much like a road trip. An openness to a non-linear pathway toward a preferred destination and the willingness to adapt the path and the destination based on evaluation data and evidence is key.
  2. Tolerance for ambiguity. Few of us enjoy uncertainty, but we all need to deal with it when working on a DE.
  3. Self-awareness. We often get in our own way. Understanding how we think and the biases and perspectives we hold is important to knowing when they serve us and when they are a barrier to innovation. Mindfulness is a part of this quality.
  4. Patience. This is a skill and mindset quality. Knowing that things will unfold at a time and pace that might change is useful — and requires patience in just knowing that.
  5. Evaluative thinking. This is a form of thinking that gets us connecting our activities, outcomes, and effects and asks three key questions about how we know something (and what to do about it).

Skillset

Developmental evaluation is not for the unskilled. While many of the qualities inherent in a DE come naturally to people, they are not commonly practiced in organizations. DE is an approach, and while a skilled evaluator might have many of the methodological skills needed to do much of what is in a DE, without the following skills — in the evaluator, the evaluation team, the organization, or all of them — you are unlikely to succeed.

  • Facilitation. A DE is a collaborative endeavour. You need facilitation skills to engage the diversity of perspectives in the system and coordinate the discussion, feedback, and opportunities for everyone to engage with these perspectives.
  • Sensemaking. Sensemaking is a participatory process that takes data that may be difficult to understand, ambiguous in its conclusions, or incomplete, and helps people make sense of what it means for the organization within the present context and strategic needs. Sensemaking for DE helps guide decision-making and strategy.
  • Complexity and Systems Thinking. A distinguishing feature of DE is the use of applied thinking from systems science and complexity to guide the entire process. This means creating designs and processes that are sensitive to detecting emergent properties, network effects, organizational behaviour (like self-organization), and non-linear or algorithmic change dynamics.
  • Dynamism. Complexity also requires that an evaluation be designed with the dynamics of the program, actors, and system in mind. This means adapting or developing from a moving position, not a static one. It also involves designing evaluations that take into account the growth pattern and evolution of a program in a way that suits the system. It’s about designing for changing conditions.
  • Visualization. System complexity is made more so by the inability to effectively ‘see’ the systems in our heads – there’s too much information. The ability to create system maps, visualize the dynamics and relationships within it, and facilitate discussion of those systems is critical to understanding where a program fits within it all. Visualization can be sophisticated or relatively simple in nature.
  • Design. What makes DE distinct is that it is about making modifications to the program as it unfolds, using evaluation data to guide the strategy. These modifications require design, and the skills — as with evaluation — involve designing a program while it is moving. These developmental design skills are often overlooked and bring together service design and innovation with elements of systemic design.

Toolset

The toolset is the least specific of the three ‘sets’ for DE. Almost any method can support a DE, although unlike other approaches, rarely will there be a case where a single method — interview, survey, observation — will work on its own. A multi-method or mixed-method approach to data collection is almost always necessary.

Some additional tools or methods that are less common are ones we’ve advocated for and either developed ourselves or borrowed from other contexts. Three of the most important of these are:

  • The Living History Method. This meta-method brings together artifacts that inform the program’s early development and evolution. It recognizes that the present program is shaped by where it came from and that such a history can influence where things are going. Unless these conscious or unconscious patterns are recognized, it is unlikely they will change. This method also helps record decisions and activities that often get missed when creating an innovation.
  • Dashboards. The wealth of potential information generated from a DE and available at our fingertips can overwhelm even the most skilled evaluator. Creating dashboards using simple tools and technologies can help organize the various streams of data. It can also allow someone to gain a sense of where simple patterns may lie within a complex layer of data.
  • Spidergrams. The spider diagram is a simple visualization tool that can work well in a DE, as it helps pull various data points together in a way that allows comparison between them AND the ability to dive deep within each data stream (a minimal sketch follows this list).
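As an illustration of that last point, here is a minimal spidergram sketch in Python using matplotlib. The data streams and scores are invented for the example; in a real DE they would come from your monitoring data.

    # Minimal spidergram sketch; stream names and ratings are invented examples.
    import numpy as np
    import matplotlib.pyplot as plt

    streams = ["Reach", "Engagement", "Partner feedback", "Staff capacity", "Adaptation"]
    check_in_1 = [3, 2, 4, 3, 2]   # hypothetical 0-5 ratings at an early check-in
    check_in_2 = [4, 3, 3, 4, 4]   # hypothetical ratings at a later check-in

    angles = np.linspace(0, 2 * np.pi, len(streams), endpoint=False).tolist()
    angles += angles[:1]           # repeat the first angle to close the polygon

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    for label, ratings in [("Check-in 1", check_in_1), ("Check-in 2", check_in_2)]:
        values = ratings + ratings[:1]
        ax.plot(angles, values, label=label)
        ax.fill(angles, values, alpha=0.1)

    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(streams)
    ax.set_ylim(0, 5)
    ax.legend(loc="lower right")
    plt.show()

Comparing the two polygons makes it easy to see which data streams have shifted between check-ins and where to dive deeper.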

Some additional perspectives on doing DE can be found in the guide (PDF) by Elizabeth Dozois and colleagues, published by the McConnell Foundation.

Next time you are considering a DE, look at your present inventory of mindsets, skillsets, and toolsets to see if you have the right resources in place to do the work you need to do.

Need help? Contact us. We can help you set up a Developmental Evaluation, do one, or both. We’d love to hear from you.

Photo by Lucia Otero on Unsplash

Filed Under: Research + Evaluation, Toolkit Tagged With: complexity, dashboard, developmental design, developmental evaluation, evaluative thinking, living history method, mindfulness, spidergram, systemic design

Creating Design Pathways for Learning

2019-08-29 by cense

Capturing learning requires a focus on the journey, not just the end. Thinking like a designer can shape what we learn and how.

Learning is both a journey and a destination, and it’s through recognizing this that we can better facilitate intentional, deliberative learning to support innovation and development. By approaching this journey through the lens of service design — as a design-driven evaluation — we can better shape the data and insights that come from it to support learning.

What is learning?

Learning comes from perception, experience, feedback, and reflection. You first encounter something and perceive it with your senses (e.g., read, observe, hear, feel), then experience something (e.g., movement, action, tension, emotion), which gives you feedback about the perception and experience that is synthesized through reflection (e.g., memory, comparison with related things, contemplation).

Evaluation is principally a tool for learning because it focuses our perception on things, monitors the experience, provides the feedback, and can support reflection by offering a systematic, structured means to make sense of what’s happened.

Evaluation is simply the means of answering the question “what happened?” in a systematic manner.

For those developing an innovation, looking to change, or seeking to improve the sustainability of our systems, answering ‘what happened?’ is the difference between real impact and nothing.

Mapping the journey with data

A journey map is a tool that is used in service design to help understand how service users (e.g., clients, customers, patients, students) might encounter the service and system to achieve a particular goal. These can be displayed visually with great artistry (see here for a beautiful example of the Indigenous cancer patient journey in BC) or simply with boxes and arrows.

It is one of many types of maps that can be created to illustrate the ways in which a user might navigate or approach a service, decision, or pathway to learning.

For innovators and evaluators, these tools present an opportunity to create touchpoints for data collection and deeper understanding of the service throughout. Too often, evaluation is focused on the endpoint or an overall assessment of the process without considering ways to embed opportunities to learn and support learning throughout a journey.

We feel this is a lost opportunity.

Without the opportunity to perceive, gain feedback, and reflect on what happens, we are left with experience only, which isn’t a great teacher on its own and is filled with many biases that can shift focus away from some of the causes and consequences associated with what’s happening. This is not to say that there isn’t bias in evaluation; what makes evaluation different is that it is systematic and accounts for these biases in its design.

Service design meets evaluation

Design-driven evaluation is about integrating evaluation into the design of a program to create the means for developing systematic, structured feedback to support learning along a service journey. One of the simplest ways to do this is to build a layer of evaluation on the service journey map.

Consider a detailed service journey map like the patient journey map cited above. Along this winding, lengthy journey from pre-diagnosis to the end, there are many points where we can learn from the patient, providers, system administrators, and others associated with the health-seeking person, and that learning can inform our understanding of the program or system they are in.

By embedding structured (not rigid) data collection into the system, we can better learn what’s happening — in both process and effects. Taking this approach offers us the following (a minimal sketch of such an evaluation layer follows this list):

  • Identifies activities and behaviours that take place throughout the journey.
  • Provides a lens on service through the perspective of a user. The same service could be modelled using a different perspective (e.g., caregiver, healthcare professional, health administrator).
  • Identifies the systems, processes, people, and relationships that a person goes through on the way through, by, or in spite of a service.
  • Allows us to identify how data can fit into a larger narrative of a program or service and be used to support the delivery of that service.
  • Anchors potential data collection points to service transitions and activities to help identify areas of improvement, development, or unnecessary features.
  • Provides a visual means of mapping the structural, behavioural and social processes that underpin the program to test out the theory of change or logic model (does it hold up?).
  • Offers opportunities to explore alternative futures without changing the program (what happens if we did X instead of Y — how would that change the pathway?).
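As noted above, here is a minimal sketch of what an evaluation layer on a journey map might look like in code (Python). The stages, questions, and methods are hypothetical placeholders rather than a prescribed set.

    # Hypothetical journey map with an evaluation layer attached to each stage.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Touchpoint:
        stage: str                   # step in the service user's journey
        questions: List[str]         # what we want to learn at this step
        methods: List[str] = field(default_factory=list)  # how we might collect data

    journey = [
        Touchpoint("Referral received",
                   ["How did the person find the service?"],
                   ["intake form"]),
        Touchpoint("First appointment",
                   ["What expectations do they arrive with?"],
                   ["short interview", "observation notes"]),
        Touchpoint("Follow-up",
                   ["What changed for them?", "What got in the way?"],
                   ["survey", "administrative data"]),
    ]

    for tp in journey:
        print(f"{tp.stage}: {', '.join(tp.methods)} -> {tp.questions}")

Because each touchpoint carries its own questions and methods, the map doubles as a lightweight data-collection plan that can be revised as the service develops.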

These are some of the ways in which taking a design-driven approach and using common methods from service design can improve or enhance our understanding of a program. Not a bad list, right? That’s just a start.

Try this out. Service design tools and thinking models, coupled with evaluation, can provide access to the enormous wealth of learning opportunities that exist within your programs. They help you uncover the real impact of your programs and the innovation value hidden in plain sight.

To learn more about this approach to evaluation, innovation, and service design contact us. We’d love to help you improve what you do and get more value from all your hard work.

Photo by Lili Popper, Billy Pasco, and Startaê Team on Unsplash. Thank you to these artists for making their work available for use.

Filed Under: Design, Research + Evaluation, Social Innovation Tagged With: design, design-driven evaluation, developmental design, evaluation, innovation, innovation design, journey map, learning, organizational learning, service design

Developmental Evaluation: A Short Introduction

2018-04-30 by cense

Developmental Evaluation (DE) was first proposed by Michael Quinn Patton with the support of colleagues who have wrestled with the problem of dealing with complexity in human systems and the need to provide structured, useful, actionable information to make decisions to support innovation.

DE has been described as akin to taking a classic ‘road trip’, where you have a destination and a planned route, but also a spirit of adventure and a willingness to deviate when needed. DE is an approach to evaluation, not a specific method or tool, designed to support decision-making for innovation. Innovation, in this case, is about the activities and decisions that allow an organization and its members to create value by design. The design may not turn out as expected or may produce surprises, but it is part of an intentional act to create value through new thinking and action.

Ten things about what DE is and is not

Developmental evaluation (“DE”, as it’s often called), when used to support innovation, is about weaving design with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making process that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

There are a lot of misconceptions about what a DE is and what it is not, and we thought it might be worth addressing ten of these to help provide a brief introduction to DE.

  1. DE is an approach to evaluation, not a method. Most standard methods and tools for evaluation can be used as part of a DE. Qualitative, quantitative, administrative, and ‘big’ data can all contribute to an understanding of a program when used appropriately. It is not something that you simply apply to a situation; rather, it is an engaged process of refining how you think about the data you have, what data you collect, and how you make sense of it all and apply lessons from it in practice.
  2. DE is about evaluation for strategic decision-making. If the evaluation is not useful in making decisions about a program or service, then it is not a DE. What is considered useful in decision-making is context-dependent, meaning that a DE must be tailored toward the specific situational needs of a program or a service.
  3. DE is not about product or service improvement; it’s about product and service development. It involves a shift in mindset from growth and ‘best practices’ to one of mindful, adaptive strategy and developmental design.
  4. DE is not separate from strategy, but a critical part of it. There must be close ties between those developing and implementing strategy and the evaluation team or evaluator. A bi-directional flow of information is required through regular, ongoing communications so that strategy informs the DE and the DE informs the strategy simultaneously.
  5. DE does not make things easier, but it can make things better. DE helps programs innovate, learn, and adapt more fully, but that isn’t always easy. A strong DE involves deep engagement with data, a commitment to learning, and a willingness to embrace (or at least accept) volatility, uncertainty, complexity, and ambiguity (VUCA). This requires changing the way organizations work and interact with their programs, which takes time, energy, and sustained attention. However, the promise is that, with systematic attention and a methodology designed for VUCA, program leaders can place greater confidence in what DE generates than in standard approaches that assume a more linear, stable set of conditions.
  6. DE can help document the innovation process. Through creating tools, processes, and decision-making structures to support innovation, DE also helps document the decisions and outcomes of those decisions. When people ask: “how did you get here?” DE provides some answers.
  7. DE does not eliminate the risks associated with VUCA. The adaptive strategy that DE is a part of can be gamed or become a cop-out for those who do not want to make hard decisions. Strategy is not planning; it’s about “an integrated set of choices that determine where the firm should play and how it should win there” (Martin, 2014), and DE provides a means of building the data set and decision tools to support strategy.
  8. DE is not a panacea. Even with the mindset, appropriate decision-making structures, and a good design, DE is not going to solve the problems of innovation. It will give more systematic means to understand the process, outcomes, outputs, and impacts associated with an innovation, but it still means trials, errors, starts and stops, and the usual explorations that innovators need to experience. DE also requires sensemaking — a structured process of ‘making sense’ of the data that emerges from complex conditions. In these conditions, you can’t expect the data will yield obvious interpretations or conclusions, which is why a sensemaking process is necessary.
  9. Not everyone can do DE well. The popularity of DE in recent years has led to a surge in those claiming to be developmental evaluators. If you are looking for a DE specialist, consider their experience working with VUCA and complexity in particular. You will also want to find someone who understands strategy, group process, design principles, organizational behaviour, and organizational sensemaking, and who is an experienced evaluator. This last point means adhering to evaluation standards and potentially recruiting someone who holds the Credentialed Evaluator (CE) designation to undertake the evaluation. There are also many ways to be adaptive and utilization-focused in your evaluation that aren’t considered to be a DE.
  10. DE does not have to be complicated. While DE requires greater involvement from more parts of an organization in its planning and execution, it doesn’t have to be an elaborate, consuming, or complicated endeavour. DE can be done simply at a small scale, just as it can be a highly involved, participatory process at a large scale. DE will scale to nearly any program situation. What makes a DE are the things stated above — mindset, organizational support, and sensemaking.

Developmental Evaluation is a powerful way to help innovators learn, demonstrate and showcase the efforts that go into making change happen, and to increase the capacity of your organization to evolve its mindsets, skillsets, and toolsets for innovation.

Are you interested in using DE and learning more about how it can support innovation — big or small — in your services, products, or systems? Contact us and we can show you what can be done to bring DE and its potential to your organization.


Filed Under: Research + Evaluation, Toolkit Tagged With: complexity, design, developmental design, developmental evaluation, evaluation, living systems, strategy, systems thinking

Complexity Science for Evaluators

2014-02-25 by cense

Strategy and Complexity

This article draws on a piece about complexity science for evaluators that was originally shared as an invited post on the American Evaluation Association’s AEA365 blog. AEA365 is a daily blog that shares tips, tricks, and resources for professional evaluators worldwide.

Our work at Cense brings complexity science and design together with developmental evaluation into something loosely called developmental design, which is about making decisions in the face of changing conditions.

Lesson Learned: At the heart of developmental evaluation are the concepts of complexity and innovation. Complexity is a word that we hear a lot, but we might not fully know what it means or how to think about it in the context of evaluation.

For social programs, complexity exists:

… where there are multiple, overlapping sources of input and outputs

… that interact with systems in dynamic ways

… at multiple time scales and organizational levels

… in ways that are highly context-dependent

Rad Resources: Complexity is at the root of developmental evaluation. So for those who are new to the idea or new to developmental evaluation, here are 7 resources that might help you get your head around this complex (pun intended) concept:

  1. Getting to Maybe is a book co-written by our good friend Michael Quinn Patton and offers a great starting place for those working with communities and human services;
  2. Patton’s book Developmental Evaluation (ch 5 in particular) is, of course, excellent;
  3. The Plexus Institute is a non-profit organization that supports ongoing learning about complexity applications for a variety of settings;
  4. Tamarack Institute for Community Engagement has an excellent introduction page, including an interview with Getting to Maybe co-author Brenda Zimmerman;
  5. Ray Pawson’s new book The Science of Evaluation is a more advanced, but still accessible look at ways to think about complexity, programs and evaluation;
  6. Cameron Norman’s blog Censemaking has a library section with sources on systems thinking and complexity that include these and many more.
  7. The best short introduction to the concept is a video by Dave Snowden on How to Organize A Children’s Party, a cheeky way to illustrate complexity that we use in our training and teaching with clients.

Complexity is part theory, part science, and all about a way of seeing and thinking about problems. It doesn’t need to scare you, and these resources can really help get you into the right mind-frame to tackle challenging problems and use evaluation effectively as a means of addressing them. It might be complex, but it’s fun.

See more at: http://aea365.org/blog/cameron-norman-on-complexity-science-for-evaluators/

Filed Under: Complexity, Research + Evaluation Tagged With: complexity, developmental design, developmental evaluation, evaluation, resources
