Cense

Research. Design. Impact.


Mindfulness: Seeing the Constellations of Innovation

2018-12-03 by cense

Revealing what’s behind the light

In a world filled with an increasing number of signals and lots of noise, it can be difficult to achieve focus and determine what to pay attention to. In this third piece in our series, Evaluation: The Innovator’s Secret Advantage, we look at one of the bedrocks of sustainable innovation, something shockingly simple, enormously powerful, but not easy: mindfulness.

Before embarking on an introduction to mindfulness for innovation, let’s dispel some myths about what it is not. It is not religious, spiritual, or some new-age trend, nor is it meditation. While it can be, and often is, affiliated with all of those things, mindfulness is simply the practice of paying attention to what is around you, and to yourself, in the process. It is about conscious awareness of the present moment and non-judgemental attention toward the thoughts, feelings, and experiences that arise in that moment.

An Exercise in Mindfulness

To many, the idea of mindfulness seems like an odd start to the conversation about innovation and evaluation, but the closer you look at what mindfulness is all about, the more it becomes clear how important it is to what innovators and evaluators both do. 

At a recent talk on innovation and evaluation as part of Service Convention Sweden, Cameron Norman took the audience through a short exercise in mindfulness that looked like this: 

  • Close your eyes (or lower your eyelids a little). [This reduces the amount of distraction from visual stimuli]
  • Put your feet flat on the ground and keep your back straight [This also reduces the amount of ‘signal’ coming from the body by getting into a more optimal position for sitting]
  • Breathe easily and stay relaxed, and simply pay attention to what’s going on around you and what you are thinking. [Breathing easily avoids the problematic stimuli created by holding one’s breath]

There are variants of this exercise that include focusing on the breath (a common technique in mindfulness-based stress reduction) and an emphasis on quieting the mind, yet the outcomes are similar: an increased awareness of things that we were — up until that moment — unaware of. That is where mindfulness comes into evaluation.

Evaluation serves innovation best when it goes beyond the simple assessment of outputs, outcomes, and process documentation. Evaluation can be a mechanism for focusing attention on the work of innovation and its context. In the example above, the audience was asked: what did you notice? The answers ranged from hearing the HVAC system to noticing their breath to realizing how much noise takes place during a talk.

In previous workshops, participants have reported bodily sensations (e.g., getting hungry), temperature changes, physical discomfort (e.g., a back getting stiff from sitting), a wandering mind, and often forms of judgement about not ‘doing it well’ (for which there is no ‘well’, only ‘practice’). Just like evaluation for innovation, there is only ‘useful’ and ‘not useful’, not ‘good’ and ‘bad’.

Linking Mindfulness to Evaluation

The idea of applying mindfulness to organizations is not new and has been well researched. Introducing and practicing organizational mindfulness has been shown to be strongly correlated with high reliability in organizations, meaning that they produce desirable results consistently. Innovation is difficult to do, and doing it repeatedly and consistently is even more so. What organizations that have fostered mindfulness in the way they work have done is create a mechanism for paying attention that is systematized and implemented consistently.

Mindfulness is not a one-off exercise for those who adopt it into their work, but closer to a way of being. What it does is provide the means for organizations not only to focus on what’s in front of them, but to pay attention to the subtle signals that take place around that focus. (See two articles on our sister site Censemaking that link organizational mindfulness to social innovation and to developmental evaluation.)

Mindfulness also cultivates curiosity. Curiosity is what draws an innovator and evaluator to consider what additional things might be happening (which we will explore later in this series) beyond just the intended outcomes. It is what leads us to innovation in the first place. By regularly creating space for mindful reflection into the innovation space, we nurture and cultivate curiosity. 

Doing the Work

What does mindfulness practice look like for innovation? As mentioned earlier: we are dealing with something simple, not easy. A place to start is to follow some of these practices: 

  • Ask evaluative questions, which include one that focuses on paying attention. We’ve discussed three of the most useful questions for promoting evaluative thinking in a previous post. 
  • Create regular reflection space within the organization at each level (program, division, organization). This includes setting aside deliberate time, regularly, to reflect on what the data is telling you about what is happening, drawing on some of the questions above. It means instituting quiet, uninterrupted time for members of an organization to think and reflect, unplugged from networks. It may involve tools (e.g., whiteboards, notebooks, cards) or movement (e.g., going for a walk or run), but it must be focused; switching back and forth from email or other demands won’t work.
  • Bring reflections together. Individual reflection is important, as is the chance to discuss those insights and experiences with others. Socialize the process of reflection by establishing a sharing culture. This builds on what Donald Schon proposed as part of his work on reflective practice.
  • Collect the data that supports reflection. This means focusing not only on the core of the innovation (e.g., the product or service) but on the area around it. This is similar to attractor mapping, which enables innovators to determine where the action is even when they can’t know for certain. It means attending to the system that an innovation is a part of.
  • Reserve judgment and avoid labels. The rush to judge something is what kills curiosity and mindfulness. Ever notice that once something is labeled as ‘good’ or ‘bad’ we cease to ask the kind of questions of it that get deeper into its core? Mindfulness is about being open and then assessing utility. 
  • Use learning as an outcome. What mindfulness does is encourage identification of, and insight into, patterns. What comes from that identification and discussion is learning, which is a genuine and important outcome for innovators. Document what is seen, heard, discussed, and concluded (a minimal sketch of such documentation follows this list).
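To make that last practice concrete, here is a minimal sketch, in Python, of what documenting reflection sessions might look like as structured data. It is illustrative only: the ReflectionEntry record and its field names are our assumptions, not a prescribed format, but they show how what is seen, heard, discussed, and concluded can be captured in a consistent, reviewable form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ReflectionEntry:
    """A hypothetical record of one reflection session at some level of the organization."""
    when: date
    level: str          # "program", "division", or "organization"
    observations: list  # what was seen and heard
    discussion: list    # what was discussed, with judgement reserved
    learnings: list     # what was concluded: learning as the outcome

entry = ReflectionEntry(
    when=date(2018, 12, 3),
    level="program",
    observations=["Intake calls doubled in the week after the newsletter went out"],
    discussion=["Is the newsletter reaching a new audience, or the same people twice?"],
    learnings=["We don't know where new clients hear about us; add that question to intake"],
)
```

Even a log this simple, kept consistently, gives an organization something to look back on when patterns start to emerge.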

The biggest barrier that we see in our work is time. It isn’t that this takes a long time, although it does require some investment of it; rather, it is that organizations are reluctant to prioritize this work and make it a regular part of their practice. Doing it occasionally has some benefit, but making it part of the organizational culture is what will transform everyday work into something that has potential beyond its original purpose.

By instituting mindfulness into the work of your organization, you are more likely to see the constellations and quiet of night and not just the blue sky of the daytime. 

Photo by Teddy Kelley on Unsplash

Filed Under: Research + Evaluation Tagged With: complexity, evaluation, innovation, mindfulness, organizational mindfulness

Seeing Innovation Differently: The Role of Evaluation

2018-11-26 by cense

See what you do through new eyes

We begin a series called The Innovator’s Secret Advantage with a simple but powerful premise: perspective is everything.

This is based on a phrase that we use as our guide at Cense: 

“The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.”

Marcel Proust

Evaluation is most commonly seen as a means to assess the merit, worth, and significance of a program, service, or product. Evaluation can focus on implementation and learning, how a program works, outcomes and outputs, or some combination of these things. This is how most see evaluation.

However, for innovators — those looking to develop and apply something new into a situation — evaluation offers so much more when approached as a tool for innovation. This is less about the method and much more about how we treat the methods. This is where we need our new eyes. 

Evaluation = Feedback

What evaluation offers to innovators is feedback, which plays a central role in complex systems. These are the kinds of systems in which most human services exist, to varying degrees. Complexity creates conditions for innovators where it’s difficult to know what is happening with a high degree of certainty. Feedback allows innovators to ‘take the temperature’ of what they are ‘cooking up’ to see if they need to make changes. These changes might look like doing more of something, adding things, or perhaps removing something. It’s very much like making a soup.

Evaluation is the means of obtaining feedback and doing it systematically. It allows innovators to avoid bias and misperceptions in the data and to overcome the blind spots that get created. Innovators naturally focus on the product, service, or program itself, but with a slight shift in perception toward feedback systems, they can gain much more in the way they approach their work.

Evaluation is the means of providing fuel for innovation, particularly as an idea moves from concept to prototype. Data, systematically collected and focused, along with the analysis and sensemaking that comes with it, plays a vital role in determining how ready an innovation is for deployment or how appropriate it is for scaling upward or outward. 

Seeing With New Eyes

Seeing with new eyes begins by asking some questions. The first three questions foster evaluative thinking, not just new ways of seeing: 

  • What is going on?
  • What is new?
  • What does it mean?

The next question is: What are you hiring innovation to do for you? The answers to these questions are what can inspire that new way to see the work being done. 

The final set of questions to give you those new eyes come from mindful reflection. Organizational mindfulness is not some new age meditation technique, but a scientifically-supported approach to paying attention across the organization to learn what is being done and how it connects with purpose and values. 

Engaging these questions will set the stage for using evaluation as a means to help innovations fulfill their true promise. In the weeks ahead, we’ll look at how this is done and what your new eyes can be trained to look at. 

Filed Under: Research + Evaluation Tagged With: complexity, evaluation, evaluation method, organizational mindfulness, perception

Living History for Developmental Evaluation

2018-08-28 by cense

Developmental Evaluation (DE) is an approach to supporting the evolution and development of an innovation. This means not only helping guide where an innovation is going but understanding where it is now and where it has been. Just like human development, an innovation is as much a product of its past as it is of present decisions, and that past can help inform the strategy for moving forward. But how do we properly account for the past and present context in understanding what comes next? This is where the Living History method comes in.

The Living History method comprises a set of data collection, sensemaking, and design strategies that come together to reflect on how a project comes into being, develops a focus, and evolves up to the present context. It helps provide the ‘backstory’ to the present situation and provides a means of developing a baseline – another key part of a DE.

Why does knowing this matter? Taking a Living History of a program helps surface the path dependencies of a program or innovation. Path dependencies can include those habits of mind that form, patterns of behaviour, and general routines that can develop within a complex environment and consciously or unconsciously shape the context in which decisions are made. By understanding what these patterns are, we can better prepare to address them as the program unfolds.

It’s also helpful to surface the biases and mental models that guide our decisions, to better account for them both when things work well and when they don’t.

To borrow from the image above: the Living History method helps you use the data collected from preparing the climb, starting up to base camp, and setting the groundwork (the trials, failures, and victories) to initiate the climb as well as the data from the actual ascent to the summit.

Introducing the method

The Living History method is something we developed to ‘backfill’ much of the information that is necessary to understand the present context of a program. Many evaluators do this kind of work with informational interviews, document reviews, and site visits; however, the manner in which this data is collated and made sense of isn’t always systematic, nor is it done in a way that recognizes complexity (which is the space in which DE is meant to be employed).

A Living History involves assembling the historical narrative that has been constructed by program developers (founders, staff, stakeholders, etc.) and the activities that contribute to this narrative. It draws these ‘stories’ from the data together into an understanding of the present. This present space is where the existing narrative and the current construction of the program (‘what is happening now?’) come together. It’s also the space where strategy is developed and initiated to shape what is to come (‘where are we going and how are we going to get there?’).

Below is an image that demonstrates this process.

Taking a Living History

A Living History can encompass many different data gathering techniques. Four of the most useful are:

  1. Document review. A thorough review of foundational documents that might include things like the initial application(s) for funding, ‘pitch decks’ used to generate funds or interest, program and policy manuals, staff reviews, board meeting minutes, evaluations, and annual reports (to boards, funders, stakeholders, etc.). Some programs may have an early program logic model or Theory of Change that can be examined. These can give insight into the planning structure, theories, and reasons for establishing the program.
  2. Interviews. Connecting with the program founders, board members (past and present), investors (and funders), as well as leaders involved in the initiative is critical. Having conversations with these people will help surface both explicit and tacit knowledge held by those that helped shape the program. These interviews can be critical in understanding the logic, political context, and social dynamics that underpin the direction of the organization. It is useful to determine not only the direction taken, but the directions considered. The reason is that these ‘unexplored paths’ may linger in the minds of people and serve as a source of tension or inspiration in moving forward.
  3. Site visits. It is helpful to remember that programs — even virtual ones such as a website or app — originate somewhere in the physical world. Site visits, when possible, help gather information about organizational culture, physical resources, and relational barriers and facilitators. Take the example of an organization that starts as a maker space in a garage and then evolves into a large corporate high-rise office. Is the culture established at start-up maintained at the new scale or abandoned? We’ve seen organizations that began as a start-up, grew enormously, and then abandoned the culture created at the start for a more conservative corporate one, all while holding on to the belief that things had not changed. Only by observing how things physically changed (e.g., open vs. closed spaces, the size and shape of offices) did this become visible. Further, such visits can help identify developmental milestones such as large investments of capital.
  4. Timeline. A timeline is a simple method of visually connecting activities — actions + decisions — together. It helps determine and organize data about what was done, but also when it was done, which can make an enormous difference in understanding things like time lags and in exploring cause-and-consequence connections between actions, reactions, and other effects. (A minimal sketch of a timeline as structured data follows this list.)
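To illustrate that last item, the sketch below shows one way a timeline might be kept as simple structured data before it ever becomes a visual. The TimelineEvent record and the sample entries are hypothetical, a minimal sketch rather than a prescribed format; the point is that dated actions and decisions, once sorted, make time lags and sequences easy to inspect.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TimelineEvent:
    """A hypothetical record of one action or decision in a program's living history."""
    when: date
    label: str        # short description of the action or decision
    kind: str         # "action" or "decision"
    source: str = ""  # where the data came from (document review, interview, site visit)

def build_timeline(events):
    """Return events in chronological order so time lags between them become visible."""
    return sorted(events, key=lambda e: e.when)

# Invented entries standing in for data from a document review, an interview, and a site visit
timeline = build_timeline([
    TimelineEvent(date(2015, 3, 1), "Initial funding application submitted", "action", "document review"),
    TimelineEvent(date(2016, 9, 15), "Board decides to refocus on youth services", "decision", "interview"),
    TimelineEvent(date(2017, 1, 10), "Move from shared garage to downtown office", "action", "site visit"),
])
for e in timeline:
    print(f"{e.when}  [{e.kind}]  {e.label}  (source: {e.source})")
```

Kept this way, the timeline can be re-sorted, filtered by source, or rendered visually whenever the sensemaking process calls for it.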

The Role of Sensemaking

Living History is not complete without sensemaking. Sensemaking is required because, like most interesting stories, innovating through complexity is messy and full of contradictions, gaps in logic, and remarkable things that can only be understood by sitting together with all the data in one place. These emergent properties — coherence that forms from diverse sources of information as part of the process of (self) organization — are what make a program a ‘whole’ and not just the ‘sum of the parts’.

Sensemaking is collaborative, discursive, and involves debate, reflection, and a process of drawing conclusions and forming hypotheses based on data and experience. It is the process of taking the strangeness of seeing things in chunks and putting them together. What comes from this process, which is a key part of Developmental Evaluation and design, is a ‘sense’ of what happened, what is going on, and what options might work next.

It helps you understand where you and your program stand now.

Moving forward (and backward)

The Living History method doesn’t end with sensemaking; it is also carried forward.

Developing a project dashboard for innovation evaluation is a helpful means of pulling together the data collected from the past and the present, and then populating it as the program evolves. The Living History method views history as being co-constructed as the program evolves, and it thus thrives on having new data about the present (which is always a second away from being the past) added to the overall corpus of data.

New data, when combined with what has already been collected, can provide new insights into the bigger picture. As data is added and time passes, it is possible that new patterns will emerge, which can be insightful for making decisions. This data, collected now and going forward, will later be recombined as part of an ongoing sensemaking process.
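As a rough illustration of this growing corpus, the sketch below assumes a hypothetical LivingHistoryCorpus container: each new observation is appended to the historical record, and simple summary views are recomputed over the whole so that past and present are always read together. The record fields and theme labels are invented for the example.

```python
from collections import Counter

class LivingHistoryCorpus:
    """Accumulates dated observations so new data is always read against the old."""

    def __init__(self):
        self.records = []  # each record: {"date": "YYYY-MM-DD", "theme": ..., "note": ...}

    def add(self, record):
        """Append a new observation and keep the corpus in date order."""
        self.records.append(record)
        self.records.sort(key=lambda r: r["date"])

    def theme_counts(self):
        """One simple whole-corpus view; real sensemaking would look for richer patterns."""
        return Counter(r["theme"] for r in self.records)

corpus = LivingHistoryCorpus()
corpus.add({"date": "2016-05-01", "theme": "funding", "note": "First grant awarded"})
corpus.add({"date": "2018-02-12", "theme": "funding", "note": "Second funder joins"})
print(corpus.theme_counts())  # Counter({'funding': 2})
```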

A Living History can also provide a means to document the activities that have taken place up to the present moment by systematically, although retrospectively, capturing (mostly existing) data to show what has happened before. This can be useful for organizations and programs that are starting their evaluations later than they would have liked.

Consider the Living History approach with your evaluation. By looking at the past you might find yourself better prepared to go forward. We can help.

Photo by Aaron Benson on Unsplash

Filed Under: Complexity, Research + Evaluation Tagged With: complexity, developmental evaluation, evaluation, evaluation method, living history method, path dependence

Three Questions for Evaluative Thinking

2018-08-10 by cense

Evaluative thinking is at the heart of evaluation, yet it’s remarkably challenging to do in practice. To help strengthen those evaluative neural pathways, we offer some questions to aid you in developing your evaluative thinking skills.

To begin, let’s first look at this odd concept of ‘evaluative thinking’.

Tom Grayson’s recent post on the AEA 365 Blog looked at this topic more closely and provided a useful summary of some of the definitions of the term commonly in use. In its simplest terms: evaluative thinking is what we do when we think about things from an evaluation perspective, which is to say, a point of view that considers the merit, worth, and significance of something.

Like many simple things, there is much complexity on the other side of this topic. While we have many methods and tools that can aid us in the process of doing an evaluation, engaging in the evaluative thinking supporting it is actually far more challenging. To help foster evaluative thinking we suggest asking three simple questions:

What is going on?

This question is about paying attention and doing so with an understanding of perspective. Asking this question gets you to focus on the many things that might be happening within a program and the context around it. It gets you to pay attention to the activities, actors, and relationships that exist between them by simple observation and listening. By asking this question you also can start to empathize with those engaged in the program.

Ask: 

What is going on for [ ] person?

What is going on in [ ] situation?

What is going on when I step back and look at it all together? 

Inquiring about what is going on enlists one of the evaluator’s most powerful assets: curiosity.

By starting to pay attention and question what is going on around you in the smallest and most mundane activities through to those common threads across a program, you will start to see things you never noticed before and took for granted. This opens up possibilities to see connections, relationships, and potential opportunities that were previously hidden.

What’s new?

Asking about what is new is a way to build on the answers from the first question. By looking at what is new, we start to see what might be elements of movement and change. It allows us to identify where things are shifting and where the ‘action’ might be within a program. Most of what we seek in social programs is change — improvements in something, reductions in something else — and sometimes these changes aren’t obvious. Sometimes they are so small that we can’t perceive them unless we pause and look and listen.

There are many evaluation methods that can detect change, however, asking the question about what’s new can help you to direct an evaluation toward the methods that are best suited to capturing this change clearly. Asking this question also amplifies your attentive capacity, which is enormously important for evaluation in detecting large and small changes (because often small changes can have big effects in complex systems like those in human services).

What does it mean?

This last question is about sensemaking. It’s about understanding the bigger significance of something in relation to your enterprise. There can be a lot happening and a lot changing within a program, yet it might not mean much to the overall enterprise. Conversely, there can be little to nothing happening, which can be enormously important for an organization by demonstrating the poor effects of an intervention or program or, in the case of prevention-based programs, by showing success.

This question also returns us to empathy and encourages some perspective-taking by getting us to consider what something means for a particular person or audience. A system (like an organization or program) looks different depending on where you sit in relation to it. Managers will have a different perspective from front-line staff, which differs again from that of clients and customers, and again from that of funders or investors. The concept of ‘success’ or ‘failure’ is judged from the perspective of the viewer, and a program may be wildly successful from one perspective (e.g., easy to administer, for a manager) and a failure from another (e.g., a relatively low return on investment, from a funder’s point of view).

This question also affords an opportunity to get a little philosophical about the ‘big picture’. It allows program stakeholders to inquire about what the bigger ‘point’ of a program or service is. Many programs, once useful and effective, can lose their relevance over time due to new entrants to a market or environment, shifting conditions, or changes in the needs of the population served. By not asking this question, there is a risk that a program won’t realize it needs to adapt until it is too late.

By asking these three simple questions you can kick-start your evaluation and innovation work and better strengthen your capacity to think evaluatively.

Photo by Tim Foster on Unsplash

Filed Under: Research + Evaluation, Toolkit Tagged With: attention, change, complex systems, complexity, critical thinking, evaluation, evaluative thinking, program evaluation, sensemaking, systems thinking, tools

Developmental Evaluation: A Short Introduction

2018-04-30 by cense

Developmental Evaluation (DE) was first proposed by Michael Quinn Patton with the support of colleagues who have wrestled with the problem of dealing with complexity in human systems and the need to provide structured, useful, actionable information to make decisions to support innovation.

DE has been described as akin to taking a classic ‘road trip’, where you have a destination and a planned route, but also a spirit of adventure and a willingness to deviate when needed. DE is an approach to evaluation, not a specific method or tool, designed to support decision-making for innovation. Innovation, in this case, is about the activities and decisions that allow an organization and its members to create value by design. The design may not turn out as expected or may produce surprises, but it is part of an intentional act to create value through new thinking and action.

Ten things about what DE is and is not

Developmental evaluation (“DE”, as it’s often called), when used to support innovation, is about weaving design together with data and strategy. It’s about taking a systematic, structured approach to paying attention to what you’re doing, what is being produced (and how), and anchoring it to why you’re doing it by using monitoring and evaluation data. DE helps to identify potentially promising practices or products and guide the strategic decision-making that comes with innovation. When embedded within a design process, DE provides evidence to support the innovation process from ideation through to business model execution and product delivery.

There are a lot of misconceptions about what DE is and what it is not, and we thought it worth addressing ten of them to provide a brief introduction.

  1. DE is an approach to evaluation, not a method. Most standard methods and tools for evaluation can be used as part of a DE. Qualitative, quantitative, administrative, and ‘big’ data can all contribute to an understanding of a program when used appropriately. It is not something that you simply apply to a situation, rather it is an engaged process of refining how you think about the data you have, what data you collect, and how you make sense of it all and apply lessons from it in practice.
  2. DE is about evaluation for strategic decision-making. If the evaluation is not useful in making decisions about a program or service, then it is not a DE. What is considered useful in decision-making is context-dependent, meaning that a DE must be tailored toward the specific situational needs of a program or a service.
  3. DE is not about product or service improvement, it’s about product and service development. It involves a shift in mindset from growth and ‘best practices’ to one of mindful, adaptive strategy and developmental design.
  4. DE is not separate from strategy, but a critical part of it. There must be close ties between those developing and implementing strategy and the evaluation team or evaluator. A bi-directional flow of information is required through regular, ongoing communications so that strategy informs the DE and the DE informs the strategy simultaneously.
  5. DE does not make things easier, but it can make things better. DE helps programs innovate, learn, and adapt more fully, but that isn’t always easy. A strong DE involves deep engagement with data, a commitment to learning, and a willingness to embrace (or at least accept) volatility, uncertainty, complexity, and ambiguity (VUCA). This requires changing the way organizations work and interact with their programs, which requires time, energy, and sustained attention. However, the promise is that with the systematic attention and a methodology that is designed for VUCA, program leaders can put greater confidence in what DE generates than with standard approaches that assume a more linear, stable, set of conditions.
  6. DE can help document the innovation process. Through creating tools, processes, and decision-making structures to support innovation, DE also helps document the decisions and outcomes of those decisions. When people ask: “how did you get here?” DE provides some answers.
  7. DE does not eliminate the risks associated with VUCA. The adaptive strategy that DE is a part of can often be gamed or become a cop-out for those who do not want to make hard decisions. Strategy is not planning; it’s about “an integrated set of choices that determine where the firm should play and how it should win there” (Martin, 2014), and DE provides a means of building the data set and decision tools to support strategy.
  8. DE is not a panacea. Even with the mindset, appropriate decision-making structures, and a good design, DE is not going to solve the problems of innovation. It will give more systematic means to understand the process, outcomes, outputs, and impacts associated with an innovation, but it still means trials, errors, starts and stops, and the usual explorations that innovators need to experience. DE also requires sensemaking — a structured process of ‘making sense’ of the data that emerges from complex conditions. In these conditions, you can’t expect the data will yield obvious interpretations or conclusions, which is why a sensemaking process is necessary.
  9. Not everyone can do DE well. The popularity of DE in recent years has led to a surge in those claiming to be developmental evaluators. If you are looking for a DE specialist, consider their experience working with VUCA and complexity in particular. You will also want to find someone who understands strategy, group process, design principles, organizational behaviour, and organizational sensemaking, and who is an experienced evaluator. This last point means adhering to evaluation standards and potentially recruiting someone who holds the Credentialed Evaluator (CE) designation to undertake the evaluation. There are also many ways to be adaptive and utilization-focused in your evaluation that aren’t considered DE.
  10. DE does not have to be complicated. While DE requires involving more aspects of an organization in its planning and execution, it doesn’t have to be an elaborate, consuming, or complicated endeavour. DE can be done simply on a small scale just as it can be a large, highly involved, participatory process done at a large scale. DE will scale to nearly any program situation. What makes a DE are the things stated above: things like mindset, organizational support, and sensemaking.

Developmental Evaluation is a powerful way to help innovators learn, demonstrate and showcase the efforts that go into making change happen, and to increase the capacity of your organization to evolve its mindsets, skillsets, and toolsets for innovation.

Are you interested in using DE and learning more about how it can support your innovation — big or small — in your services, products, or systems? Contact us and we can show you what can be done to bring DE and its potential to your organization.

Filed Under: Research + Evaluation, Toolkit Tagged With: complexity, design, developmental design, developmental evaluation, evaluation, living systems, strategy, systems thinking
