Cense Ltd.

Inspiration, Innovation and Impact


Developmental Evaluation Preparedness

2019-10-08 by cense

Pilots have a pre-flight check before they fly a plane. A developmental evaluation journey requires the same kind of preparedness.

Developmental Evaluation (DE) is an approach to strategic learning designed for understanding and supporting innovation. This approach to data-driven learning requires a specific set-up to do well. While we need a mindset, skillset, and toolset to do DE, we also need a receptive system to make it all work.

Here are the things you can do to lay the groundwork for doing DE and making it a success. Think of this as your pre-flight checklist for a journey of learning together.

These attributes may be found within individuals, but they serve best when distributed widely across the organization.

Mindset

The most important aspect of a Developmental Evaluation is creating the right mindset. This mindset is focused on complexity, adaptation, and development (versus improvement and linear growth). A list of statements for examining whether you have the right mindset is available on Censemaking and can provide a starting point.

Some other mindset questions include:

  1. Openness to experience. A DE will likely take a journey that can’t be fully predicted, much like a road trip. An openness to a non-linear pathway toward a preferred destination and the willingness to adapt the path and the destination based on evaluation data and evidence is key.
  2. Tolerance for ambiguity. Few of us enjoy uncertainty, but we all need to deal with it when working on a DE.
  3. Self-awareness. We often get in our own way. Understanding how we think and the biases and perspectives we hold is important to knowing when they serve us and when they are a barrier to innovation. Mindfulness is a part of this quality.
  4. Patience. This is both a skill and a mindset quality. Knowing that things will unfold at a time and pace that may shift is useful, and accepting that uncertainty requires patience.
  5. Evaluative thinking. This is a form of thinking that connects our activities, outcomes, and effects and asks three key questions about how we know something (and what to do about it).

Skillset

Developmental evaluation is not for the unskilled. While many of the qualities inherent in a DE come naturally to people, they are not commonly practiced in organizations. DE is an approach, and while a skilled evaluator may have the methodological skills to do much of what a DE involves, without the following skills, whether held by the evaluator, the evaluation team, the organization, or all of them, you are unlikely to succeed.

  • Facilitation. A DE is a collaborative endeavour. You need facilitation skills to engage the diversity of perspectives in the system and coordinate the discussion, feedback, and opportunities for everyone to work with those perspectives.
  • Sensemaking. Sensemaking is a participatory process that takes data that may be difficult to understand, ambiguous in its conclusions, or incomplete, and helps people work out what it means for the organization within its present context and strategic needs. Sensemaking for DE guides decision-making and strategy.
  • Complexity and Systems Thinking. A distinguishing feature of DE is the use of applied thinking from systems science and complexity to guide the entire process. This means creating designs and processes that are sensitive to detecting emergent properties, network effects, organizational behaviour (like self-organization), and non-linear or algorithmic change dynamics.
  • Dynamism. Complexity also requires that an evaluation be designed with the dynamics of the program, actors, and system in mind. This means adapting or developing from a moving position, not a static one. It also involves designing evaluations that account for the growth pattern and evolution of a program within its system. It is a design for changing conditions.
  • Visualization. System complexity is compounded by our inability to effectively ‘see’ whole systems in our heads; there is too much information. The ability to create system maps, visualize the dynamics and relationships within them, and facilitate discussion of those systems is critical to understanding where a program fits within it all. Visualization can be sophisticated or relatively simple; a minimal system-map sketch follows this list.
  • Design. What makes DE distinct is that it makes modifications to the program as it unfolds, using evaluation data to guide the strategy. These modifications require design, and the skill, as with the evaluation itself, lies in designing a program while it is moving. These developmental design skills are often overlooked; they bring together service design and innovation with elements of systemic design.
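To ground the visualization point, here is a minimal sketch of a simple system map in Python. The actors, relationships, and the networkx/matplotlib toolchain are illustrative assumptions, not part of any particular DE:

```python
# A minimal system-map sketch, assuming a hypothetical program's actors
# and relationships; networkx + matplotlib is one common, freely
# available toolchain for this kind of simple visualization.
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical actors and the relationships between them
edges = [
    ("Funder", "Program team"),
    ("Program team", "Front-line staff"),
    ("Front-line staff", "Clients"),
    ("Program team", "Community partners"),
    ("Community partners", "Clients"),
]

G = nx.Graph(edges)
pos = nx.spring_layout(G, seed=42)  # fixed seed for a reproducible layout
nx.draw_networkx(G, pos, node_color="lightsteelblue", node_size=1800, font_size=8)
plt.axis("off")
plt.title("Illustrative system map (hypothetical actors)")
plt.tight_layout()
plt.show()
```

Even a rough map like this can anchor a facilitated conversation about where a program sits among its actors and relationships.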

Toolset

The toolset is the least specific of the three ‘sets’ for DE. Almost any method can support a DE, although unlike other approaches, rarely will there be a case where a single method — interview, survey, observation — will work on its own. A multi-method or mixed-method approach to data collection is almost always necessary.

We also advocate for some less common tools and methods that we have either developed ourselves or borrowed from other contexts. Three of the most important are:

  • The Living History Method. This meta-method brings together artifacts that inform the program’s early development and evolution. It recognizes that the present program is shaped by where it came from and that such a history can influence where things are going. Unless these conscious or unconscious patterns are recognized, it is unlikely they will change. This method also helps record decisions and activities that often get missed when creating an innovation.
  • Dashboards. The wealth of potential information generated from a DE and available at our fingertips can overwhelm even the most skilled evaluator. Creating dashboards using simple tools and technologies can help organize the various streams of data. It can also let someone gain a sense of where simple patterns may be lying within a complex layer of data; a small sketch of this kind of lightweight dashboard follows this list.
  • Spidergrams. The spider diagram (or radar chart) is a simple visualization tool that works well in a DE because it pulls various data points together in a way that allows comparison between them AND the ability to dive deep within each data stream; a second sketch after this list shows one way to draw one.
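As a small illustration of the dashboard idea, the sketch below assumes three hypothetical data streams recorded as monthly counts and uses pandas, one simple and widely available tool, to pull them into a single summary view:

```python
# A minimal dashboard sketch, assuming hypothetical monthly counts from
# three separate data-collection streams; pandas aligns them into one
# table that can stand in for a one-page dashboard view.
import pandas as pd

streams = {
    "interviews": pd.Series([4, 6, 5], index=["Jan", "Feb", "Mar"]),
    "survey_responses": pd.Series([120, 98, 143], index=["Jan", "Feb", "Mar"]),
    "events_held": pd.Series([2, 3, 3], index=["Jan", "Feb", "Mar"]),
}

dashboard = pd.DataFrame(streams)
# A simple derived column: month-over-month change in one stream
dashboard["survey_change"] = dashboard["survey_responses"].diff()
print(dashboard)
```

The point is not the tool but the organizing habit: each stream gets a home, and simple patterns surface at a glance.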
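And here is a minimal spidergram sketch using matplotlib’s polar axes; the dimensions and scores are hypothetical stand-ins for evaluation findings, not data from any real program:

```python
# A minimal spidergram (radar chart) sketch with hypothetical 0-5 ratings
# across five illustrative evaluation dimensions.
import numpy as np
import matplotlib.pyplot as plt

labels = ["Reach", "Engagement", "Fidelity", "Satisfaction", "Adaptation"]
scores = [3.5, 4.2, 2.8, 4.0, 3.1]  # hypothetical ratings

# Spread the axes evenly around the circle, then repeat the first point
# so the plotted polygon closes on itself.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]
scores = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, scores, linewidth=2)
ax.fill(angles, scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.set_title("Illustrative spidergram (hypothetical data)")
plt.show()
```

Each axis can then be drilled into with its own underlying data stream, which is what makes the format useful for comparison and depth at the same time.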

Some additional perspectives on doing DE can be found in the guide (PDF) by Elizabeth Dozois and colleagues, published by the McConnell Foundation.

Next time you are considering a DE, look at your present inventory of mindsets, skillsets, and toolsets to see if you have the right resources in place to do the work you need to do.

Need help? Contact us. We can help you set up a Developmental Evaluation, do one, or both. We’d love to hear from you.

Photo by Lucia Otero on Unsplash

Filed Under: Research + Evaluation, Toolkit Tagged With: complexity, dashboard, developmental design, developmental evaluation, evaluative thinking, living history method, mindfulness, spidergram, systemic design

Three Questions for Evaluative Thinking

2018-08-10 by cense

Evaluative thinking is at the heart of evaluation, yet it’s remarkably challenging to do in practice. To help strengthen those evaluative neural pathways, we offer some questions to aid you in developing your evaluative thinking skills.

To begin, let’s first look at this odd concept of ‘evaluative thinking’.

Tom Grayson’s recent post on the AEA 365 Blog looked at this topic more closely and provided a useful summary of some of the definitions of the term commonly in use. In its simplest terms: evaluative thinking is what we do when we think about things from an evaluation perspective, which is to say, a point of view that considers the merit, worth, and significance of something.

As with many simple things, there is much complexity on the other side of this topic. While we have many methods and tools that can aid us in the process of doing an evaluation, engaging in the evaluative thinking that supports it is far more challenging. To help foster evaluative thinking, we suggest asking three simple questions:

What is going on?

This question is about paying attention and doing so with an understanding of perspective. Asking it gets you to focus on the many things that might be happening within a program and the context around it. It gets you to pay attention to the activities, actors, and relationships between them through simple observation and listening. By asking this question you can also start to empathize with those engaged in the program.

Ask: 

What is going on for [ ] person?

What is going on in [ ] situation?

What is going on when I step back and look at it all together? 

Inquiring about what is going on enlists one of the evaluator’s most powerful assets: curiosity.

By starting to pay attention and question what is going on around you, from the smallest and most mundane activities through to the common threads across a program, you will start to see things you never noticed before or took for granted. This opens up possibilities to see connections, relationships, and potential opportunities that were previously hidden.

What’s new?

Asking about what is new is a way to build on the answers from the first question. By looking at what is new, we start to see what might be elements of movement and change. It allows us to identify where things are shifting and where the ‘action’ might be within a program. Most of what we seek in social programs is change — improvements in something, reductions in something else — and sometimes these changes aren’t obvious. Sometimes they are so small that we can’t perceive them unless we pause and look and listen.

There are many evaluation methods that can detect change; however, asking what’s new can help you direct an evaluation toward the methods best suited to capturing that change clearly. Asking this question also amplifies your attentive capacity, which is enormously important for detecting both large and small changes (because small changes can often have big effects in complex systems like those in human services).

What does it mean?

This last question is about sensemaking. It’s about understanding the bigger significance of something in relation to your enterprise. There can be a lot happening and a lot changing within a program, yet it might not mean much to the overall enterprise. Conversely, there can be little to nothing happening, which can be enormously important for an organization by demonstrating the poor effects of an intervention or program or, in the case of prevention-based programs, by showing success.

This question also returns us to empathy and encourages perspective-taking by getting us to consider what something means for a particular person or audience. A system (like an organization or program) looks different depending on where you sit in relation to it. Managers will have a different perspective than front-line staff, which differs again from that of clients and customers, and again from that of funders or investors. ‘Success’ or ‘failure’ is judged from the perspective of the viewer, and a program may be wildly successful from one vantage point (e.g., easy to administer for a manager) and a failure from another (e.g., relatively low return on investment from a funder’s point of view).

This question also affords an opportunity to get a little philosophical about the ‘big picture’. It allows program stakeholders to inquire about what the bigger ‘point’ of a program or service is. Many programs, once useful and effective, can lose their relevance over time due to new entrants to a market or environment, shifting conditions, or changes in the needs of the population served. By not asking this question, there is a risk that a program won’t realize it needs to adapt until it is too late.

 

By asking these three simple questions you can kick-start your evaluation and innovation work and better strengthen your capacity to think evaluatively.

Photo by Tim Foster on Unsplash

Filed Under: Research + Evaluation, Toolkit Tagged With: attention, change, complex systems, complexity, critical thinking, evaluation, evaluative thinking, program evaluation, sensemaking, systems thinking, tools
