Cense

Research.Design.Impact.


Three Questions for Evaluative Thinking

2018-08-10 by cense

Evaluative thinking is at the heart of evaluation, yet it’s remarkably challenging to do in practice. To help strengthen those evaluative neural pathways, we offer some questions to aid you in developing your evaluative thinking skills.

To begin, let’s first look at this odd concept of ‘evaluative thinking’.

Tom Grayson’s recent post on the AEA 365 Blog looked at this topic more closely and provided a useful summary of some of the definitions of the term commonly in use. In its simplest terms, evaluative thinking is what we do when we think about things from an evaluation perspective, which is to say, a point of view that considers the merit, worth, and significance of something.

Like many simple things, there is much complexity on the other side of this topic. While we have many methods and tools that can aid us in the process of doing an evaluation, engaging in the evaluative thinking supporting it is actually far more challenging. To help foster evaluative thinking we suggest asking three simple questions:

What is going on?

This question is about paying attention and doing so with an understanding of perspective. Asking this question gets you to focus on the many things that might be happening within a program and the context around it. It gets you to pay attention to the activities, actors, and relationships that exist between them by simple observation and listening. By asking this question you also can start to empathize with those engaged in the program.

Ask: 

What is going on for [ ] person?

What is going on in [ ] situation?

What is going on when I step back and look at it all together? 

Inquiring about what is going on enlists one of the evaluator’s most powerful assets: curiosity.

By starting to pay attention and question what is going on around you, from the smallest and most mundane activities through to the common threads across a program, you will start to see things you never noticed before or took for granted. This opens up possibilities to see connections, relationships, and potential opportunities that were previously hidden.

What’s new?

Asking about what is new is a way to build on the answers from the first question. By looking at what is new, we start to see what might be elements of movement and change. It allows us to identify where things are shifting and where the ‘action’ might be within a program. Most of what we seek in social programs is change — improvements in something, reductions in something else — and sometimes these changes aren’t obvious. Sometimes they are so small that we can’t perceive them unless we pause and look and listen.

There are many evaluation methods that can detect change; however, asking what’s new can help you direct an evaluation toward the methods best suited to capturing this change clearly. Asking this question also amplifies your attentive capacity, which is enormously important for evaluation in detecting large and small changes (because often small changes can have big effects in complex systems like those in human services).

What does it mean?

This last question is about sensemaking. It’s about understanding the bigger significance of something in relation to your enterprise. There can be a lot happening and a lot changing within a program, but it might not mean a whole lot to the overall enterprise. Conversely, there can be little to nothing happening, which can be enormously important for an organization by demonstrating the poor effects of an intervention or program or, in the case of prevention-based programs, demonstrating success.

This question also returns us to empathy and encourages some perspective-taking by getting us to consider what something means for a particular person or audience. A system (like an organization or program) looks different depending on where you sit in relation to it. Managers will have a different perspective than front-line staff, which differs again from that of clients and customers, and differs yet again from that of funders or investors. The concept of ‘success’ or ‘failure’ is judged from the perspective of the viewer, and a program may be wildly successful from one perspective (e.g., easy to administer for a manager) and a failure from another (e.g., relatively low return on investment from a funder’s point of view).

This question also affords an opportunity to get a little philosophical about the ‘big picture’. It allows program stakeholders to inquire about what the bigger ‘point’ of a program or service is. Many programs, once useful and effective, can lose their relevance over time due to new entrants to a market or environment, shifting conditions, or changes in the needs of the population served. By not asking this question, there is a risk that a program won’t realize it needs to adapt until it is too late.

 

By asking these three simple questions you can kick-start your evaluation and innovation work and strengthen your capacity to think evaluatively.

Photo by Tim Foster on Unsplash

Filed Under: Research + Evaluation, Toolkit Tagged With: attention, change, complex systems, complexity, critical thinking, evaluation, evaluative thinking, program evaluation, sensemaking, systems thinking, tools

Theory of change: An introduction

2017-09-26 by cense

Is a tree alive and growing, or dead and ready to be made into furniture? How does something like a tree connect to providing a swing, becoming a coffee table, or supporting the structure of a home? The answer rests partly on a theory of change about how a tree does what it does. That might sound strange, but for more sophisticated things like human service programs, linking what something does to what it achieves often requires a tool for explanation, and a Theory of Change can serve this need well if used appropriately.

Theory of Change is described by the Center for Theory of Change as “a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.” It has taken hold in the non-profit and philanthropic sectors in recent years as a means of helping program developers, funders, and staff articulate the value of a program and its varied purposes by linking activities to specific behavioural theory.

Matthew Forti, writing in the Stanford Social Innovation Review (SSIR), suggests a Theory of Change (ToC) should contain the following:

To start, a good theory of change should answer six big questions:
1. Who are you seeking to influence or benefit (target population)?
2. What benefits are you seeking to achieve (results)?
3. When will you achieve them (time period)?
4. How will you and others make this happen (activities, strategies, resources, etc.)?
5. Where and under what circumstances will you do your work (context)?
6. Why do you believe your theory will bear out (assumptions)?

Unlike a program logic model, which articulates program components, expected outputs, and outcomes, a ToC explains how and why a particular set of actions is expected to produce a change, along with the assumptions that underpin it all. A ToC can be used with a program logic model or be developed independently.

What a ToC is meant to do is allow you to explain in simple language the connection between a program’s purpose, design, and execution and what it produces in terms of benefit and impact. While it may draw on theories that have been published or tested, it may also be unique to the program itself, but in all cases, it is meant to be understandable to a variety of stakeholders and audiences.
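To make those six questions concrete, here is a minimal sketch (our illustration, not Forti’s method) of how the answers could be captured as a simple data structure so that gaps are easy to spot. The class name, fields, and example program are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class TheoryOfChange:
    """One field per 'big question'; empty fields flag unanswered questions."""
    target_population: str                 # 1. Who are you seeking to influence or benefit?
    results: list                          # 2. What benefits are you seeking to achieve?
    time_period: str                       # 3. When will you achieve them?
    activities: list                       # 4. How will you and others make this happen?
    context: str                           # 5. Where and under what circumstances?
    assumptions: list = field(default_factory=list)  # 6. Why do you believe it will bear out?

    def unanswered(self):
        """Return the fields that still lack an answer."""
        return [name for name, value in vars(self).items() if not value]

# A hypothetical community program, for illustration only.
toc = TheoryOfChange(
    target_population="New parents in an under-served neighbourhood",
    results=["Improved parenting confidence", "Stronger peer support networks"],
    time_period="Within 18 months of enrolment",
    activities=["Weekly peer groups", "Home visits", "Referral partnerships"],
    context="Community centre settings, delivered with local partners",
)
print(toc.unanswered())  # ['assumptions'] -- the 'why' still needs articulating
```

However the answers are recorded, the point is the same: a ToC is incomplete until all six questions have an explicit answer.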

Creating a Theory of Change

A strong ToC requires some understanding of behaviour change theory: what do we know about how change happens? It can’t simply end with “and then change happens”; it must have some kind of logic that can be simply expressed and, wherever possible, tied to what we know about change at the individual, group, organizational, or system level, or some combination of these. It’s for this reason that bringing in expertise in behaviour change is an important part of the process.

That is one of the points that Kathleen Kelly Janus, also writing in the SSIR, recently made as part of her recommendations for those looking to increase the impact of creating a ToC. She suggests organizations do the following:

  1. Engage outside stakeholders
  2. Include your board and staff
  3. Bring in an outside facilitator
  4. Clearly define the outcomes that will spell success
  5. Track your results rigorously

Inclusion, consultation, and collaboration are all part of the process of developing a ToC. Engagement with diverse stakeholders — particularly those who sit apart from the program — is critical because they will see your program differently. Outsiders will not get caught up in jargon or internal language, nor be beholden to current program structures as explanations for change.

Defining the outcomes is important because change requires an explanation of the current state and what the changed state(s) look like. The more articulate you can be about what these outcomes might be, the more reflective the ToC will be of what you’re trying to do. By defining the outcomes better, a ToC can aid a program in developing the appropriate metrics and methods to best determine how (or whether) programs are manifesting these outcomes through their operations.

Supporting strategy

A ToC is best used as an active reference source for program managers, staff, and stakeholders. It can continually be referred to as a means of avoiding strategy ‘drift’ by connecting the programs that are in place to outcomes and reminding management that if the programs change, so too might the outcomes.

A ToC can be used as a developmental evaluation tool, allowing programs to see what they can do and how different adaptations might fit within the same framework for behaviour change to achieve the same outcomes. Alternatively, it can also be used to call into question whether the outcomes themselves are still appropriate.

To make a ToC accessible, easy to read, and easy to understand, the key is to make it visual. Employing someone with graphic design skills to help bring the concepts to life in a visual representation can clarify key ideas and get people beyond words. It’s easy to get hung up on theoretical language and specific terms when using words alone; where possible, use visuals, narrative, and representations. Metaphors, colour, and texture can bring a ToC to life.

A ToC, when developed appropriately, can provide enormous dividends for strategy, performance, and evaluation and help all members of an organization (and its supporters and partners) understand what it is all about and how what it does is linked to what it aims to achieve. The ToC can serve your communications, strategy development, and evaluation plans if done well and appropriately facilitated, particularly for complex programs. It doesn’t solve all your problems, but few things will better help you understand what problems you’re trying to solve, and how you might solve them, than a good Theory of Change.

If you need help building a Theory of Change, contact us and we can help you develop one and show you how it can support the strategy, innovation, and evaluation needs of your programs and your organization as a whole.

Filed Under: Research + Evaluation, Social Innovation Tagged With: evaluation, program evaluation, social innovation, strategy, theory of change

Better data visualizing, better impact

2017-09-18 by cense

What good is a program evaluation if its findings aren’t used? Not much. Even as an accountability mechanism, evaluations have the potential to demonstrate the impact a program is having in the world and reveal new insights to guide strategy in ways that few other things can. Although there are many ways to convey evaluation findings, one of the most typical is through an evaluation report.

A good report should not only reflect what happened in the program but also inspire its readers to take action. This involves making sure the key points are made, but also that the findings are communicated in ways that can be easily understood by audiences who may not have access to the evaluator or program staff (which is why reports are written and codified and why, despite their limitations, they are unlikely to disappear).

There are courses aimed at preparing stronger evaluation reports, best practice reports, tools for creative visualization, and entire fields of literature based on knowledge translation. But if you want to create real impact, you need to deliver knowledge where it is needed, for purposes that may not yet be fully known, because great data reports create possibilities; they don’t just communicate results.

Just as light and structure can be reflected in many ways, so too can the contents of an evaluation report inspire new ways of seeing programs, data, and strategic opportunities. It all comes down to how the data is presented and in what measure. To support this, we present some tools, people, and resources that can help you make better use of the opportunity an evaluation report offers through visualization and better communication design.

Resources

Stephanie Evergreen is a specialist in data visualization for evaluation. Her website features a lot of great resources, including links to her books and resource cards, along with tips and tricks to take everyday data and transform it into something attractive, engaging, and more useful for audiences.

Kylie Hutchinson, an evaluator and a passionate advocate for better communication in program evaluation, has produced a wonderfully appropriate new resource for evaluators called the Evaluation Reporting Guide that is available through her website. It is a useful guide for being more innovative in the way evaluation findings are reported and comes from someone who knows how to do it.

Kumu is a tool that takes networked data and allows anyone with a basic understanding of network theory to create useful, interactive visuals that can be manipulated and presented in different formats for audiences looking to see the bigger picture.
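As a hedged sketch of what feeding networked data into a tool like Kumu can look like, here is one way to write a small stakeholder referral network out as JSON with elements and connections. The organizations and relationships are invented, and the exact import schema should be verified against Kumu’s own documentation:

```python
import json

# Hypothetical referral relationships observed during an evaluation.
edges = [
    ("Community Centre", "Food Bank"),
    ("Food Bank", "Housing Services"),
    ("Community Centre", "Housing Services"),
]

# Build a simple elements/connections structure of the kind Kumu can import.
blueprint = {
    "elements": [{"label": name} for name in sorted({n for pair in edges for n in pair})],
    "connections": [{"from": source, "to": target} for source, target in edges],
}

with open("stakeholder-network.json", "w") as f:
    json.dump(blueprint, f, indent=2)
```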

Cole Nussbaumer Knaflic writes a wonderful blog on how to tell better stories through data, combining data visualization with tips on narrative writing that can help even the least creative person imagine new possibilities in the data they have available.

Sometimes it’s just about seeing examples. This post from Import provides some classic examples from the history of data visualization and the latest research to illustrate how data has been used to showcase findings from research and evaluation in creative ways to tell better stories.

Lastly, no commentary on ways to see data differently would be complete without a mention of the incredible works of Edward Tufte, one of the pioneers in data visualization and author of some of the most beautiful, provocative works on the subject ever written.
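To show what the decluttering these authors advocate can look like in practice, here is a small sketch using Python and matplotlib. The survey data and labels are invented, and the styling choices (no frame, no axis ticks, direct labels, one emphasized bar, a title that states the finding) are a generic application of this advice rather than any one author’s prescribed method:

```python
import matplotlib.pyplot as plt

# Hypothetical survey findings: share of participants agreeing with each statement.
statements = ["Felt heard", "Learned new skills", "Would recommend"]
agree = [0.62, 0.48, 0.81]

fig, ax = plt.subplots(figsize=(6, 3))
# Grey for context, one saturated colour for the finding we want to emphasize.
bars = ax.barh(statements, agree, color=["#9e9e9e", "#9e9e9e", "#2a6f97"])

# Declutter: remove the frame and ticks, and put numbers next to the data.
for spine in ax.spines.values():
    spine.set_visible(False)
ax.set_xticks([])
for bar, value in zip(bars, agree):
    ax.text(value + 0.01, bar.get_y() + bar.get_height() / 2,
            f"{value:.0%}", va="center")

# A title that states the finding, not just the topic.
ax.set_title("Most participants would recommend the program")
fig.tight_layout()
fig.savefig("findings.png", dpi=200)
```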

Creating pictures, telling stories.

Each of these resources provides different ways to see, play with, and present evaluation findings and data. There is no ‘right’ way to do it; rather, there are many ways to tell stories, and some will resonate with your audience. Data visualization and creative report writing are only effective if you know what your audience wants, who they are, and what their motivations are to engage with your content.

The best visuals will not help if you aren’t delivering a product that an audience is ready to see or able to act on, but it is a start. A better looking, more coherent report told through visuals utilizes more of our senses and provides greater opportunities for more people to engage with its content by relying on more than just logic, ‘hard numbers’ and evidence to include things like narrative, emotion, and relationships – the things that make us all human whether we are a data whiz or not.

If you’re looking to make more impactful use of the knowledge you have in your organization, connect with us and we can help you take the best of what you know and transform it into stories about what you do for those who matter.

Filed Under: Research + Evaluation, Toolkit Tagged With: communication, data visualization, evaluation, knowledge translation, program evaluation, software, toolkit

Mindfulness in Developmental Evaluation

2017-04-12 by cense

A place to sit, reflect, observe and evaluate

One of the tools within the toolkit of a developmental evaluator is mindfulness. Mindfulness is a disciplined, regular, and persistent means of paying attention to what is going on. It is a terrific means of collecting data on regular activities, particularly in complex environments where it may be unclear what is worth paying attention to.

In an earlier article, Cameron Norman outlines how mindfulness and developmental evaluation work together, highlighting some of the emerging scholarship in the area of organizational mindfulness as an example. As discussed in that piece:

Mindfulness is the disciplined practice of paying attention. Bishop and colleagues (2004 – PDF), working in the clinical context, developed a two-component definition of mindfulness that focuses on 1) self-regulation of attention that is maintained on the immediate experience to enable pattern recognition (enhanced metacognition) and 2) an orientation to experience that is committed to and maintains an attitude of curiosity and openness to the present moment.

It’s one thing to talk mindfulness, but what does this mean for evaluators and organizations in practice? How can we use mindfulness practice and theory to inform developmental evaluation in a practical manner?

We present some strategies we’ve employed in our client work and offer some suggestions for those seeking to bring a mindfulness approach to their developmental evaluations.

1. Introduce meditative practice through demonstrations. Let’s get the most obvious link out of the way: meditation. Meditation is usually the first thing most people think of when hearing the word mindfulness. Meditation is a practice, and while it is often linked with spiritual traditions from different cultures, it does not need to have a spiritual dimension if that’s not appropriate or useful. One of the simplest exercises is to walk members of the evaluation team through a short mindfulness exercise involving just sitting, closing or dimming the eyes, and paying attention to the breath and the thought patterns going through their heads in a non-judgemental manner. A simple one-minute exercise can alert people to the massive amount of stimuli — both inner and outer — that they either aren’t fully attuned to or hadn’t clearly noticed. Developmental evaluation is very much like this: it opens up our awareness of what’s happening in a living system as it goes.

2. Build a mindful culture. While meditation is useful, its benefits are only accrued when applied collectively as part of the evaluation. Not everyone will take to the idea of meditating (and it needs to be introduced safely), but the idea of paying attention at regular intervals, consistently, is effectively at the heart of a developmental evaluation, particularly in a highly complex context. To do this, create regular check-ins where people answer the simple question: What’s going on? It’s the equivalent of the “how was your day?” question we might ask our spouse or child. It allows people a momentary space to reflect on what happened within a time period and pick out what was meaningful to them. A follow-up question is: “What did you notice?”

3. Practice non-judgement, at first, and often. Evaluation is about judgement as much as anything else, but in a developmental evaluation context we may not know what benefits, drawbacks, implications, or opportunities are present in a situation in the present moment. Encourage participants to simply take note of what is going on, when, and what was observed, without attributing cause and effect or passing judgement (“good” or “bad”) right away. Attribution can be made later through a more structured process of sensemaking.

4. Make it social. Mindfulness is generally thought of as a solitary, intra-personal activity. In a developmental evaluation you’re collecting information from across an organization or program and it involves many people and many perspectives: it has to be social to work. This is where sensemaking comes in – a critical component of developmental evaluation. The meaning of any activity won’t be readily apparent without an ability to translate individual observations and reflections into a more collective understanding. Further, because developmental evaluation is used in contexts that are usually complex, the meaning of something will be best determined by having multiple perspectives on the issue — diverse perspectives on the system — brought to bear. Ensure that teams are scheduling regular sensemaking meetings alongside regular reflections.

5. Make it visual. Mindfulness yields a lot of data, and visual methods such as attractor maps and gigamaps can help organize it. Attractor maps can include different elements that represent certain properties, feelings, experiences, observations, and other data and put them together.

The timing of all of these activities is highly dependent upon the complexity and dynamism within the program domain. For example, a 12-week program aimed at educating individuals might have a lot of dynamism from week to week, enough that weekly check-ins and tri-weekly or monthly sensemaking sessions might be required. However, if the work being done is a collective impact model focused on advocacy for a major policy change, the activities and actions might be longer term. In that case, bi-weekly check-ins and perhaps quarterly sensemaking sessions are more appropriate.
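For teams that want to turn those cadences into a concrete calendar, here is a minimal sketch; the function, its parameters, and the dates are our own invention for illustration, not an established method:

```python
from datetime import date, timedelta

def reflection_schedule(start, weeks, checkin_every, sensemaking_every):
    """List (date, activity) pairs for a program of the given length.

    Cadences are in days and should be tuned to the program's dynamism,
    e.g. (7, 28) for a fast-moving 12-week course, or (14, 90) for a
    slower-moving collective impact effort."""
    end = start + timedelta(weeks=weeks)
    events, day = [], start
    while day <= end:
        elapsed = (day - start).days
        if elapsed % checkin_every == 0:
            events.append((day, "check-in: What's going on? What did you notice?"))
        if elapsed and elapsed % sensemaking_every == 0:
            events.append((day, "sensemaking session"))
        day += timedelta(days=1)
    return events

# A hypothetical 12-week program with weekly check-ins and monthly sensemaking.
for when, what in reflection_schedule(date(2017, 4, 17), 12, 7, 28)[:6]:
    print(when.isoformat(), "-", what)
```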

Mindfulness is a very powerful tool when employed in a developmental evaluation. Creating spaces and practices that allow people to mindfully reflect on their work, capture it and organize it can be a way of collecting data on and detecting subtle, evolving patterns of activity that are the hallmark of complex systems and programs.

Good luck. Be mindful.

Filed Under: Research + Evaluation, Toolkit Tagged With: attractor mapping, complexity, developmental evaluation, gigamap, meditation, mindfulness, program evaluation, sensemaking, visual methods, visualization
