Cense

Innovation to Impact

  • Our Services
    • Research + Evaluation
    • Service Design and Innovation Development
    • Education + Training
    • Chief Learning Officer Service
  • Who We Are
    • About Us
  • Tools & Ideas
    • Resource Library
  • Events
    • Design Loft
  • Contact
  • Academy

Forecasting

2020-09-01 by cense

You might not have a crystal ball, but you can still envision the near future by using a simple technique called forecasting to plot your strategy for the coming months. Here is how.

Fundamentals

A forecast is a data-driven prediction of possible outcomes that can be used to generate scenarios. The first item required is data. This can be qualitative, quantitative, or mixed and from primary or secondary sources. Most often, forecasts are a combination of these.

Checklists are useful tools for organizing the data that contribute to forecasts. Pull together the sources you have and organize them in a way that lets you build a narrative — a story — of what has happened, so you can better anticipate what might happen.

Forecasts work when there is some expectation of a roughly linear progression from one period to the next (with some variation). Time series data — data gathered on the same topic, issue, or item repeatedly over time — is among the most popular sources. It allows you to see patterns and spot the trends that led to the present.
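As a sketch of what spotting a trend in time-series data can look like, the snippet below fits a simple least-squares line to twelve months of figures and extrapolates it three months ahead. The data and the 'enrolments' label are invented purely for illustration:

```python
# A minimal sketch of trend-spotting with an ordinary least-squares
# line fit. The monthly figures below are hypothetical.

def linear_trend(values):
    """Fit y = a + b*x to equally spaced observations; return (a, b)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def forecast(values, steps_ahead):
    """Extrapolate the fitted line `steps_ahead` periods past the data."""
    a, b = linear_trend(values)
    return a + b * (len(values) - 1 + steps_ahead)

# Twelve months of (made-up) program enrolments:
monthly = [102, 108, 110, 117, 121, 128, 131, 140, 143, 150, 155, 161]
print(round(forecast(monthly, 3)))  # projected enrolment three months out
```

A straight-line fit is the simplest possible forecast; it is exactly the kind of model that the seasonal 'amplifiers and dampeners' discussed below can upend, which is why the numbers are a starting point for discussion, not an answer.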

Add Imagination

Once you’ve developed a model of the present situation, the next stage is to imagine what might happen in the near future. Forecasts are generally useful for predicting near-term outcomes (e.g., 3, 6, or 12 months out) and are less useful for longer-term assessments.

Next, match data from other sources — social trends, government policy documents, census data — to create scenarios. For example, seasonal trends can change the near term: ‘seasons’ like back-to-school, holidays, flu season, and weather changes can all make present data misleading about future activity. The COVID-19 pandemic provided an example of the various ways an economy can re-open, a healthcare system can respond, and what ‘back-to-school’ can look like.

From these data points, working together as a team (this is always better done in groups because different people will see the data differently), you can start to envision possible futures and outcomes.

Look for amplifiers and dampeners. What might make an existing trend more pronounced, and what might dampen it or extinguish it altogether? Discussing these as a group will surface factors that no one person would spot alone.

Structuring Forecasts: Tips & Tricks

Begin your group work with a few simple ‘rules’ to guide the discussion. Start by limiting any feedback or critique of ideas: you want to explore why something could happen, not assess its likelihood at first. This opens our minds up to unlikely scenarios.

It’s helpful to have someone on the team who can play the role of the ‘black hat’ – the person whose job is to illustrate why something won’t work. Edward de Bono’s ‘thinking hats’ can be useful here in structuring ways to look at the data and ideas from different points of view. Building on these perspectives, it’s important to build a variety of scenarios and attach to each an anticipated likelihood (e.g., high, medium, or low) and timing (e.g., imminent, soon, long-term).

Build out as many scenarios as the data suggests might be useful. This is often three to five, but rarely nine or ten.

From these scenarios, ‘walk them back’ to the present by asking “what happened just before X?” and repeating that question for each answer until you find yourself at the present. This lets you start building pathways of potential causality.
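As an illustration only, the walk-back can be sketched as a simple loop that follows “what happened just before X?” answers until it reaches the present. The scenario and events here are invented placeholders:

```python
# A minimal sketch of 'walking back' a scenario to the present by
# repeatedly asking "what happened just before X?". All events here
# are hypothetical.

def walk_back(scenario, preceding_event):
    """Build a pathway from a future scenario back to the present.

    `preceding_event` maps each event to what happened just before it,
    with the present situation mapped to None."""
    pathway = [scenario]
    while preceding_event.get(pathway[-1]) is not None:
        pathway.append(preceding_event[pathway[-1]])
    return list(reversed(pathway))  # present first, scenario last

# Hypothetical causal chain for a 'demand doubles' scenario:
preceding = {
    "demand doubles": "waiting list opens",
    "waiting list opens": "referrals spike",
    "referrals spike": "partner program closes",
    "partner program closes": None,  # the present situation
}
print(" -> ".join(walk_back("demand doubles", preceding)))
```

Each arrow in the printed pathway is an assumption the team can then interrogate.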

While none of the scenarios may come to pass exactly, there are likely to be pathways that resemble them. When you find these, your team can use them to examine the assumptions you hold about each one and develop a strategy around them, increasing your anticipatory awareness and your adaptive capacity to learn and act.

Taken together, this method can help you to see what might be coming and plan accordingly. It is a powerful means to explore near futures and design your organization to be better suited to living in them rather than having to play catch-up.

To go even deeper, the Future Today Institute has developed a ‘Funnel’ model to guide forecasting that might be useful to you as well.

FTI Funnel Tool (download)

If you want to develop forecasts, contact us. We can help you see what might be coming and design your team to better meet it.

Filed Under: Toolkit Tagged With: data, forecast, foresight, futures, methods

The Amazing Spidergram

2019-08-22 by cense

Illustrating action points within a complex system challenges evaluation users. Time to call on your friendly neighbourhood spidergram for help.

Visualizing complex systems is a challenge within strategy, foresight, and evaluation because each component of the system is interconnected with others. Influence on one of these is likely to influence others.

From an action standpoint, it’s easier to focus on one or two small parts of the system than tackle the entire system all at once. How do we reconcile this and provide a means to see the parts of the system without being reductionistic and neglecting the relationship with the whole?

One answer is: look to the spiderweb.

A spiderweb is a good entry-level metaphor for helping people see places they can take action within a system by creating distinctions between the parts (nodes, intersections) and the whole (branches, webs). There are two related, but different spiderweb models worth noting.

Spider Diagrams / Mindmaps

Spider diagrams (or spidergrams or mindmaps) are ways to connect ideas together through the branch-and-thread model akin to a spider’s web (hence the name). These are often called mindmaps and have been shown to facilitate learning about complex topics.

They enable the development of relationships between ideas and possible causal or associative pathways between ideas, concepts, or other data- or evidence-informed concepts. They also enable us to cluster related concepts together to identify sub-systems that may be more amenable to our intervention within the larger whole.

Spidergrams / Radar Diagrams

The other spider-related metaphor available to innovators and evaluators is the spidergram, sometimes called a radar chart or spider chart. It allows data collected along a scale to be displayed alongside other measures that use similarly proportioned scales.

This hub-and-spoke model of data allows users to see a variety of performance indicators presented along a similar set of axes related to a common goal.

What this allows for is a view of performance across a variety of metrics simultaneously, and it can reveal how progress in one area often comes at the expense of another. Strategically, it can enable an organization to balance its actions and foci across a variety of key indicators at the same time.
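The hub-and-spoke idea can be sketched in code: each indicator gets its own spoke, raw scores are normalised to a common 0-to-1 scale, and each value is placed at an equal angle around the hub. The indicator names and figures below are hypothetical:

```python
# A minimal sketch of the geometry behind a spidergram/radar chart:
# normalise each indicator to a shared 0-1 scale, then place it at an
# equal angle around the hub. All indicators and scores are made up.
import math

def radar_points(scores, max_scores):
    """Return (x, y) polygon vertices for normalised scores on n spokes."""
    n = len(scores)
    points = []
    for k, (score, top) in enumerate(zip(scores, max_scores)):
        r = score / top              # normalise to the common 0-1 scale
        angle = 2 * math.pi * k / n  # spokes at equal angles around the hub
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

indicators = ["reach", "quality", "cost control", "staff capacity", "learning"]
q1 = [40, 7.5, 80, 3, 6]         # raw scores, each on its own scale
tops = [50, 10, 100, 5, 10]      # the maximum of each scale

for name, (x, y) in zip(indicators, radar_points(q1, tops)):
    print(f"{name:15s} ({x:+.2f}, {y:+.2f})")
```

Joining the vertices into a polygon (in any charting tool) gives the familiar web shape; a lopsided polygon is exactly the "progress here at the expense of there" pattern described above.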

This can be used with quantitative data, such as financial metrics, or with social data, too.

Spidergrams/charts can also overlay multiple data series within the same domain, providing even more depth into recurring or separated data points within the same topic or subsystem.

The web of engagement

What makes these tools powerful is that they display a lot of data at the same time in a manner that can facilitate engagement with a group of people tasked with making decisions. Visualizing data or systems brings the benefit of literally getting people’s focus on the same page.

People around a table can then literally point to the areas they are interested in, concerned with, do not understand, or wish to explore assumptions about.

Complex systems introduce a lot of data and a lot of confusion. Sensemaking through the use of visuals and the discussion that they encourage is one of the ways to reduce confusion and get more from your data to make better decisions.

If you’re stuck in the web of data and complexity, call on your friendly neighbourhood spidergram.

Want to learn more about how this can assist your innovation efforts? Contact us and we’ll gladly swing over and help (without the costume).

Photo by Jean-Philippe Delberghe on Unsplash.

Filed Under: Complexity, Research + Evaluation, Toolkit Tagged With: data, data visualization, evaluation method, foresight, mindmap, spidergram, strategy, tool

Developmental Evaluation Trap #2: The Pivot Problem

2018-05-17 by cense

In this third in a series on Developmental Evaluation traps, we look at the trap of the pivot. You’ve probably heard someone talk about innovation and ‘pivoting’ or changing the plan’s direction.

The term pivot comes from the Lean Startup methodology and is often found in Agile and other product development systems that rely on short-burst, iterative cycles that include data collection processes used for rapid decision-making. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. That sounds pretty familiar to those looking to innovate, so where’s the trap?

The trap is that the decisions made aren’t grounded in data or the decision-making is flawed in how it uses data. In both cases, the decision to change direction is more arbitrary than evidence-based.

What do we mean by this?

The data problem

When innovating, it’s important to have a data collection system in place that gathers the information needed to make a useful decision, such as whether to continue the process, make a change, or abandon the activity. James Dyson famously trialled his designs thousands of times, with ‘tweaks’ both large and small, to arrive at the right product. A hallmark of this process is the emphasis on collecting the right data at the right time (which the company calls testing).

Dyson has since expanded its product offerings to include lighting, hair care products, industrial drying tools, and an array of vacuum models. While the data needs for each product differ, the design-driven strategy that incorporates data throughout decision-making remains the same: different data, used for the right purpose.

Alas, DE has given organizations cover for making arbitrary decisions under the banner of pivoting when they really haven’t executed well or given things enough time to determine whether a change of direction is warranted. Here are three things to heed when considering DE data.

  1. Process data. Without a clear indication that a program has been implemented appropriately and the constraints accounted for, how do you know that something ‘failed’ (or ‘succeeded’) based on what you did? Understanding what happened, under what conditions, and documenting the implementation behind the innovation is critical to knowing whether that innovation is really doing what we expect it to (and ensuring we capture the things we might not have expected it to do).
  2. Organizational mindfulness. The biggest challenge might be internal to the organization’s heart: its mindset. Organizational mindfulness is about paying attention to the activities, motivations, actions, and intentions of the entire enterprise and being willing (and able) to spot biases, identify blind spots, and recognize that, as much as we say we want to change and innovate, change is disruptive for many people and something often unconsciously thwarted.
  3. Evaluability assessment. A real challenge with innovation work is knowing whether you’ve applied the right ‘dosage’ and given it the right amount of time to work. This means doing your homework, paying attention, and having patience. Homework comes in the form of background research on other, similar innovations, connecting to the wisdom of the innovators themselves (i.e., drawing on experience), and tying it together. Paying attention ensures you have a plausible means to connect intervention to effect (or product to outcome). This is like Kenny Rogers’ ‘The Gambler’:

you got to know when to hold ’em

know when to fold ’em

know when to walk away

know when to run

An evaluability assessment can help spot the problems with your innovation data early by determining whether your program is ready to be evaluated in the first place and what methods might be best suited to determining its value.

The decision-making problem

Sometimes you have good data, but do you have the decision-making capabilities to act on it? With innovation, data rarely tells a straightforward story: it requires sensemaking. Sensemaking requires time and socialization of the content to determine the value and meaning of the data within the context it’s being used.

Decision-making can be impeded by a few things:

  1. Time. Straight up: if you don’t give this time and focus, no amount of good data, visualizations, summaries, or quotes will help you. Make time to reflect substantially on what you’re doing and to discuss it with those for whom it makes sense.
  2. Talent. Diverse perspectives around the table are an important part of sensemaking, but so is some expertise in the process of making and implementing decisions — particularly in design. An outside consultant can help you work with your data to see possibilities and navigate blind spots, as well as support your team in making important, sometimes difficult, decisions.
  3. Will. You can give time and have talent, but are you willing to make the most of both? For the reasons raised above about being mindful of your intentions and biases, having the right people in place will not help if you’re unwilling to change what you do, follow when led, and lead when asked.

Developmental evaluation is powerful and useful, but it is not often easy (although it can be enormously worthwhile). Like most of the important things in life, you get out what you put in. Put a little energy into DE, be mindful of the traps, and you can make this approach to evaluation your key to innovation success.

Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.

Filed Under: Research + Evaluation, Social Innovation Tagged With: data, data collection, decision-making, developmental evaluation, evaluation, innovation, social innovation, strategy

Dashboards for innovation evaluation

2017-09-11 by cense

A dashboard system is a way of tracking project activities across a range of different lines of inquiry. This is of particular benefit to those interested in developmental evaluation. Developmental evaluation (DE) is an approach to evaluating programs with an innovation focus and can be used as a means of generating data to support strategy development and deployment. That’s how we use DE.

Developmental evaluations often have multiple components that include a variety of methods, data sources, tasks and duties, and sensemaking requirements. All of this information can make it difficult for the client to gain a sense of what is happening in a DE, and it can confuse the evaluator at the same time. One way to manage this is to create what we refer to as a dashboard system.

Dashboards: Making strategy and evaluations visible

A dashboard is a means of showcasing multiple information streams in a simple, accessible manner that is easily understood. It might include notes from a meeting, data collection summaries, schedules of activities, and reflective notes. It is not a wholesale repository of all data, but rather a higher-level summary that includes enough detail to convey what activities are happening, when, and to what degree.

Among our favourite tools is Trello. Trello is organized into a series of ‘boards’ that allow you to organize content based on a topic or theme and create a simultaneous ‘stream’ of content within each board.

For example, a developmental evaluation might involve something like the following boards:

  1. Interviews with key stakeholder groups
  2. Observations
  3. Document reviews
  4. Sensemaking and strategy notes and outcomes
  5. Meeting minutes
  6. Survey or other data collection results
  7. Reflective notes
  8. Data collection plan

These boards would include any project component that has a distinct, coherent set of activities associated with it. It could include methods, data components, meetings and activities, populations or key document groups.
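As a rough sketch of the idea (not the Trello API), a dashboard system can be thought of as boards holding short card-style summaries rather than raw data. The board names below mirror some of those listed above; the card text is hypothetical:

```python
# A minimal sketch of a dashboard system as plain data: one 'board'
# per project component, each holding high-level card summaries.
# Board names follow the example list; card contents are made up.
from datetime import date

boards = {
    "Interviews": [],
    "Observations": [],
    "Meeting minutes": [],
    "Reflective notes": [],
}

def add_card(board, summary, when=None):
    """File a high-level summary (not raw data) under a board."""
    boards[board].append({"summary": summary, "date": when or date.today()})

def quick_look():
    """Print a one-line status per board: what is there, what is missing."""
    for name, cards in boards.items():
        status = f"{len(cards)} item(s)" if cards else "nothing yet"
        print(f"{name:18s} {status}")

add_card("Interviews", "Round 1 complete: 6 of 10 stakeholders interviewed")
add_card("Reflective notes", "Team unsure how to read early survey signal")
quick_look()
```

The 'nothing yet' rows are the point: an empty board makes a gap in the data collection plan visible at a glance, which is the quick-look function a dashboard is meant to serve.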

Tools for thinking and seeing

Trello is just a tool. What we’ve illustrated above could be done as channels within Slack or even as a series of sections within an MS Word or Google Doc. Even an Excel spreadsheet can work. Anything that allows someone to visually track details about discrete areas within a larger project will do.

(Although tools like MS Word or Excel will work, they are less than ideal, as they require strict version control to track changes, which can be difficult to manage, particularly when multiple contributors edit the document regularly. The reason one might choose MS Word or Excel is less about functionality and more that it is what people are used to.)

What makes a DE a challenge is that a lot of data is generated from many different sources within a complex context. This makes managing the information a challenge, which in turn poses problems for generating helpful insights from the data and for identifying emergent patterns.

By making data visible, a dashboard allows multiple eyes and multiple perspectives to ‘see’ the data, which expands the chances of spotting possibilities and emergent conditions. The dashboard also increases accountability for all parties. For clients, it provides regular insight into what the evaluator has access to and avoids surprises. For evaluators, it ensures that clients have the means to engage in the process rather than sitting back waiting for insights to be ‘served up’ to them. DE is best done as a collaboration between the client and the evaluator, and a dashboard can help with this.

Lastly, a dashboard system provides a means of organizing content and keeping track of what is collected, identifying what is needed, and providing a quick-look means to explore what is known and unknown in a project.

Cense employs the dashboard approach on its projects whenever possible. If this approach is of interest to your organization and you want a means of developing strategy and evaluations in a more engaging, transparent manner, connect with us and we can show you how this works in practice.


Filed Under: Research + Evaluation, Toolkit Tagged With: dashboard, data, data collection, developmental evaluation, evaluation, innovation, tools, Trello


Copyright © 2021 · Parallax Pro Theme on Genesis Framework · WordPress
