Cense

Innovation to Impact


Innovation Strategy: A Tool

2019-12-09 by cense

Innovation development and execution are highly affected by the kind of environment you operate in. For innovators in business, the public sector, and healthcare, questions like these abound:

How can we determine which kind of market to focus on, whether for products, services, or policies? What does innovation look like in each of these? How do we align who we are with what we want to do?

We have developed a new tool to help you navigate the various markets you are in, could be in, or would like to be in. The tool is available by clicking on the link.

Blue Ocean Strategy Framework

One of the popular approaches to conceptualizing the domains of organizational strategy is the Blue Ocean strategic model developed by Chan Kim & Renée Mauborgne. The model distinguishes Red Ocean from Blue Ocean environments or marketplaces where businesses compete**.

(** or any organization seeking to differentiate itself in the marketplace of ideas and attention — it’s not just for profit-seeking ventures)

The Blue Ocean Strategy Framework presents a dichotomy between zones of competition and makes the distinction that organizations are either creating their own path or competing within existing contexts.

This dichotomy is useful for business, but it neglects much of the work done within social innovation and public sector innovation, where spaces of co-creation and collaboration exist with and among partners by design and by necessity. In these areas, organizations need to work together across contexts, often in 'co-opetition', where organizations that compete for resources at one moment might rely on those same organizations to succeed at the next.

To help understand this we’ve developed a useful framework (or canvas, when used as a tool) for considering two additional areas to address when developing a strategy. This framework — presented visually below — introduces two new zones of strategy that complement the red/blue ocean strategy.

Beyond Oceans

The two additional zones are the Green Forest and an interstitial area akin to the Gulf Stream or North Atlantic Drift: ocean currents that carry water from both zones, greatly influence the climate beyond them, and support a unique ecosystem between them.

The Green Forest is an environment that feeds off the ocean and the nutrients from the currents. It’s those areas of collaborative innovation where no single organization can create a true difference on its own and where there might not even be an advantage to doing so.

Organizations might operate in many different areas depending on their size and configuration. However, the projects and work being done, and the strategy required to generate them, might require the kind of mapping and zonal 'marking' that helps an organization avoid confusion and discover the needs and challenges it faces more effectively.

Consider using this model in your work. None of these zones is 'good' or 'bad'; rather, they describe environments and contexts of strategy that can guide your innovation development and deployment. Knowing which environment you are working within allows your organization to strategically align its planning, resources, and operations to suit that context and succeed.

Note: Are you interested in exploring different strategic domains and want some help applying this framework to your organization? Contact us and we'll show you how navigating these different terrains can help you see more and do more in your organization than you ever thought possible.

Photo by Aviv Ben Or on Unsplash and by Fezbot2000 on Unsplash

Filed Under: Social Innovation, Strategy, Toolkit Tagged With: Blue Ocean Strategy, framework, government, healthcare, innovation, social innovation, social sector, strategy

Developmental Evaluation Trap #2: The Pivot Problem

2018-05-17 by cense

In this third post in a series on Developmental Evaluation (DE) traps, we look at the trap of the pivot. You've probably heard someone talk about innovation and 'pivoting', or changing a plan's direction.

The term pivot comes from the Lean Startup methodology and is often found in Agile and other product development systems that rely on short-burst, iterative cycles that include data collection processes used for rapid decision-making. A pivot is a change of direction based on feedback. Collect the data, see the results, and if the results don’t yield what you want, make a change and adapt. That sounds pretty familiar to those looking to innovate, so where’s the trap?

The trap is that the decisions made aren’t grounded in data or the decision-making is flawed in how it uses data. In both cases, the decision to change direction is more arbitrary than evidence-based.

What do we mean by this?

The data problem

When innovating, it’s important to have the kind of data collection system in place to gather the necessary information required to make a useful decision such as whether to continue on with the process, make a change, or abandon the activity. James Dyson famously trialed his products hundreds of times with ‘tweaks’ both large and small to get to the right product design. A hallmark feature of this process is the emphasis on collecting the right data at the right time (which they call testing).

Dyson has since expanded its product offerings to include lighting, personal hair products, industrial drying tools, and an array of vacuum models. While the data needs for each product might differ, the implementation of a design-driven strategy that incorporates data throughout the decision-making process remains the same. It’s different data used for the right purpose.

Alas, DE has given cover to organizations for making arbitrary decisions based on the idea of pivoting when they really haven’t executed well or given things enough time to determine if a change of direction is warranted. Here are three things that one needs to heed when considering DE data.

  1. Process data. Without a clear indication that a program has been implemented appropriately and the constraints accounted for, how do you know that something ‘failed’ (or ‘succeeded’) based on what you did? Understanding what happened, under what conditions, and documenting the implementation behind the innovation is critical to knowing whether that innovation is really doing what we expect it to (and ensuring we capture the things we might not have expected it to do).
  2. Organizational mindfulness. The biggest challenge might be internal to the organization's heart: its mindset. Organizational mindfulness means paying attention to the activities, motivations, actions, and intentions of the entire enterprise and being willing (and able) to spot biases, identify blind spots, and recognize that, as much as we say we want to change and innovate, change is disruptive for many people and something often unconsciously thwarted.
  3. Evaluability assessment. A real challenge with innovation work is knowing whether you've applied the right 'dosage' and given it enough time to work. This means doing your homework, paying attention, and having patience. Homework comes in the form of background research into other, similar innovations, connecting to the wisdom of the innovators themselves (i.e., drawing on experience), and tying it all together. Paying attention ensures you have a plausible means to connect intervention to effect (or product to outcome). Patience is like Kenny Rogers' 'The Gambler':

you got to know when to hold ’em

know when to fold ’em

know when to walk away

know when to run

An evaluability assessment can help spot the problems with your innovation data early by determining whether your program is ready to be evaluated in the first place and what methods might be best suited to determining its value.
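To make the data problem concrete, a pivot call grounded in data might look like the following minimal sketch. This is our own illustration, not part of Lean Startup or DE practice; the function name, thresholds, and outcome metric are all hypothetical:

```python
def pivot_decision(control, variant, min_n=100, min_lift=0.10):
    """Hypothetical pivot/persevere rule (illustration only).

    control, variant: lists of 0/1 outcomes, e.g. one entry per person
    who did (1) or did not (0) complete a desired action.
    min_n: minimum observations per group before any decision is made.
    min_lift: pre-registered relative improvement needed to persevere.
    """
    # Guard against arbitrary pivots: no decision without enough data.
    if len(control) < min_n or len(variant) < min_n:
        return "keep collecting data"
    base = sum(control) / len(control)
    new = sum(variant) / len(variant)
    if base == 0:
        return "persevere" if new > 0 else "pivot"
    lift = (new - base) / base
    return "persevere" if lift >= min_lift else "pivot"
```

The point of the sketch is the guard clause: a change of direction only counts as evidence-based when the sample size and the decision threshold are set before the results come in, not after.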

The decision-making problem

Sometimes you have good data, but do you have the decision-making capabilities to act on it? With innovation, data rarely tells a straightforward story: it requires sensemaking. Sensemaking requires time and a socialization of the content to determine the value and meaning of data within the context it’s being used.

Decision-making can be impeded by a few things:

  1. Time. Straight up: if you don't give this time and focus, no amount of good data, visualizations, summaries, or quotes will help you. You need time to reflect substantially on what you're doing and to discuss it with those for whom it makes sense.
  2. Talent. Diverse perspectives around the table are an important part of sensemaking, but so is expertise in the process of making and implementing decisions, particularly in design. An outside consultant can help you work with your data to see possibilities and navigate blind spots, as well as support your team in making important, sometimes difficult, decisions.
  3. Will. You can give time, have talent, but are you willing to make the most of them both? For the reasons raised above about being mindful of your intentions and biases, having the right people in place will not help if you’re unwilling to change what you do, follow when led, and lead when asked.

Developmental evaluation is powerful and useful, but it is not often easy (although it can be enormously worthwhile). Like most of the important things in life, you get out what you put in. Put a little energy into DE, be mindful of the traps, and you can make this approach to evaluation your key to innovation success.

Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.

Filed Under: Research + Evaluation, Social Innovation Tagged With: data, data collection, decision-making, developmental evaluation, evaluation, innovation, social innovation, strategy

Theory of change: An introduction

2017-09-26 by cense

Is the above tree alive and growing or dead and ready to be made into furniture? How does something like a tree connect to providing a swing, becoming a coffee table, or supporting the structure of a home? That is based partly on a theory of change about how a tree does what it does. That might sound strange, but for more sophisticated things like human service programs, linking what something does to what it achieves often requires a tool for explanation and a Theory of Change can serve this need well if used appropriately.

Theory of Change is described as “a comprehensive description and illustration of how and why a desired change is expected to happen in a particular context.” It has taken hold in the non-profit and philanthropic sectors in recent years as a means of providing guidance for program developers, funders, and staff in articulating the value of a program and its varied purposes by linking activities to specific behavioural theory.

Matthew Forti, writing in the Stanford Social Innovation Review (SSIR), suggests a Theory of Change (ToC) contain the following:

To start, a good theory of change should answer six big questions:
1. Who are you seeking to influence or benefit (target population)?
2. What benefits are you seeking to achieve (results)?
3. When will you achieve them (time period)?
4. How will you and others make this happen (activities, strategies, resources, etc.)?
5. Where and under what circumstances will you do your work (context)?
6. Why do you believe your theory will bear out (assumptions)?
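Forti's six questions can double as a completeness checklist for a draft ToC. As a rough sketch (the class and field names are our own illustration, not from the SSIR piece), a draft could be checked for unanswered questions like this:

```python
from dataclasses import dataclass, fields


@dataclass
class TheoryOfChange:
    """Forti's six questions, one field each (names are our own)."""
    target_population: str  # 1. Who are you seeking to influence or benefit?
    results: str            # 2. What benefits are you seeking to achieve?
    time_period: str        # 3. When will you achieve them?
    activities: str         # 4. How will you and others make this happen?
    context: str            # 5. Where and under what circumstances?
    assumptions: str        # 6. Why do you believe the theory will bear out?

    def unanswered(self):
        """Return the names of questions still left blank."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]
```

A draft that skips the time period and the assumptions, for example, would report exactly those two gaps, prompting the team to fill them before the ToC is shared.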

Unlike a program logic model, which articulates program components, expected outputs, and outcomes, a ToC explains how and why a particular set of actions is expected to produce a change, along with the assumptions that underpin it all. A ToC can be used with a program logic model or be developed independently.

What a ToC is meant to do is allow you to explain in simple language the connection between a program’s purpose, design, and execution and what it produces in terms of benefit and impact. While it may draw on theories that have been published or tested, it may also be unique to the program itself, but in all cases, it is meant to be understandable to a variety of stakeholders and audiences.

Creating a Theory of Change

A strong ToC requires some understanding of behaviour change theory: what do we know about how change happens? It can't simply end with "and then change happens"; it must have some kind of logic that can be simply expressed and, whenever possible, tied to what we know about change at the individual, group, organization, or system level, or a combination of these. It's for this reason that bringing in expertise in behaviour change is an important part of the process.

That is one of the points that Kathleen Kelly Janus, also writing in the SSIR, recently made as part of her recommendations for those looking to get more impact from creating a ToC. She suggests organizations do the following:

  1. Engage outside stakeholders
  2. Include your board and staff
  3. Bring in an outside facilitator
  4. Clearly define the outcomes that will spell success
  5. Track your results rigorously

Inclusion, consultation, and collaboration are all part of the process of developing a ToC. The engagement of diverse stakeholders, particularly those who sit apart from the program, is critical because they will see your program differently. Outsiders will not get caught up in jargon or internal language, nor be beholden to current program structures as explanations for change.

Defining the outcomes is important because change requires an explanation of the current state and what the changed state(s) will look like. The more articulate you can be about these outcomes, the more reflective the ToC will be of what you're trying to do. By defining outcomes well, a ToC can help a program develop the appropriate metrics and methods to determine how (or whether) it is manifesting these outcomes through its operations.

Supporting strategy

A ToC is best used as an active reference source for program managers, staff, and stakeholders. It can continually be referred to as a means of avoiding strategy ‘drift’ by connecting the programs that are in place to outcomes and reminding management that if the programs change, so too might the outcomes.

A ToC can be used as a developmental evaluation tool, allowing programs to see what they can do and how different adaptations might fit within the same framework for behaviour change to achieve the same outcomes. Alternatively, it can also be used to call into question whether the outcomes themselves are still appropriate.

The key to making a ToC accessible, easy to read, and easy to understand is to make it visual. Employing someone with graphic design skills to bring the concepts to life in a visual representation can clarify key ideas and get people beyond words. It's easy to get hung up on theoretical language and specific terms when using words alone; where possible, use visuals, narrative, and representations. Metaphors, colour, and texture can bring a ToC to life.

A ToC, when developed appropriately, can provide enormous dividends for strategy, performance, and evaluation, and help all members of an organization (and its supporters and partners) understand what it is all about and how what it does is linked to what it aims to achieve. If done well and appropriately facilitated, a ToC can serve your communications, strategy development, and evaluation plans, particularly for complex programs. It doesn't solve all your problems, but few things will help you understand which problems you're trying to solve, and how you might solve them, more than a good Theory of Change.

If you need help building a Theory of Change, contact us and we can help you develop one and show you how it can support the strategy, innovation, and evaluation needs of your programs and your organization as a whole.

Filed Under: Research + Evaluation, Social Innovation Tagged With: evaluation, program evaluation, social innovation, strategy, theory of change

The Experimentation Myth

2016-06-21 by cense

Innovation Growth Lab: Making innovation and growth policy work from Nesta UK on Vimeo.

The world of social innovation is filled with wonderful things and, like any creative space, has its share of myths. One of these concerns the role of experimentation. One view is that social innovation operates in such a space of complexity that the use of methods such as randomized controlled trials is not only inappropriate but harmful.

The alternative perspective is one held by folks like Geoff Mulgan, the Chief Executive of Nesta UK. According to Mulgan (discussed in the video embedded above):

the only way you figure out what works is by doing experiments using control groups and seeing who benefits from an intervention.

This latter position is also a myth. Like any myth, it makes complexity much easier to handle and accept because it takes away the need to invest energy in determining ‘what is going on’.

Alas, that ‘what’s going on’ is at the crux of understanding impact in the realm of social enterprises, because we humans have a tendency to resist standardization in how we behave. This is not to suggest that there is no place for experiments, because as Nesta UK and others have shown, it can be done. But to suggest that the only way to find out whether something works is experimentation is taking things too far.

To be fair, Mulgan was speaking on matters of public policy, which have enormous consequences, and his comments after the quote above, about understanding potential impact before scaling, are spot-on. But there are different paths to understanding how impact is achieved, and what that impact is, that go beyond experiments.

Disrupting experiments

A true controlled experiment, as the name suggests, requires control. One way to get around some of the constraints imposed by controls is to have large numbers of participants. The problem facing controlled experiments is that we often lack one or both: sufficient control and large numbers of participants. Further, the amount of control needed depends on the amount of social complexity inherent in the problem. For example, imagine comparing two options for renewing a required document like your driver's licence: online or in person. This is a process that might involve a lot of effort (depending on where the nearest motor vehicle registration office is), but it's not complex. This is something that is amenable to experimentation.
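To make the contrast concrete, the licence-renewal case is simple enough that a toy trial can be simulated in a few lines. This is our own sketch; the completion probabilities and group sizes below are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the toy trial is reproducible


def simulate_renewal_trial(n_per_group=500):
    """Toy controlled trial: randomly draw completion outcomes for
    people assigned to renew a licence online vs. in person.
    The 0.90 and 0.75 completion probabilities are invented."""
    online = [1 if random.random() < 0.90 else 0 for _ in range(n_per_group)]
    in_person = [1 if random.random() < 0.75 else 0 for _ in range(n_per_group)]
    return sum(online) / n_per_group, sum(in_person) / n_per_group


online_rate, in_person_rate = simulate_renewal_trial()
```

Because the task is simple and standardized, the difference between arms is easy to detect and interpret; it is precisely this kind of clean setup that complex social programs resist.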

However, a policy or program designed to help individuals manage chronic disease conditions involves enormous complexity, given that each participant's condition will have a combination of shared qualities and unique manifestations, and that those will all be mediated by different social, biological, economic, geographic, and situational variables that, depending on the chronic condition, might play a significant role in how a program is received and what impact it has.

This is a far more challenging task, but one that is worth doing lest we, in the words of systems scholar Russell Ackoff, do "the wrong things, righter" by imposing an experimental design on something that warrants something different. Or, put another way, perhaps we need to redesign the experiment itself to suit the conditions.

Filed Under: Complexity, Research + Evaluation, Social Innovation Tagged With: evaluation, experiments, Nesta, policy evaluation, social innovation, social policy

Why human systems innovation is social innovation

2016-05-17 by cense

Is it social?
An innovation may be more social than you think

Social innovation is described as a specific type of innovation that meets social goals. The Stanford Graduate School of Business defines social innovation this way:

A social innovation is a novel solution to a social problem that is more effective, efficient, sustainable, or just than current solutions. The value created accrues primarily to society rather than to private individuals.

We like this particular definition largely because it brings social justice into the definition alongside an emphasis on social impact. Social innovation is becoming more than a niche as human systems become more entwined through collaborations, partnerships, strategic alliances, and the mass interconnection of people from around the world. In an increasingly globalized world, made possible through transnational trade, global policy, mass human migration, and the digital networks of knowledge and media created through the Internet, both the process and the outcome of innovation are increasingly social.

The reason for this is that there is no longer a standard client or patient or person or customer or…

The diversity in human systems means that it’s increasingly problematic to apply ‘standard models’ to populations. That’s not to say that we can’t make assumptions or that certain generalizations don’t work at all, but they aren’t the same as they once were. What we need to do is design for each condition and setting in which we seek change.

This process of design, when done well, includes the involvement of those who are the beneficiaries of or stakeholders in the innovation. Excluding these relevant stakeholders creates a genuine risk of designing something that misses something critical or, worse, unintentionally exacerbates social problems rather than addressing them in a satisfactory manner. For that reason, our approach to innovation design assumes that nearly any human system intervention is a social innovation on some level.

By approaching a problem context as a social innovation, we bring together not only the design considerations but also the social ethics and values associated with social innovation. It also ensures that innovations are made social and translated beyond their originators into the wider world, increasing exposure and improving access and overall knowledge translation.

Next time you approach an innovation problem ask yourself if what you’re doing is social or not. You might be surprised where it lands you.

And if you need help with that, we’d be happy to help.

Filed Under: Design, Social Innovation Tagged With: design, ethics, innovation development, social innovation, social justice, social values



Copyright © 2021 · Parallax Pro Theme on Genesis Framework · WordPress
