Cense Ltd.

Inspiration, Innovation and Impact


Copy Cat: Learning Through Observation

2021-11-23 by cense

Is there a competitor or colleague that does something you admire? Do you feel a small pang of jealousy or envy at how another firm does what it does? Rather than lament it, embrace it.

We can turn our impressions of others to our benefit by transforming that envy into observation and action. This is a technique called Copy Cat.

Copy Cat is a simple technique that can be done as part of a monthly review and fits in with your regular strategy and sensemaking sessions. It focuses your learning on the aspects of a competitor's or peer's behaviour and activities that you would like to learn from and perhaps copy. Copy Cat is a form of appreciative inquiry: it works by focusing our attention on specific qualities or actions that we can adopt in our own organization and practice.

How to be a Copy Cat

Copy Cat involves systematic attention to, and review of, specific organizations and activities you admire or wish to copy. The technique is based on the psychological concepts of modelling and self-efficacy. Copy Cat begins by identifying the individuals, organizations, or groups we admire or wish to emulate. The focus may be specific people, or it may be behaviours or practices.

After we identify what it is that we wish to model, the next step is to begin observing the person, organization, behaviour, or practice we are interested in. Direct observation and the use of artifacts (e.g., articles, news stories, word of mouth, or marketing materials) can all help. Your data gathering should be systematic, but it does not need to be comprehensive.

The next step is to engage in sensemaking. Sensemaking is a social process that allows us to make meaning of what we find. Bring together all your data, share it with those involved (this can be done independently, but it is far more powerful in a small group), and make it accessible to everyone. Sensemaking helps us to ask questions about what we see and what it might mean for us in our work.

We use Copy Cat to see how others’ actions might apply to our work. Copy Cat provides guidance on what to do and how it can be done.

Applying Copy Cat

When we watch others systematically and attentively using Copy Cat, we begin to consider how what we see might apply to us. This is where our design skills come into play.

We ask the following questions:

  1. What resources are employed in these actions? Do we have them?
  2. What knowledge or skills are required to do these activities?
  3. What circumstances are present in these actions? Did they help or hurt what was done?
  4. What outcomes emerged from these actions and can we tell what they are?
  5. What might this look like if we did those actions? What do we need that we don’t have?
  6. What negatives might emerge from these actions?

Copy Cat allows us to dissect the core components of someone else’s actions and consider how we might apply those lessons to our work.
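
If it helps to make the review concrete, the sketch below turns the six questions into a reusable monthly template. It is purely illustrative: the names and structure are our own invention for this post, and any notebook or shared document that walks through the same questions each month works just as well.

```python
# An illustrative Copy Cat review template (hypothetical names and structure).
from dataclasses import dataclass, field

COPY_CAT_QUESTIONS = [
    "What resources are employed in these actions? Do we have them?",
    "What knowledge or skills are required to do these activities?",
    "What circumstances are present in these actions? Did they help or hurt?",
    "What outcomes emerged from these actions, and can we tell what they are?",
    "What might this look like if we did those actions? What do we need?",
    "What negatives might emerge from these actions?",
]

@dataclass
class CopyCatReview:
    subject: str                  # the person, organization, or practice observed
    observations: list[str]       # notes, artifacts, news stories, etc.
    answers: dict[str, str] = field(default_factory=dict)

    def run(self) -> None:
        """Walk through each question and record an answer."""
        for question in COPY_CAT_QUESTIONS:
            self.answers[question] = input(f"{question}\n> ")

if __name__ == "__main__":
    review = CopyCatReview(
        subject="Competitor X's onboarding emails",  # hypothetical example
        observations=["Signed up for their newsletter",
                      "Saved three example campaigns"],
    )
    review.run()
```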

By taking time to do this (we recommend spending 2-3 hours per month on the activity), some distinct benefits can be revealed:

  1. We sharpen our observation skills
  2. We begin learning more about our market
  3. We open our eyes to new ways to do something and the constraints others operate in
  4. It engages us in reflective practice about what we do and why
  5. It keeps us active in our market
  6. It builds systematic learning and praxis into our organization
  7. We enhance our curiosity and use it to channel energy in our organization

This simple method can have enormous benefits for an organization, helping to build learning, innovation, and engagement among your people and in your market. It requires little in the way of specialized tools and only a small amount of time.

If you want help building this into your learning and innovation practice, let’s talk. This is what we do and we’d love to help you do it, too.

Photo by Jonas Lee on Unsplash and Max Baskakov on Unsplash

Filed Under: Strategy, Toolkit Tagged With: data, data collection, design, education, evaluation, innovation, learning, sensemaking, strategy, toolkit, tools

Developmental Evaluation Trap #2: The Pivot Problem

2018-05-17 by cense

In this third post in our series on Developmental Evaluation traps, we look at the trap of the pivot. You've probably heard someone talk about innovation and 'pivoting', or changing a plan's direction.

The term pivot comes from the Lean Startup methodology and is often found in Agile and other product development systems that rely on short, iterative cycles with built-in data collection for rapid decision-making. A pivot is a change of direction based on feedback: collect the data, see the results, and if the results don't yield what you want, make a change and adapt. That sounds pretty familiar to those looking to innovate, so where's the trap?

The trap is that the decisions made aren't grounded in data, or the decision-making is flawed in how it uses data. In either case, the decision to change direction is more arbitrary than evidence-based.

What do we mean by this?

The data problem

When innovating, it's important to have the kind of data collection system in place to gather the information required to make a useful decision: whether to continue with the process, make a change, or abandon the activity altogether. James Dyson famously trialed his products hundreds of times, with 'tweaks' both large and small, to get to the right product design. A hallmark of this process is the emphasis on collecting the right data at the right time (which they call testing).

Dyson has since expanded its product offerings to include lighting, personal hair products, industrial drying tools, and an array of vacuum models. While the data needs for each product might differ, the implementation of a design-driven strategy that incorporates data throughout the decision-making process remains the same. It’s different data used for the right purpose.

Alas, DE has given organizations cover for making arbitrary decisions in the name of pivoting when they really haven't executed well or given things enough time to determine whether a change of direction is warranted. Here are three things to heed when considering DE data.

  1. Process data. Without a clear indication that a program has been implemented appropriately and that its constraints were accounted for, how do you know whether something 'failed' (or 'succeeded') because of what you did? Understanding what happened and under what conditions, and documenting the implementation behind the innovation, is critical to knowing whether the innovation is really doing what we expect it to (and to capturing the things we might not have expected it to do).
  2. Organizational mindfulness. The biggest challenge might be internal to the organization's heart: its mindset. Organizational mindfulness is about paying attention to the activities, motivations, actions, and intentions of the entire enterprise and being willing (and able) to spot biases, identify blind spots, and recognize that, as much as we say we want to change and innovate, the reality is that change is disruptive for many people and often unconsciously thwarted.
  3. Evaluability assessment. A real challenge with innovation work is knowing whether you've applied the right 'dosage' and given it the right amount of time to work. This means doing your homework, paying attention, and having patience. Homework comes in the form of background research into other, similar innovations, connecting to the wisdom of the innovators themselves (i.e., drawing on experience), and tying it all together. Paying attention ensures you have a plausible means of connecting intervention to effect (or product to outcome). This is like Kenny Rogers' 'The Gambler':

you got to know when to hold ’em

know when to fold ’em

know when to walk away

know when to run

An evaluability assessment can help spot the problems with your innovation data early by determining whether your program is ready to be evaluated in the first place and what methods might be best suited to determining its value.

The decision-making problem

Sometimes you have good data, but do you have the decision-making capabilities to act on it? With innovation, data rarely tells a straightforward story: it requires sensemaking. Sensemaking requires time and a socialization of the content to determine the value and meaning of data within the context in which it is used.

Decision-making can be impeded by a few things:

  1. Time. Straight up: if you don't give this time and focus, no amount of good data, visualizations, summaries, or quotes will help you. You need time to reflect substantially on what you're doing and to discuss it with those for whom it matters.
  2. Talent. Diverse perspectives around the table are an important part of sensemaking, but so is some expertise in the process of making and implementing decisions, particularly in design. An outside consultant can help you work with your data to see possibilities and navigate blind spots, and can support your team in making important, sometimes difficult, decisions.
  3. Will. You can give time and have talent, but are you willing to make the most of both? For the reasons raised above about being mindful of your intentions and biases, having the right people in place will not help if you're unwilling to change what you do, follow when led, and lead when asked.

Developmental evaluation is powerful and useful, but it is not often easy (although it can be enormously worthwhile). Like most of the important things in life, you get out what you put in. Put a little energy into DE, be mindful of the traps, and you can make this approach to evaluation your key to innovation success.

Want to learn more about how to do DE or how to bring it to your innovation efforts? Contact us and we’d be happy to help.

Filed Under: Research + Evaluation, Social Innovation Tagged With: data, data collection, decision-making, developmental evaluation, evaluation, innovation, social innovation, strategy

Dashboards for innovation evaluation

2017-09-11 by cense

A dashboard system is a way of tracking project activities across a range of different lines of inquiry. This is of particular benefit to those interested in developmental evaluation. Developmental evaluation (DE) is an approach to evaluating programs with an innovation focus and can be used as a means of generating data to support strategy development and deployment. That’s how we use DE.

Developmental evaluations often have multiple components, including a variety of methods, data sources, tasks and duties, and sensemaking requirements. All of this information can make it difficult for the client to gain a sense of what is happening in a DE, and can even confuse the evaluator. One way to manage this is to create something we refer to as a dashboard system.

Dashboards: Making strategy and evaluations visible

A dashboard is a means of showcasing multiple information streams in a simple, accessible manner that is easily understood. It might include notes from a meeting, data collection summaries, schedules of activities, and reflective notes. It is not a wholesale repository of all data but rather a higher-level summary with enough detail to convey what activities are happening, when, and to what degree.

Among our favourite tools is Trello. Trello is organized into a series of ‘boards’ that allow you to organize content based on a topic or theme and create a simultaneous ‘stream’ of content within each board.

For example, a developmental evaluation might involve something like the following boards:

  1. Interviews with key stakeholder groups
  2. Observations
  3. Document reviews
  4. Sensemaking and strategy notes and outcomes
  5. Meeting minutes
  6. Survey or other data collection results
  7. Reflective notes
  8. Data collection plan

These boards would include any project component that has a distinct, coherent set of activities associated with it. It could include methods, data components, meetings and activities, populations or key document groups.
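
If your team prefers to script this setup rather than click through the Trello interface, a sketch like the one below could create the boards listed above through Trello's public REST API. This is illustrative only: the API key and token are placeholders, and the board names and error handling should be adapted to your own project.

```python
# A minimal sketch of creating DE dashboard boards via Trello's REST API.
# The key/token values are placeholders; obtain real credentials from Trello.
import requests

API_KEY = "your-trello-api-key"
API_TOKEN = "your-trello-token"

DE_BOARDS = [
    "Interviews with key stakeholder groups",
    "Observations",
    "Document reviews",
    "Sensemaking and strategy notes and outcomes",
    "Meeting minutes",
    "Survey or other data collection results",
    "Reflective notes",
    "Data collection plan",
]

def create_board(name: str) -> str:
    """Create a Trello board and return its id."""
    resp = requests.post(
        "https://api.trello.com/1/boards/",
        params={
            "name": name,
            "defaultLists": "false",  # skip the default To Do/Doing/Done lists
            "key": API_KEY,
            "token": API_TOKEN,
        },
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    for board_name in DE_BOARDS:
        print(f"Created '{board_name}' ({create_board(board_name)})")
```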

Tools for thinking and seeing

Trello is just a tool. What we've illustrated above could be done as channels within Slack, as a series of sections within an MS Word or Google Doc, or even in an Excel spreadsheet. Anything that allows someone to visually track details about discrete areas within a larger project will do.

(Although tools like MS Word or Excel will work, they are less than ideal: they require strict version control to track changes, which can be difficult to manage, particularly when multiple people contribute to the document on a regular basis. The reason one might choose MS Word or Excel is less about functionality and more that it is what people are used to.)

What makes a DE challenging is that a lot of data is generated from many different sources within a complex context. This makes information management difficult, which in turn poses problems for generating helpful insights from the data and for identifying emergent patterns.

By making data visible, a dashboard allows multiple eyes and multiple perspectives to 'see' the data and thus expands the chances of spotting possibilities and emergent conditions. A dashboard also increases accountability for all parties. For clients, it provides regular insight into what the evaluator has access to and avoids surprises. For evaluators, it ensures that clients have the means to engage in the process rather than sitting back waiting for insights to be 'served up' to them. DE is best done as a collaboration between the client and the evaluator, and a dashboard can help with this.

Lastly, a dashboard system provides a means of organizing content, keeping track of what is collected, identifying what is needed, and quickly exploring what is known and unknown in a project.

Cense employs the dashboard approach on its projects whenever possible. If this approach is of interest to your organization and you want a means of developing strategy and evaluations in a more engaging, transparent manner, connect with us and we can show you how it works in practice.

Filed Under: Research + Evaluation, Toolkit Tagged With: dashboard, data, data collection, developmental evaluation, evaluation, innovation, tools, Trello

The Survey Engagement Dividend

2016-10-03 by cense

Engaging with your products, people and places

Wouldn’t you love to know what your customers, clients, constituents, or colleagues think about an issue of importance, so that you can use their perspective to inform the design of your services or validate the choices you’ve already made?

Those looking for this kind of data are probably thinking of using a survey of some sort. You know the survey: it’s something you’ve probably been invited to complete at least once in the past week (maybe in the past hour), and that is part of the problem.

Use an app more than a few times and you’ll likely get a question about whether you like it and whether you will rate it on the iTunes Store or Google Play. Rent a car, stay at a hotel, take a plane, train, or bus trip, buy something online, visit an attraction, or even visit a health centre, and there is a very good chance that, if someone connected to that transaction has your contact information, you will be asked for your opinion and to rate the experience. Polling, the use of surveys to determine voter preferences or public opinion on a variety of issues, is used to illustrate the ‘mood of the public’ and guide political decision-making.

In a commentary in the New York Times, professor of public policy Cliff Zukin describes the situation as dire for those seeking to make political decisions based on current polling data.

Evaluation research continues to point to an overall decline in response rates over time, although recent research from Japan suggests that valid data can still be obtained from small response rates, even if this remains sub-optimal.

In short: it’s getting harder to get responses to a survey, and that casts doubt on the validity of the data being generated.
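
To see why a shrinking response pool matters, consider the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n): as the achieved sample n falls, the margin of error widens. The short sketch below illustrates this with hypothetical numbers; note that the formula says nothing about non-response bias, which low response rates also amplify.

```python
# Illustrative only: how a shrinking response rate widens a survey's
# margin of error (95% confidence, worst-case p = 0.5). The invitation
# count and response rates below are hypothetical.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

invited = 2000  # hypothetical number of survey invitations
for response_rate in (0.50, 0.20, 0.05):
    n = int(invited * response_rate)
    print(f"{response_rate:.0%} response rate -> n={n}, "
          f"margin of error ±{margin_of_error(n):.1%}")
```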

Going from passive surveys to active engagement

Part of the problem is the sheer volume of surveys being deployed in the world. There was a time when a survey was a unique thing to encounter, something done through the census or perhaps through a decision to participate in a research study at a university or hospital. Research was also done by trained professionals for professional purposes that were made obvious to the respondent (e.g., contributing to scientific knowledge). Now surveys serve a panoply of purposes, from guiding product marketing to quality improvement to scientific research, and are often deployed with no obvious destination by people who know little about the scientific basis of survey methods.

Easy-to-deploy survey tools such as Google Forms, or polls run through Facebook or even Twitter, allow anyone to develop and distribute a survey to a nearly unlimited number of people.

But just because you can survey doesn’t mean you should, or that it is worthwhile. Here are some tips to increase the utility of your surveys, get better data and better response rates, and do more for your organization at the same time:

  1. Engage your community/clients/potential respondents early. We worked with the folks at Eat Right Ontario to see how to improve the feedback process on new health promotion materials using surveys, and we found that the organization’s Facebook group was a place where people had already self-selected as interested in the subject matter. In a study published earlier this year, we demonstrated that we could get data of similar quality, quantity, and utility by drawing on this engaged population as we could using a personalized, face-to-face approach (often the gold standard for recruitment).
  2. Make it simple and to the point. Ask questions that are clear and direct, and make the survey obviously relevant to the participants. If you are asking questions about some underlying hypothesis that isn’t obvious to participants, they will see less relevance in the survey and be less likely to complete it. You have a limited amount of your respondents’ attention; use it wisely.
  3. Ask only what you can use. Related to the previous point, there is a tendency to think that because you’re doing a survey you need to ask certain questions, such as demographics. However, if you have no perceived need for the data, don’t ask. Sure, you might find a reason down the road, but if you can’t think of one now, it’s less likely you’ll need the information. For example, we designed a survey for a client looking to get feedback on their network and advised against collecting any demographic data: there was no obvious need for it, and no matter what the responses were, the client was not in a position to use them. Asking things like age, gender, or location didn’t matter and would only put people off, making data that was meant to be impersonal and anonymous feel more personal and more risky.
  4. Tell them the story of the data when you’re finished. Share what you learn so that your participants can learn with you. Giving back the data invites engagement, creates transparency, and fosters accountability. One of the main reasons people disengage from surveys is that they believe the data isn’t going to be used. By showing not only the data but also what you learned from it and did with it, you build the trust that encourages participants to engage with you on future projects.

These four tips can help you get people interested in your survey and increase the likelihood that they will respond to the questions you ask. After all, that is the reason you want to do a survey in the first place.

If you need support developing data collection strategies for your organization, contact us and we’d be happy to discuss survey development or many other methods that can help you get the right information at the right time.

Filed Under: Research + Evaluation Tagged With: data collection, Eat Right Ontario, opinion, polling, survey, survey methods
