Strategy and Prioritisation

In the last few posts, I talked about how designers and strategists can facilitate the discussions that lead to making good decisions and creating choices.

This post will discuss how prioritization can facilitate good decisions by helping us focus on what is essential, steering us closer to our vision and goals.

TL;DR

  • What slows progress and wastes the most time on projects is confusion about the goals or which things should come before which other things.
  • Leaders understand that Activity is not necessarily Accomplishment.
  • Companies that scale are the ones who choose to do less stuff.
  • We don’t protect our hours from being stolen. We allow thieves to steal time from us, day after day.
  • Priorities Make Things Happen.
  • Any prioritization method should help facilitate making hard choices, taking into account not just feasibility but also viability and desirability.
  • What is your prioritization policy, and how is it visualized? How does each and every item of work that has been prioritized help get us closer to our vision and achieve our goals?
  • Having clarity around prioritization policy is good; even better is to have clear goals and principles!
  • If discussions around priorities are getting the team stuck (or hurting team morale), step up and help!

Good Decisions and Priorities

What slows progress and wastes the most time on projects is confusion about what the goals are or which things should come before which other things. Many miscommunications and missteps happen because person A assumed one priority (make it faster), and person B assumed another (make it more stable). This is true for programmers, testers, marketers, and entire teams of people. If these conflicts can be avoided, more time can be spent actually progressing toward the project goals (Berkun, S., Making things happen: Mastering project management, 2008).

Prioritization versus “Being Busy” Addiction

Unfortunately, this sense of priorities might not always be clear to teams, either because leaders have not defined priorities or because they have not communicated them clearly.

It might sound a little counterintuitive, but the companies that scale are the ones that do fewer things. It works because they can do those few things really well. By comparison, the companies that get stuck often get stuck because they keep trying to do too many things. You can’t do too many things well (Azzarello, P., “Too Busy to Scale” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).

Companies that scale are the ones who choose to do less stuff.

Azzarello, P., “Too Busy to Scale” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls (2017)

It’s so tempting to stay busy because — first of all — it’s scary to say no, and secondly, being really busy can make people feel heroic and important. Much of the busy stuff can be related directly to bringing in revenue. What could be more important than that? How can that be wrong? It’s wrong because it is stalling you (Azzarello, P., “Too Busy to Scale” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).

A few reasons why not every leader practices prioritizing (Maxwell, J. C., The 21 irrefutable laws of leadership: Follow them and people will follow you, 2007):

  • When we are busy, we naturally believe that we are achieving. Activity is not necessarily accomplishment.
  • Prioritizing requires leaders to continually think ahead, to know what is important, to know what’s next, to see how everything relates to the overall vision.
  • Prioritizing causes us to do things that are at the least uncomfortable and sometimes downright painful.

Leaders understand that Activity is not necessarily Accomplishment

“Law of Priorities” in The 21 irrefutable laws of leadership: Follow them and people will follow you, Maxwell, J. C. (2007).

Prioritization versus the Time Thieves

When short-term pressures chronically prevent you from doing more strategic stuff, you end up burning all your time and resources reacting to issues and opportunities in an ad hoc manner instead of making progress on strategic work that will let you scale (Azzarello, P., “Too Busy to Scale” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).

We grumble that there just aren’t enough hours in the day and that someone else seems to have a lot of free time. But we regular mortals only have twenty-four hours in a day. The problem is that we don’t protect our hours from being stolen. We allow thieves to steal time from us, day after day. Who are these thieves of time? The five thieves of time that prevent you from getting work done are (DeGrandis, D., Making work visible: Exposing time theft to optimize workflow, 2017):

  1. Too Much Work-in-Progress (WIP) is work that has started but is not yet finished. Sometimes referred to as partially completed work.
  2. Unknown Dependencies are things you weren’t aware of that need to happen before you can finish.
  3. Unplanned Work is interruptions that prevent you from finishing something or from stopping at a better breaking point.
  4. Conflicting Priorities are projects and tasks competing for people and resources, which block flow and increase partially completed work.
  5. Neglected Work is partially completed work that sits idle on the bench.
WIP Flow Chart
In this 7-minute video, John Cutler explains why limiting work in progress can increase flow and value in product development.

Priorities Make Things Happen

As a design manager, I’ve always found that — while defining and shaping the Product Design vision to ensure cohesive product narratives through sound strategy and design principles — the way priorities are defined can potentially create a disconnect from vision, especially when tough choices around scope need to be made. It’s important that we facilitate discussions around priorities so the hard choices that need to be made take into account not just feasibility but also viability and desirability.

Product Definition and Requirements Prioritization
Visualising the user-experience impact of any given use case based on its opportunity score helps the product decision-making process by providing a better sense of priorities.

The goal of prioritization is to determine what to complete next in order to get maximum value in the shortest amount of time and to avoid multi-tasking due to competing priorities (DeGrandis, D., Making work visible: Exposing time theft to optimize workflow, 2017).

Priorities Make Things Happen

Berkun, S., Making things happen: Mastering project management (2008)

It’s essential to set priorities and remove distractions so that people can get on with providing service to customers, thus increasing profits and the value of the business (Kourdi, J., Business Strategy: A guide to effective decision-making, 2015).

While priorities can make things happen, we need to make sure that we prioritize things that create value.

The build trap is when organizations become stuck measuring their success by outputs rather than outcomes. It’s when they focus more on shipping and developing features rather than on the actual value of those things.

Perri, M., Escaping the build trap (2019)

There are a few things you should ask yourself and/or the team when you keep revisiting and renegotiating the scope of work (DeGrandis, D., Making work visible: Exposing time theft to optimize workflow, 2017):

  • What is your prioritization policy, and how is it visualized? How does each and every item of work that has been prioritized help get us closer to our vision and achieve our goals?
  • How will you signal when work has been prioritized and is ready to be worked on? In other words — where is your line of commitment? How do people know which work to pull?
  • How will we visually distinguish between higher-priority and lower-priority work?

If you have priorities in place, you can always ask questions in any discussion that reframe the argument around a more useful primary consideration. This refreshes everyone’s sense of success, visibly dividing the universe into two piles: things that are important, and things that are nice but not important. Here are some sample questions (Berkun, S., Making things happen: Mastering project management, 2008):

  • What problem are we trying to solve?
  • If there are multiple problems, which one is most important?
  • How does this problem relate to or impact our goals?
  • What is the simplest way to fix this that will allow us to meet our goals?
Learn more about problem framing techniques that can help you get team alignment by creating clarity about what problems they are trying to solve in Problem Framing for Strategic Design (Photo by Ann H on Pexels.com)

Priorities and Enabling Constraints

When thinking of change efforts, it helps to focus on the collective behaviors of a system and on how the “constraints” of that system inform and shape that behavior. Constraints shape a system by modifying its phase space (its range of possible actions) or the probability distribution (the likelihood) of events and movements within that space. Because constraints are both key actors and key indicators of a system, constraint mapping can be a highly productive first step in considering how to intervene (Juarrero, A., Dynamics in Action: Intentional Behavior as a Complex System, 1999).

Enabling constraints force alignment of the agents which leads to resonance and this creates a higher order system. The higher order system provides feedback to the agents which constrains their behavior and stabilizes the higher order system

Matts, C., Constraints that enable (2018)

Designing effective enabling constraints is an art. Many things feel intuitively correct but have potentially harmful consequences. For example (Cutler, J. Making things better with enabling constraints, 2022):

  1. In an effort to increase certainty about plans and commitments, the team undertakes a comprehensive annual planning effort. This feels good on the surface, but it forces premature convergence, encourages over-utilization of shared resources, and encourages big, inflexible projects.
  2. In an effort to centralize communication, the team adopts a single tool for documentation (a theoretically enabling constraint). This feels good on the surface—having documentation everywhere is painful—but since a large percentage of communication with external teams happens outside the central tool, you end up with a two- or three-tiered (or more) system of communication (e.g., executive communication happens in slides, not in the tool).
Mapping of constraints type to domains with four quadrants: in the Upper Left, "Exaptive Practices" for enabling constraints; in the Upper Right, "Good Practices" for governing constraints; in the Lower Left, "Chaotic" for no effective constraints; in the Lower Right, "Clear" for Fixed Constraints (picture: cynefin.io)
“Constraints and domains” (Cynefin, Constraints, 2022)

No enabling constraint is guaranteed to work, but some are better than others. What should someone designing an enabling constraint look out for? (Cutler, J. Making things better with enabling constraints, 2022):

  1. It is easy to know if you are doing it or not. For example, asking everyone to use a single document repository is a bit vague. People WILL need to use other systems to document things. Do those count? What goes in it? What doesn’t? An alternative might be to run an experiment where the team commits to putting ONE document type in the centralized repository or tool. Put another way, it is within reach and achievable.
  2. It has an expiration date and is treated as an experiment. The best enabling constraints are treated as an experiment. The team commits to giving it an honest try for a period of time. The team is promised an opportunity to weigh in on the experiment before agreeing to extend it.
  3. It helps people go through the motions. If you have a future state in mind, it helps to help people go through the motions and try things out safely.
  4. The world doesn’t end if it “fails”. Sometimes things don’t go as planned. That’s normal. The best enabling constraints fail gracefully. They are safe-to-fail probes.
  5. Fast feedback potential. The best enabling constraints will provide fast feedback. Experiments that last forever, with no sense if they are helping/hurting, are dangerous (or at a minimum draining, and encourage people to just work around them).
Learn more about enabling constraints and some other Project Management skills that will prove invaluable for the effectiveness of design strategists (Photo by Startup Stock Photos on Pexels.com)

Prioritisation Methods

As I mentioned above, any prioritisation method is — in my opinion — only as good as its ability to facilitate discussions around priorities, so that the hard choices that need to be made take into account not just feasibility, but also viability and desirability.

“While in the past designers would concentrate on enhancing desirability, the emerging strategic role of designers means they have to balance desirability, feasibility and viability simultaneously. Designers need to expand their profiles and master a whole new set of strategic practices.”

“Strategic Designers: Capital T-shaped professionals” in Strategic Design (Calabretta et al., 2016)

Many companies try to deal with complexity with analytical firepower and sophisticated mathematics. That is unfortunate, since the most essential elements of creating a hypothesis can typically be communicated through simple pencil-and-paper sketches (Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge, 2010).

The key to dealing with complexity is to focus on having good conversations about assumptions.

Break Down the Hypothesis in The other side of innovation: Solving the execution challenge, Govindarajan, V., & Trimble, C., (2010)

To understand the risk and uncertainty of your idea you need to ask: “What are all the things that need to be true for this idea to work?” This will allow you to identify all four types of hypotheses underlying a business idea: desirability, feasibility, viability, and adaptability (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020):

  • Desirability: Does the market want this idea?
  • Feasibility: Can we deliver at scale?
  • Viability: Is the idea profitable enough?
  • Adaptability: Can the idea survive and adapt in a changing environment?
Testing Business Ideas thoroughly, regardless of how great they may seem in theory, is a way to mitigate risks of your viability hypothesis being wrong (Photo by RF._.studio on Pexels.com)

With that in mind, you’ll probably notice that all the methods I’ll recommend involve some degree of facilitation through visual thinking.

Prioritization of Value and Desirability

From a user-centered perspective, the most crucial pivot that needs to happen in the conversation between designers and business stakeholders is the framing of value:

  • Business value
  • User value
  • Value to designers (sense of self-realization? Did I positively impact someone’s life?)
Learn about ways to objectively measure the value of design in The Need for Quantifying and Qualifying Strategy (Photo by Pixabay on Pexels.com)

So how do you facilitate discussions that help teams clearly see value from different angles?

Outcome-driven Innovation (ODI)

Outcome-Driven Innovation (ODI) is a strategy and innovation process built around the theory that people buy products and services to get jobs done. It links a company’s value creation activities to customer-defined metrics. Ulwick found that previous innovation practices were ineffective because they were incomplete, overlapping, or unnecessary.

Outcome-Driven Innovation® (ODI) is a strategy and innovation process that enables a company to create and market winning product and service offerings with a success rate that is 5-times the industry average

Ulwick, A.,  What customers want: Using outcome-driven innovation to create breakthrough products and services (2005)

Clayton Christensen credits Ulwick and Richard Pedi of Gage Foods with the way of thinking about market structure used in the chapter “What Products Will Customers Want to Buy?” in his Innovator’s Solution and called “jobs to be done” or “outcomes that customers are seeking”.

UX Matrix: OPPORTUNITY SCORES

Ulwick’s “opportunity algorithm” measures and ranks innovation opportunities. Standard gap analysis looks at the simple difference between importance and satisfaction metrics; Ulwick’s formula gives twice as much weight to importance as to satisfaction, where importance and satisfaction are the proportion of high survey responses.

You’re probably asking yourself, “Where do these values come from?” That’s where User Research comes in handy: once you’ve got the List of Use Cases, you go back to your users and probe how important each use case is, and how satisfied they are with the product with regard to each use case.

Once you’ve obtained the opportunity scores for each use case, what comes next? There are two complementary pieces of information that the scores reveal: where the market is underserved and where it is overserved. We can use this information to make some important targeting and resource-related decisions.

Opportunity Scores: GRAPH
Plotting the Jobs-to-be-Done in order to map where the market is underserved and where it is overserved (Ulwick, A., What customers want: Using outcome-driven innovation to create breakthrough products and services, 2005)

Almost as important as knowing where the market is underserved is knowing where it is overserved. Jobs and outcomes that are unimportant or already satisfied represent little opportunity for improvement and consequently should not receive any resource allocation. In most markets, it is not uncommon to find a number of outcomes that are overserved, and companies that nevertheless continue to allocate development resources to them (Ulwick, A. W., What customers want, 2005).
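
To make the arithmetic concrete, here is a minimal sketch of the opportunity score in Python. It assumes importance and satisfaction are on a 0–10 scale (e.g., the share of respondents giving top ratings), and that the score is importance plus the amount by which importance exceeds satisfaction, never going negative; the use cases and numbers are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    importance: float    # 0-10, how important respondents rate the outcome
    satisfaction: float  # 0-10, how satisfied they are with current solutions


def opportunity_score(uc: UseCase) -> float:
    # Opportunity = Importance + max(Importance - Satisfaction, 0)
    # Importance effectively counts twice; satisfaction can only lower the score.
    return uc.importance + max(uc.importance - uc.satisfaction, 0.0)


# Hypothetical survey results (illustrative only)
use_cases = [
    UseCase("Import survey data", importance=8.2, satisfaction=3.1),
    UseCase("Export report as PDF", importance=6.5, satisfaction=7.0),  # likely overserved
    UseCase("Annotate drawings", importance=7.4, satisfaction=4.8),
]

for uc in sorted(use_cases, key=opportunity_score, reverse=True):
    print(f"{uc.name:22s} opportunity = {opportunity_score(uc):.1f}")
```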

Learn how Jobs to be Done (JTBD) work as a great “exchange” currency to facilitate strategy discussions around value between designers, business stakeholders and technology people (Photo by Blue Bird on Pexels.com)

Kano Model

The Kano Model, developed by Dr. Noriaki Kano, is a way of classifying customer expectations into three categories: expected needs, normal needs, exciting needs. This hierarchy can be used to help with our prioritization efforts by clearly identifying the value of solutions to the needs in each category (“Kano Model” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017):

  • The customer’s expected needs are roughly equivalent to the critical path: if those needs are not met, they become dissatisfiers.
  • If you meet the expected needs, customers will start articulating normal needs, or satisfiers — things they don’t normally need in the product but will satisfy them.
  • When normal needs are largely met, then exciting needs (delighters or wows) go beyond the customers’ expectations.
classifying customer expectations into three categories: expected needs, normal needs, exciting needs.
“X axis: Investment; Y axis: Satisfaction” in Kano Model Analysis in Product Design
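
To make this hierarchy actionable in a backlog discussion, one simple move is to sort needs so that expected needs come before normal needs, and normal before exciting. Below is a minimal sketch of that ordering; the feature names and their classifications are invented, and this is not the full Kano question-pair analysis.

```python
# Priority order of the three categories described above:
# expected needs first, then normal needs, then exciting needs.
KANO_ORDER = {"expected": 0, "normal": 1, "exciting": 2}

# Hypothetical, already-classified needs (illustrative only)
needs = [
    ("Undo/redo", "expected"),
    ("Dark mode", "exciting"),
    ("Keyboard shortcuts", "normal"),
    ("Autosave", "expected"),
]

for name, category in sorted(needs, key=lambda n: KANO_ORDER[n[1]]):
    print(f"{category:9s} {name}")
```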

The Kano methodology was initially adopted by operations researchers, who added statistical rigor to the question pair results analysis. Product managers have leveraged aspects of the Kano approach in Quality Function Deployment (QFD). More recently, this methodology has been used by Agile teams and in market research (Moorman, J., “Leveraging the Kano Model for Optimal Results” in UX Magazine, 2012).

Jan Moorman: Measuring User Delight using the Kano Methodology
Learn more about the Kano Method from Measuring User Delight using the Kano Methodology (Moorman, J., 2012)

Check Also

Value Opportunity Analysis (VOA) maps the extent to which a product or service’s aspirational qualities connect with an audience (Hanington, B., & Martin, B., Universal methods of design, 2012).

Desirability Testing gauges first-impression emotional responses to products and services, exploring the affective responses that different designs elicit from people based on first impressions. Using index cards with positive, neutral and negative adjectives written on them, participants pick those that describe how they feel about a design or a prototype (Hanington, B., & Martin, B., Universal methods of design, 2012).

Buy a Feature / $100 Test has participants assign relative value to a list of items by spending an imaginary amount of currency (e.g., 100 US dollars) together. Using cash, the exercise captures more attention and keeps participants more engaged than an arbitrary point or ranking system (Gray, D., Brown, S., & Macanufo, J., Gamestorming, 2010).
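
Tallying the results of such an exercise is trivial to automate. Here is a minimal sketch, assuming each participant’s spending sums to the agreed budget; the features and amounts are invented.

```python
from collections import Counter

BUDGET = 100  # imaginary dollars per participant

# Hypothetical allocations from three participants (illustrative only)
allocations = [
    {"Offline mode": 50, "Bulk edit": 30, "Better search": 20},
    {"Better search": 60, "Offline mode": 40},
    {"Bulk edit": 70, "Offline mode": 20, "Better search": 10},
]

totals = Counter()
for person in allocations:
    assert sum(person.values()) == BUDGET, "each participant spends the full budget"
    totals.update(person)  # add this participant's spending to the running totals

for feature, spent in totals.most_common():
    print(f"{feature:15s} ${spent}")
```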

Prioritisation Grids

These grids are visualization exercises that help the team answer questions like: What’s actually worth our time and effort? What’s worth the organization’s investment in the project?

Importance versus Feasibility

We answer these questions by determining the tradeoffs between the product’s importance and its feasibility/viability (Natoli, J., Think first, 2015).

Prioritisation Grid
IBM Enterprise Design Thinking, “Decide your next move by focusing on the intersection of importance and feasibility” in Prioritisation Grid

Furthermore, we can adapt the axes of these prioritization grids to suit the discussion at hand (value to business and time to market; number of customers impacted and speed of adoption; importance and urgency; etc.), as long as all the stakeholders involved agree on which criteria are most beneficial to the decision being discussed and there is enough expertise and data available to the team doing the prioritization exercise.
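
If you want to pre-sort items before a workshop, the same grid logic can be sketched in a few lines. The cut-off value, item names, and scores below are assumptions for illustration; in a session the scoring should come from the stakeholders in the room.

```python
def quadrant(importance: float, feasibility: float, cutoff: float = 5.0) -> str:
    """Bucket an item on a 0-10 importance x feasibility grid."""
    if importance >= cutoff and feasibility >= cutoff:
        return "Do first (important and feasible)"
    if importance >= cutoff:
        return "Needs investment (important, hard today)"
    if feasibility >= cutoff:
        return "Quick win or distraction? (easy, less important)"
    return "Reconsider (neither important nor feasible)"


# Hypothetical workshop scores (illustrative only): (importance, feasibility)
items = {"Single sign-on": (9, 4), "CSV export": (6, 8), "Animated onboarding": (3, 7)}

for name, (imp, feas) in items.items():
    print(f"{name:20s} -> {quadrant(imp, feas)}")
```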

Hypothesis Prioritisation Canvas

If you only have one hypothesis to test, it’s clear where to spend the time you have for discovery work. If you have many hypotheses, how do you decide where your precious discovery hours should be spent? Which hypotheses should be tested? Which ones should be de-prioritised or just thrown away? To help answer this question, Jeff Gothelf put together the Hypothesis Prioritisation Canvas (Gothelf, J., The hypothesis prioritization canvas, 2019):

Hypothesis Prioritisation Canvas
The hypothesis prioritization canvas helps facilitate an objective conversation with your team and stakeholders to determine which hypotheses will get your attention and which won’t (Gothelf, J., 2019)

The Eisenhower Matrix

Also referred to as the Urgent-Important Matrix, the Eisenhower Matrix helps you decide on and prioritise tasks by urgency and importance, sorting out the less urgent and less important tasks, which you should either delegate or not do at all (Krogerus, M., & Tschappeler, R., “The Eisenhower Matrix” in The decision book: Fifty models for strategic thinking, 2018).

“The Eisenhower Matrix” in The decision book: Fifty models for strategic thinking Krogerus, M., & Tschappeler, R. (2018)
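
The quadrant logic is small enough to express directly. Below is a minimal sketch with invented tasks; the mapping of quadrants to “do / schedule / delegate / drop” follows the common reading of the matrix.

```python
def eisenhower(urgent: bool, important: bool) -> str:
    # Standard Urgent-Important quadrants
    if important and urgent:
        return "Do it now"
    if important and not urgent:
        return "Schedule it"
    if urgent and not important:
        return "Delegate it"
    return "Don't do it"


# Hypothetical tasks (illustrative only): (name, urgent, important)
tasks = [
    ("Production outage", True, True),
    ("Quarterly strategy review", False, True),
    ("Status report formatting", True, False),
    ("Inbox zero", False, False),
]

for name, urgent, important in tasks:
    print(f"{name:28s} -> {eisenhower(urgent, important)}")
```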

What Would You Bet?

I’ve picked this one up from Jeff Patton. As the name suggests, the method starts with the question: What would you bet that your hypothesis is correct? (Patton, J., User Story Mapping: Discover the whole story, build the right product, 2014).

Jeff Patton's What Would You Bet
Here is my adaptation of Jeff Patton’s “What would you bet?” (Patton, J.,  User Story Mapping: Discover the whole story, build the right product, 2014).

According to Jeff Patton, What Would You Bet is based on Hias’s simple drawing of how Jeff Gothelf “bastardized” Giff Constable’s truth curve.

Check Also

The Innovation Ambition Matrix considers the product’s newness on the horizontal axis and the newness of the market on the vertical axis. This allows us to distinguish three different innovation types: core, adjacent, and disruptive (Pichler, R., Strategize, 2016).

The Impact & Effort Matrix maps possible actions on two factors: the effort required to implement and the potential impact. Some ideas are costly but may have a bigger long-term payoff than short-term actions. Categorizing ideas along these lines is a helpful technique in decision-making, as it obliges contributors to balance and evaluate suggested actions before committing to them (Gray, D., Brown, S., & Macanufo, J., Gamestorming, 2010).

Design Criteria Canvas (a.k.a. MoSCoW)

Whether you’re designing a new Value Proposition, Business Model, or even an entire strategy for the future, design criteria form the principles and benchmarks of the change you’re after. Design criteria incorporate information from your business, vision, customer research, cultural and economic context, and mindset that you have formed along the way (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., Design a better business: New tools, skills, and mindset for strategy and innovation, 2016):

Design Criteria Canvas
Design Criteria Canvas from Design a better business: New tools, skills, and mindset for strategy and innovation. Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016).

Also known as MoSCoW, the plain English meaning of the prioritization categories has value in getting customers to better understand the impact of setting a priority, compared to alternatives like High, Medium, and Low.

What I also like about discussing design in terms of principles is that it forces the team to look at decisions with the bigger picture in mind. For whatever aspect of a design you’re critiquing, you can ask of them, “Does this help us reach our goal of …?” or “Does this adhere to the principle of … that we set?”, followed by “How?” and “Why?” (Connor, A., & Irizarry, A., Discussing Design, 2015):

  • Goals are the desired, measurable outcomes resulting from a used product. The team should feel that the goals set forth are achievable and meaningful and should correlate to a change in user behavior.
  • Principles are the qualities and characteristics the product will exhibit in its content, behavior, and so on as people use and interact with it. Good principles should be somewhat specific. Characteristics like “fun” or “amusing” don’t make good principles because they are still pretty broad, and each team member might have a different interpretation of what “fun” is.

Goals and Principles describe where you’re trying to go with the design; they outline the future you’re trying to create and ways in which you want to create it.

Connor, A., & Irizarry, A., Discussing Design (2015)

Having a clear prioritization policy is good; having clear goals and principles is better!

Alignment Diagrams

Jim Kalbach uses the term alignment diagram to refer to any map, diagram, or visualization that reveals both sides of value creation in a single overview. They are a category of diagram that illustrates the interaction between people and organizations (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).

Such diagrams are not new and are already used in practice. Thus, his definition of alignment diagram is less a proposition for a specific technique than a recognition of how existing approaches can be seen in a new, constructive way.

You may have already used them: service blueprints, customer journey maps, experience maps, and mental model diagrams are widespread examples.

Customer Journey Maps

Customer Journey Maps are visual thinking artifacts that help you get insight into, track, and discuss how a customer experiences a problem you are trying to solve. How does this problem or opportunity show up in their lives? How do they experience it? How do they interact with you? (Lewrick, M., Link, P., & Leifer, L., The design thinking playbook. 2018)

Customer Journey Canvas
Example of a Customer Journey Canvas in Take a Walk Through Your Company’s Customer Journey

Experience Maps look at a broader context of human behavior. They reverse the relationship and show how the organization fits into a person’s life (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).

Experience Maps are good visualisation tools that help bring together the user and system perspectives, facilitating prioritisation discussions.
Onboarding Experience Map of MURAL

User Story Maps

User story mapping is a visual exercise that helps product managers and their development teams define the work that will create the most delightful user experience. User Story Mapping allows teams to create a dynamic outline of a set of representative user’s interactions with the product, evaluate which steps have the most benefit for the user, and prioritise what should be built next (Patton, J.,  User Story Mapping: Discover the whole story, build the right product, 2014).

User story mapping is an agile software development method which supports the project team in bringing requirements in the form of user stories in a clear structure and creates a realistic release plan. The method can be applied within the team or collaboratively with the customer (D-Labs, User Story Mapping, 2022)

Jeff Patton is one of the few people who has been able to translate Agile into a User Centric practice. User Story Mapping is probably my favorite visualization tool to create shared understanding around product, users, and context, and it helps with prioritization discussions.

Jeff Patton
Watch “Owning Agile” by Jeff Patton

Opportunity-Solution Tree

Many teams generate a lot of ideas when they go through a journey-mapping or experience-mapping exercise. There are so many opportunities for improving things for the customer that they quickly become overwhelmed by a mass of problems, solutions, needs, and ideas without much structure or priority (“Opportunity-Solution Tree” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017).

Opportunity solution trees are a simple way of visually representing the paths you might take to reach a desired outcome (Torres, T., Continuous Discovery Habits, 2021):

  • The root of the tree is your desired outcome—the business need that reflects how your team can create business value.
  • Below the outcome is the opportunity space: the customer needs, pain points, and desires that, if addressed, will drive the desired outcome.
  • Below the opportunity space is the solution space. This is where we’ll visually depict the solutions we are exploring.
  • Below the solution space are assumption tests. This is how we’ll evaluate which solutions will help us best create customer value in a way that drives business value.
“Opportunity Solution Tree” in Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value (Torres, T., 2021)
Opportunity-Solution Trees (OST) are a simple way of visually representing the paths you might take to reach a desired outcome (Torres, T., Continuous Discovery Habits, 2021)
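
Although the tree is usually drawn on a whiteboard or in a diagramming tool, its structure is just a tree with four node types. Below is a minimal sketch of that structure; the outcome, opportunities, solutions, and assumption tests are invented examples.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    label: str
    kind: str  # "outcome", "opportunity", "solution", or "assumption test"
    children: List["Node"] = field(default_factory=list)


def show(node: Node, depth: int = 0) -> None:
    """Print the tree with indentation reflecting its depth."""
    print("  " * depth + f"[{node.kind}] {node.label}")
    for child in node.children:
        show(child, depth + 1)


# Hypothetical tree (illustrative only)
tree = Node("Increase weekly active users", "outcome", [
    Node("Users forget to come back", "opportunity", [
        Node("Weekly digest email", "solution", [
            Node("Users will opt in to email", "assumption test"),
        ]),
    ]),
    Node("Onboarding is confusing", "opportunity", [
        Node("Interactive product tour", "solution"),
    ]),
])

show(tree)
```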

Opportunity solution trees have a number of benefits. They help product trios (Torres, T., Continuous Discovery Habits, 2021):

  • Resolve the tension between business needs and customer needs
  • Build and maintain a shared understanding of how they might reach their desired outcome
  • Adopt a continuous mindset
  • Unlock better decision-making
  • Unlock faster learning cycles
  • Build confidence in knowing what to do next
  • Unlock simpler stakeholder management

Impact Mapping

Like highway maps that show towns and cities and the roads connecting them, Impact Maps lay out what we will build and how these things connect to the ways we will assist the people who will use the solution. An impact map is a visualization of the scope and underlying assumptions, created collaboratively by senior technical people and business people. It’s a mind map grown during a discussion facilitated by answering four questions: the WHY, WHO, HOW, and WHAT of the problem the team is confronting (Adzic, G., Impact Mapping, 2012).

"Goals, Actors, Impact, and Deliverables" in Impact Mapping: Brings visibility to what is important and facilitates prioritisation discussions.
“Goals, Actors, Impact, and Deliverables” in Impact Mapping: Making a big impact with software products and projects (Adzic, G., 2012).
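
Because an impact map is a mind map that answers WHY, WHO, HOW, and WHAT, it can also be captured as nested data for asynchronous review. Below is a minimal sketch with invented content.

```python
# WHY -> WHO -> HOW -> WHAT (illustrative content only)
impact_map = {
    "WHY: Grow trial-to-paid conversion by 10%": {
        "WHO: Trial users": {
            "HOW: Reach their first success faster": [
                "WHAT: Guided setup checklist",
                "WHAT: Sample project templates",
            ],
        },
        "WHO: Sales engineers": {
            "HOW: Spend less time on manual demos": [
                "WHAT: Self-serve demo environment",
            ],
        },
    },
}


def walk(node, depth=0):
    """Print the map as an indented outline."""
    if isinstance(node, dict):
        for key, value in node.items():
            print("  " * depth + key)
            walk(value, depth + 1)
    else:  # a list of deliverables
        for item in node:
            print("  " * depth + item)


walk(impact_map)
```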

Check also

Mental models are simply affinity diagrams of behaviors made from ethnographic data gathered from audience representatives. They give you a deep understanding of people’s motivations, thought-processes, and the emotional and philosophical landscape in which they operate (Young, I., Mental Models, 2008).

Service Blueprints are visual thinking artifacts that help to capture the big picture and interconnections and are a way to plan out projects and relate service design decisions back to the original research insights. The blueprint differs from the service ecology in that it includes specific detail about the elements, experiences, and delivery within the service itself (Polaine, A., Løvlie, L., & Reason, B., Service design: From insight to implementation, 2013).

Value Stream Mapping is a practical and highly effective way to learn to see and resolve disconnects, redundancies, and gaps in how work gets done (Martin, K., & Osterling, M., Value stream mapping, 2014).

The Strategy Canvas helps you compare how well competitors meet customer buying criteria or desired outcomes. To create your own strategy canvas, list the 10-12 most important functional desired outcomes — or buying criteria — on the x-axis. On the y-axis, list the 3-5 most common competitors (direct, indirect, alternative solutions, and multi-tool solutions) for the job (Garbugli, É., Solving Product, 2020).

Learn more about Alignment Diagrams in Strategy, Facilitation and Visual Thinking (Photo by Christina Morillo on Pexels.com)

Decision Matrices, Scorecards and Formulas

A decision matrix is a list of values in rows and columns that allows an analyst to systematically identify, analyze, and rate the performance of relationships between sets of values and information. Elements of a decision matrix show decisions based on certain decision criteria. The matrix is useful for looking at large masses of decision factors and assessing each factor’s relative significance (Wikipedia, Decision matrix. Retrieved July 28, 2021).

Here are some of the most useful matrices, scorecards and formulas for facilitating discussions around investment decisions.

Scorecard Formulas

You can create a formula so that you can compare the Return on Investment (ROI) of proposed initiatives and derive a priority list. Scoring every job, theme, feature idea, initiative, or solution allows you to develop a scorecard ranking each against the others (“A formula for Prioritisation” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017).

A Formula for Prioritization
A formula for Prioritisation in Product Roadmaps Relaunched (Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017)

In the example above, CN stands for Customer Needs, BO stands for Business Objectives, E stands for Effort, C stands for Confidence, and P stands for Priority.
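
The exact weighting sits in the book’s figure, but the general shape of such a formula is benefit (customer needs plus business objectives) scaled by confidence and divided by effort. The sketch below assumes that shape and uses invented scores; treat it as an illustration, not the book’s exact formula.

```python
def priority(cn: float, bo: float, e: float, c: float) -> float:
    """Illustrative scorecard: P = (CN + BO) * C / E.

    CN = customer needs, BO = business objectives, E = effort, C = confidence.
    The precise weighting in Product Roadmaps Relaunched may differ; this is a sketch.
    """
    return (cn + bo) * c / e


# Hypothetical initiatives (illustrative only): (CN, BO, E, C)
initiatives = {
    "Mobile offline sync": (8, 6, 5, 0.7),
    "New onboarding": (6, 7, 3, 0.9),
}

ranked = sorted(initiatives.items(), key=lambda kv: priority(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:20s} P = {priority(*scores):.2f}")
```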

Use Cases Lists: Pugh Matrix

The UXI Matrix is a simple, flexible tool that extends the concept of the product backlog to include UX factors normally not tracked by agile teams. To create a UX Integration Matrix, you add several UX-related data points to your user stories (Innes, J., Pugh Matrix in Integrating UX into the product backlog, 2012).

Pugh Matrix helps us visualise the complete backlog and facilitates prioritisation discussions.
Pugh Matrix in Integrating UX into the product backlog (Innes, J., 2012)

The UXI Matrix helps teams integrate UX best practices and user-centered design by inserting UX at every level of the agile process:

  • Groom the backlog: During release and sprint planning you can sort, group, and filter user stories in Excel.
  • Reduce design overhead: if a story shares several personas with another story in a multi-user system, then that story may be a duplicate. Grouping by themes can also help here.
  • Facilitate Collaboration: You can share it with remote team members. Listing assigned staff provides visibility into who’s doing what (see the columns under the heading Staffing). Then team members can figure out who’s working on related stories and check on what’s complete, especially if you create a hyperlink to the design or research materials right there in the matrix.
  • Track user involvement and other UX metrics: It makes it easier to convince the team to revisit previous designs when metrics show users cannot use a proposed design, or are unsatisfied with the current product or service. Furthermore, it can be useful to track satisfaction by user story (or story specific stats from multivariate testing) in a column right next to the story.

I’ve created Use Case Lists (or Pugh Matrices), decision matrices that help evaluate and prioritize a list of options, while working with Product Management and Software Architecture teams on both AutoCAD Map 3D and AutoCAD Utility Design projects: the idea is to first establish a list of weighted criteria and then evaluate each use case against those criteria, taking into account input from the team’s different stakeholders (user experience, business value, etc.).

Using the Outcome-driven Innovation framework above, you can prioritize the Use Cases based on their Opportunity Scores.
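
A weighted-criteria evaluation like this is straightforward to sketch: agree on criteria and weights with the stakeholders, then score each use case against them. The criteria, weights, and scores below are invented for illustration; in practice the opportunity column would come from ODI-style research and the rest from the stakeholders.

```python
# Hypothetical weighted criteria (illustrative only); weights sum to 1.0
criteria_weights = {"user_value": 0.4, "business_value": 0.3, "technical_risk": 0.3}

# Scores per use case on a 1-5 scale (higher is better;
# for technical_risk, a higher score means lower risk)
use_cases = {
    "Import survey data":   {"user_value": 5, "business_value": 4, "technical_risk": 3},
    "Annotate drawings":    {"user_value": 4, "business_value": 3, "technical_risk": 4},
    "Export report as PDF": {"user_value": 3, "business_value": 4, "technical_risk": 5},
}


def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)


ranked = sorted(use_cases.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:22s} {weighted_score(scores):.2f}")
```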

The RICE Scoring Model

The RICE Scoring Model is a specific type of decision formula that consists of four factors: Reach, Impact, Confidence and Effort. This method ranks features and calculates a score from these four factors to help prioritize (Sandy, K., The influential product manager, 2020).

Using the RICE scoring model, you evaluate your competing ideas (new products, product extensions, features, etc.) by scoring them according to the following formula (Product Plan, RICE Scoring Model. 2023):

  • Reach: How many people you estimate your initiative will reach in a given timeframe. You have to decide both what “reach” means in this context and the timeframe over which you want to measure it. You can choose any time period—one month, a quarter, etc.—and you can decide that reach will refer to the number of customer transactions, free-trial signups, or how many existing users try your new feature. Your reach score will be the number you’ve estimated. For example, if you expect your project will lead to 150 new customers within the next quarter, your reach score is 150. On the other hand, if you estimate your project will deliver 1,200 new prospects to your trial-download page within the next month, and that 30% of those prospects will sign up, your reach score is 360.
  • Impact can reflect a quantitative goal, such as how many new conversions your project will result in when users encounter it, or a more qualitative objective such as increasing customer delight. Even when using a quantitative metric (“How many people who see this feature will buy the product?”), measuring impact will be difficult, because you won’t necessarily be able to isolate your new project as the primary reason (or even a reason at all) for why your users take action. If measuring the impact of a project after you’ve collected the data will be difficult, you can assume that estimating it beforehand will also be a challenge.
  • The confidence component of your RICE score helps you control for projects in which your team has data to support one factor of your score but is relying more on intuition for another factor. For example, if you have data backing up your reach estimate but your impact score represents more of a gut feeling or anecdotal evidence, your confidence score will help account for this.
  • Effort simply estimates the total number of resources (product, design, engineering, testing, etc.) needed to complete the initiative over a given period of time—typically “person-months”—and that is your score. If you think of RICE as a cost-benefit analysis, the other three components are all potential benefits, while effort is the single score that represents the costs.
RICE Scoring Method
To use the RICE scoring model, you evaluate each of your competing ideas (new products, product extensions, features, etc.) by scoring them according to Reach, Impact, Confidence, and Effort (Product Plan, RICE Scoring Model, 2023)

So, to quickly summarise our four factors (McBride, S., RICE: Simple prioritization for product managers. 2018):

  • Reach: how many people will this impact? (Estimate within a defined time period.)
  • Impact: how much will this impact each person? (Massive = 3x, High = 2x, Medium = 1x, Low = 0.5x, Minimal = 0.25x.)
  • Confidence: how confident are you in your estimates? (High = 100%, Medium = 80%, Low = 50%.)
  • Effort: how many “person-months” will this take? (Use whole numbers and minimum of half a month – don’t get into the weeds of estimation.)
RICE Method Formula
Once you’ve got all your numbers for each feature, it’s time to put them into a simple equation (MacKay, J., Feature prioritization: 7 ways to prioritize features and product improvements, 2018).
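
The equation itself is simple: multiply reach, impact, and confidence, then divide by effort. Below is a minimal sketch with invented feature data.

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    # RICE = (Reach * Impact * Confidence) / Effort
    return reach * impact * confidence / effort


# Hypothetical features (illustrative only): reach per quarter, impact multiplier,
# confidence as a fraction, effort in person-months
features = {
    "Self-serve onboarding":  (1200, 2.0, 0.8, 4),
    "In-app notifications":   (800, 1.0, 1.0, 2),
    "Usage analytics export": (150, 3.0, 0.5, 3),
}

for name, args in sorted(features.items(), key=lambda kv: rice(*kv[1]), reverse=True):
    print(f"{name:24s} RICE = {rice(*args):.0f}")
```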

Of course, RICE scores shouldn’t be used as a hard and fast rule. There are many reasons why you might work on a project with a lower score first. One project may be a dependency for another, so it needs to happen first, or another feature might be “table stakes” to sell to specific customers. Sometimes you might want or need to work on projects “out of order.” And that’s okay! With a scoring system, you can identify when you’re making these trade-offs (McBride, S., RICE: Simple prioritization for product managers, 2018).

In my practice, I’ve noticed that some teams have difficulty dealing with RICE because they don’t feel comfortable putting numbers into the formula, which is usually related to the difference between metrics as hard measures of evidence versus estimates. If this exercise is based purely on estimates, it might actually create more problems than it solves! That’s why I coach teams to be very transparent about their confidence level in their numbers!

There is only one way to calculate confidence: looking for supporting evidence! For this purpose, Itamar Gilad created the tool shown below: the Confidence Meter. It lists common types of evidence you may find and what confidence level they each provide (Gilad, I., Idea prioritization with ICE and the confidence meter, 2018).

Itamar Gilad's Confidence Meter
The Confidence Meter lists common types of evidence you may find and what confidence level they each provide (Gilad, I., Idea prioritization with ICE and the confidence meter, 2018).

Inevitably, someone on your team will be concerned about making decisions based on estimates. This is why Testing Business Ideas before a prioritization exercise is probably the best way to increase your confidence that you’ve got enough evidence to justify the investment in your ideas.

Testing Business Ideas thoroughly, regardless of how great they may seem in theory, is a way to mitigate risks of your viability hypothesis being wrong (Photo by RF._.studio on Pexels.com)

Check Also

The Pareto Principle states that 80% of the benefit can be achieved by doing only 20% of the work. Applied to problem management, 80% of the occurrences of an undesired effect (e.g., downtime) can probably be traced to 20% of the causes (Podeswa, H., The Business Analyst’s Handbook, 2008). In product design, the Pareto Principle can be applied to optimization efforts: within any given system, only a few main variables affect the outcomes, while most other factors will return little to no impact.
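
As a quick illustration of applying the principle to problem management, you can sort causes by frequency and see how few of them account for roughly 80% of incidents. The root causes and counts below are invented.

```python
# Hypothetical incident counts per root cause (illustrative only)
causes = {
    "Config drift": 42,
    "Flaky deploy script": 31,
    "Expired certificates": 12,
    "Disk full": 8,
    "DNS issues": 4,
    "Other": 3,
}

total = sum(causes.values())
running = 0
print("Causes covering ~80% of incidents:")
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    print(f"  {cause:22s} {count:3d}  cumulative {running / total:.0%}")
    if running / total >= 0.8:
        break  # the few causes listed so far explain most of the incidents
```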

Cost of Delay is a numerical value that describes the impact of time on the outcomes you hope to achieve. It combines urgency and value to measure impact and prioritize what you should be doing first (Perri, M., Escaping the build trap, 2019).
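
One common way to turn cost of delay into a sequencing decision (a technique often called CD3, cost of delay divided by duration, which is not specific to Perri’s book) is sketched below with invented numbers: the item with the highest cost of delay per week of work goes first.

```python
# Hypothetical items (illustrative only): cost of delay in $/week, duration in weeks
items = {
    "Compliance update":  (20000, 2),
    "New pricing page":   (8000, 1),
    "Platform migration": (30000, 10),
}


def cd3(cost_of_delay: float, duration: float) -> float:
    # CD3 = Cost of Delay / Duration; schedule the highest CD3 first
    return cost_of_delay / duration


for name, (cod, weeks) in sorted(items.items(), key=lambda kv: cd3(*kv[1]), reverse=True):
    print(f"{name:20s} CD3 = {cd3(cod, weeks):,.0f} per week of work")
```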

Learn more about the visibility and traceability aspects of the execution of an idea/approach (Photo by Lukas on Pexels.com)

Prioritization and Investment Discussions

As I mentioned in a previous post, designers must become skilled facilitators who respond, prod, encourage, guide, coach and teach as they guide individuals and groups to make decisions that are critical in the business world through effective processes. There are few decisions harder than deciding how to prioritise. The mistake I’ve seen many designers make is to look at all of the above as a zero-sum game:

  • Our user-centered design toolset may have focused too much on the user’s needs at the expense of business needs and technological constraints.
  • We need to point at futures that are desirable, profitable, and viable (“Change By Design“, Brown, T., & Katz, B., 2009).

So the facilitation methods and approaches mentioned above should help you engage with the team to find objective ways to value design ideas/approaches/solutions to justify their investment. From that perspective, prioritization goes hand in hand with selecting alternatives.

Learn more about facilitating investment discussions by finding objective ways to assess desirability, feasibility, and viability (Photo by Pixabay on Pexels.com)

My recommendation is to look at the methods and approaches mentioned above like any other facilitation tool: if discussions around priorities are getting the team stuck (or hurting team morale), step up to the plate and help!

Adzic, G. (2012). Impact Mapping: Making a big impact with software products and projects (M. Bisset, Ed.). Woking, England: Provoking Thoughts.

Azzarello, P. (2017). Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls. Nashville, TN: John Wiley & Sons.

Berkun, S. (2008). Making things happen: Mastering project management. Sebastopol, CA: O’Reilly Media.

Bland, D. J., & Osterwalder, A. (2020). Testing business ideas: A field guide for rapid experimentation. Standards Information Network.

Calabretta, G., Gemser, G., & Karpen, I. (2016). Strategic Design: 8 Essential Practices Every Strategic Designer Must Master. BIS Publishers.

Connor, A., & Irizarry, A. (2015). Discussing Design (1st ed.). Sebastopol, CA: O’Reilly Media.

Christensen, C. M., & Raynor, M. E. (2013). The innovator’s solution: Creating and sustaining successful growth. Boston, MA: Harvard Business Review Press.

Cutler, J. (2022). Making things better (with enabling constraints and POPCORN). Retrieved March 27, 2022, from The Beautiful Mess website: https://cutlefish.substack.com/p/making-things-better-with-enabling?s=w

DeGrandis, D. (2017). Making work visible: Exposing time theft to optimize workflow. Portland, OR: IT Revolution Press.

Garbugli, É. (2020). Solving Product: Reveal Gaps, Ignite Growth, and Accelerate Any Tech Product with Customer Research. Wroclaw, Poland: Amazon.

Gilad, I. (2018). Idea prioritization with ICE and the confidence meter. Retrieved March 14, 2023, from Itamar Gilad website: https://itamargilad.com/the-tool-that-will-help-you-choose-better-product-ideas/

Gothelf, J. (2019, November 8). The hypothesis prioritization canvas. Retrieved April 25, 2021, from Jeffgothelf.com website: https://jeffgothelf.com/blog/the-hypothesis-prioritization-canvas/

Govindarajan, V., & Trimble, C. (2010). The other side of innovation: Solving the execution challenge. Boston, MA: Harvard Business Review Press.

Gray, D., Brown, S., & Macanufo, J. (2010). Gamestorming: A Playbook for Innovators, Rulebreakers, and Changemakers. Sebastopol, CA: O’Reilly Media.

Hanington, B., & Martin, B. (2012). Universal methods of design: 100 Ways to research complex problems, develop innovative ideas, and design effective solutions. Beverly, MA: Rockport.

Innes, J. (2012, February 3). Integrating UX into the product backlog. Retrieved July 28, 2021, from Boxesandarrows.com website: https://boxesandarrows.com/integrating-ux-into-the-product-backlog/

Juarrero, A. (1999). Dynamics in Action: Intentional Behavior as a Complex System. Cambridge, MA: MIT Press.

Kalbach, J. (2020). Mapping Experiences: A Guide to Creating Value through Journeys, Blueprints, and Diagrams (2nd ed.). Sebastopol, CA: O’Reilly Media.

Kourdi, J. (2015). Business Strategy: A guide to effective decision-making. New York, NY: PublicAffairs

Krogerus, M., & Tschappeler, R. (2018). The decision book: Fifty models for strategic thinking (J. Piening, Trans.). New York, NY: WW Norton.

Lewrick, M., Link, P., & Leifer, L. (2018). The design thinking playbook: Mindful digital transformation of teams, products, services, businesses and ecosystems. Nashville, TN: John Wiley & Sons

Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M. (2017). Product Roadmaps Relaunched. Sebastopol, CA: O’Reilly Media.

MacKay, J. (2018). Feature prioritization: 7 ways to prioritize features and product improvements. Retrieved March 14, 2023, from Planio website: https://plan.io/blog/feature-prioritization/

Maxwell, J. C. (2007). The 21 irrefutable laws of leadership: Follow them and people will follow you. Nashville, TN: Thomas Nelson.

Martin, K., & Osterling, M. (2014). Value stream mapping: How to visualize work and align leadership for organizational transformation. New York, NY: McGraw-Hill Professional.

Matts, C. (2018). Constraints that enable. Retrieved March 27, 2022, from The IT Risk Manager website: https://theitriskmanager.com/2018/12/09/constraints-that-enable/

McBride, S. (2018). RICE: Simple prioritization for product managers. Retrieved March 14, 2023, from The Intercom Blog website: https://www.intercom.com/blog/rice-simple-prioritization-for-product-managers/

Moorman, J. (2012). Leveraging the Kano Model for Optimal Results. Retrieved February 11, 2021, from UX Magazine website: https://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results

Natoli, J. (2015). Think first: My no-nonsense approach to creating successful products, memorable user experiences + very happy customers. Bookbaby.

Patton, J. (2014). User Story Mapping: Discover the whole story, build the right product (1st ed.). Sebastopol, CA: O’Reilly Media.

Perri, M. (2019). Escaping the build trap. Sebastopol, CA: O’Reilly Media.

Pichler, R. (2016). Strategize: Product strategy and product roadmap practices for the digital age. Pichler Consulting.

Polaine, A., Løvlie, L., & Reason, B. (2013). Service design: From insight to implementation. Rosenfeld Media.

Podeswa, H. (2008). The Business Analyst’s Handbook. Cengage Learning PTR.

Product Plan. (2023). RICE Scoring Model. Retrieved March 14, 2023, from Productplan.com website: https://www.productplan.com/glossary/rice-scoring-model/

Sandy, K. (2020). The influential product manager: How to lead and launch successful technology products. Oakland, CA: Berrett-Koehler.

Torres, T. (2021). Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value. Product Talk LLC.

Ulwick, A. (2005). What customers want: Using outcome-driven innovation to create breakthrough products and services. New York, NY: McGraw-Hill.

Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016). Design a better business: New tools, skills, and mindset for strategy and innovation. Nashville, TN: John Wiley & Sons.

Wikipedia contributors. (2020, December 12). Decision matrix. Retrieved July 28, 2021, from Wikipedia, The Free Encyclopedia website: https://en.wikipedia.org/w/index.php?title=Decision_matrix&oldid=993728786

Young, I. (2008). Mental Models. Brooklyn, New York: Rosenfeld Media.

By Itamar Medeiros

Originally from Brazil, Itamar Medeiros currently lives in Germany, where he works as VP of Design Strategy at SAP, leading the design vision for the entire Human Capital Management product line, ensuring cohesive product narratives and establishing best practices.

Working in the Information Technology industry since 1998, Itamar has helped truly global companies in multiple continents create great user experience through advocating Design and Innovation principles. Itamar has also served as a juror for prestigious design competitions and lectured on design topics at universities worldwide.

During his 7 years in China, he promoted the User Experience Design discipline as User Experience Manager at Autodesk and Local Coordinator of the Interaction Design Association (IxDA) in Shanghai.

Itamar holds a MA in Design Practice from Northumbria University (Newcastle, UK), for which he received a Distinction Award for his thesis Creating Innovative Design Software Solutions within Collaborative/Distributed Design Environments.
