In the previous post, we discussed how Jobs to be Done (JTBD) works as a great “exchange” currency to facilitate strategy discussions between designers, business stakeholders, and technology people.
In this post, I’ll show you a few ways to use Jobs to be Done (JTBD) to facilitate the two-way negotiations between leadership and product teams that allow for managing by outcomes.
TL;DV;
Watch the highlights of this article in 9 minutes!
TL;DR;
- Managing by outcomes is a strategic approach that focuses on achieving measurable changes in human behavior to drive business results.
- Emphasizing outcomes over outputs eliminates needless work and makes the customer the central focus.
- This approach grants teams autonomy and ownership to solve customer problems or address business needs, rather than following a fixed roadmap.
- To be effective, it is crucial to choose the right outcomes.
- Business outcomes, product outcomes, and traction metrics must be differentiated to assess progress accurately.
- The key to creating value lies in shifting the emphasis from outputs to outcomes by changing goal-setting and success measurement within the organization.
- TL;DV;
- TL;DR;
- Outcomes over Vague Goals
- Outcomes and Value
- Outcomes over Outputs
- Managing by Outcomes and the Product Trio
- Outcomes and Metrics
- Managing by Outcomes at the Right Level of Altitude
- Facilitating Two-way Negotiations
- Managing by Outcomes through Making Collaboration Possible
- The Right Time for Managing by Outcomes Discussions
- Recommended Reading
If you are reading this, then you probably want to make your job easier. You want to be able to accomplish more with fewer people in less time. It’s a noble goal. One that I also share. To make this possible, you will need to “manage by outcomes.”
While managing by outcomes, we give our teams the autonomy, responsibility, and ownership to chart their own path. Instead of asking them to deliver a fixed roadmap full of features by a specific time, we ask them to solve a customer problem or address a business need.
It is a difficult thing to do. We have all seen it: strategy is developed, tactics are developed, and large chunks of the organization start working on those tactics without making any progress whatsoever.
You will only see progress in creating value when you stop working on outputs and start focusing on outcomes. That means changing how you set goals for your organization and how you measure success for teams.
Outcomes over Vague Goals
Think about the really important goals your team talks about all the time. Everyone agrees they are critical when you talk about them: We must improve quality. We must innovate. We must respond to a competitive threat. We must evolve our business model to provide better service. To move your team from talking about important stuff in a vague way to actually making progress on these things in a real way, the first step is to realize that you are stuck because you are still only talking. (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017)
You need to change the nature of the conversation to become one that drives action, instead of just more talking. One of the biggest hazards to watch for is a concept called “smart talk.”
Brought to a standstill by inertia, their problems fester, their opportunities for growth are lost, and their best employees become frustrated and leave. If the inactivity continues, customers and investors react accordingly and take their money elsewhere (Pfeffer, J., & Sutton, R. I., The Smart-Talk Trap, 1999).
“Between the conception / And the creation / Falls the shadow,” T.S. Eliot wrote in “The Hollow Men,” his great poem about human inertia. In business, that shadow is composed of words. When confronted with a problem, people act as if discussing it, formulating decisions, and hashing out plans for action are the same as actually fixing it. It’s an understandable response—after all, talk, unlike action, carries little risk. But it can paralyze a company (Pfeffer, J., & Sutton, R. I., The Smart-Talk Trap, 1999).
They will always be ready to shed more light on the problem by providing details, benchmarks, and customer examples. They will have lots of smart stuff to say. Everyone will think, “Wow, they’re really smart.” (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017)
Describing the “Situation”
It’s vitally important as a leader to recognize when your team is falling into the pattern of accepting smart-sounding ideas and inputs instead of making measurable forward progress. The most effective way I have found to break through this is to recognize when you get stuck in a pattern of smart-talking about the “situation.”
Groups of people have a very strong tendency to discuss the situation. Over and over again. For a really long time. Situation discussions describe what we are doing, what the market is doing, what the competitors are doing, what the investors are saying, what the problems are, what the costs are, what the customers are demanding, what the changes in the business model are causing, what the opportunities are, what the employees are doing and not doing. Situation discussions don’t go anywhere; they only gather more detail. (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
Sure, it’s important to use some time to note and understand the situation, but you can just feel it when everyone has internalized the situation, and then … you keep talking about it! Talking and talking and talking about it. You can feel it in your stomach when the meeting is not going anywhere, and you’re still talking. The talk gets smarter and smarter, and the forward motion everyone is craving never happens (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
Situation versus Outcome
The way to break through this type of stall is to train your team members to catch themselves having a situation discussion, and then say, “Let’s stop talking about the situation, and let’s try to define an outcome we want to achieve.” (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017)
You’ve probably been in a conversation like this: We need to improve the quality of our product to be more competitive, but all of our resources are tied up in creating new features. We can’t fall behind on features and have no extra resources. But we really need to improve quality. But we don’t have the budget. And around and around it goes. Instead of adding fuel to that situation discussion, let’s turn it into an outcome discussion. Here is an example. Note how resisting situation talk allows the discussion to move forward (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017):
- Okay. We can’t afford to fix all the quality problems, so let’s stop vaguely talking about this. Let’s talk about some concrete things we can do on a smaller scale that would make a positive difference. Which quality problems are having the most negative business impact right now?
- There are two issues in the user interface that our biggest customers are complaining about. (Situation)
- How about if we fix those two problems first? (Outcome proposal)
- But that doesn’t take into account the issue in Europe. The quality issues in Europe are related to differences in governance laws. (Situation)
- I suggest we fix only the top one issue in the United States right away, but we fix the top three in Europe now, too (Outcome proposal), as we have more pipeline held up in Europe.
- But that doesn’t solve our overall quality problems, which are related to the fundamental structure of our product, which I have assessed is slowing our sales pipeline growth by 20 percent. (Smart talk. Rat hole. Situation)
- What outcome do you suggest we target to solve that particular point? (Challenge to smart talk)
- I don’t know; we just need to fix it. It’s really important. (Situation. Stall)
- That is still a situation discussion. How about we fix the problems we just listed first, and right away, we train the sales force on how to help customers work around these platform issues temporarily? (Outcome proposal)
- But when can we fix the main platform? We don’t have the resources to do it. (Age-old situation)
- Let’s look at doing a platform release one year from now. After we fix this initial round of quality issues and release this current round of features, we then prioritize the platform changes and get it done. (Outcome proposal)
- But if we do that, we’ll again fall behind our competitors in functionality. (Shut up. Situation)
- We need to agree that if the platform change is a priority, we must accomplish it no matter what our level of resources, even if we need to move resources from the work to add new functionality. (Outcome proposal)
- We will work with marketing and sales to improve our conversion rate in the part of our pipeline with customers not currently affected by the platform issue. (Outcome proposal)
Note the difference between situation and outcome conversation. Outcome discussions can be long and painful too, but the big difference is that they are going somewhere. Outcome conversation is a productive conversation. It leads to action (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
How to Avoid Distractions
The law of triviality is C. Northcote Parkinson’s 1957 argument that people within an organization commonly give disproportionate weight to trivial issues. Parkinson provides the example of a fictional committee whose job was to approve the plans for a nuclear power plant, yet which spent the majority of its time on discussions about relatively minor but easy-to-grasp issues, such as what materials to use for the staff bike shed, while neglecting the proposed design of the plant itself, which is far more important and a far more difficult and complex task (Wikipedia, 2022).
Avoiding Distractions through Clarity of Purpose
Once you acknowledge the problem of bikeshedding, there are several steps you can take to avoid the issue and spend the appropriate time each issue demands (Melnick, L., How to avoid meetings about the trivial, aka bikeshedding, 2020):
- Have a clear purpose. Successful meetings need to have a clear and well-defined purpose. Specificity is central to having a purpose and conveying it.
- Invite the right people. Only invite people who can contribute to the discussion or are needed for execution of the decision. If the purpose is to discuss the nuclear power plant, this purpose will make it clear who should and should not be in the meeting. As the post points out, “the most informed opinions are most relevant. This is one reason why big meetings with lots of people present, most of whom don’t need to be there, are such a waste of time in organizations. Everyone wants to participate, but not everyone has anything meaningful to contribute.”
- Appoint a decision maker. To reach the best outcome, you need a designated decision maker. First, it avoids forcing a consensus when there should be a black-and-white winner; a compromise is not always better than an extreme option. Also, it is often impossible to reach a consensus when nobody is in charge. The discussion just drags on and on.
- Have the decision maker set clear parameters. With one person in charge, they can decide in advance how much importance to accord to the issue (for instance, by estimating how much its success or failure could help or harm the company’s bottom line). They can set a time limit for the discussion to create urgency. And they can end the meeting by verifying that it has indeed achieved its purpose.
Avoiding Distractions by Focusing on ‘Next’
There are many other benefits to moving from situation conversations to outcome conversations. One of the other great things about outcome-oriented conversations is that they can be used to resolve disputes (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017):
- When discussing a situation and what to do next, “next” is a concept fraught with opinion and emotion. It might involve someone giving something up or stopping something. It might involve doing or learning something new.
- “Next” has all the personal investment of the present wrapped up in it. To get people to agree on what to do next if a clear outcome is not defined, there could be a million possible choices, all laden with personal investment, experience, insight, opinion, and emotion.
- But instead, you can pick a point in the future and say, “Let’s describe that point. Let’s agree on that point in the future.” Suddenly, everyone’s focus is shifted away from their invested and urgent personal space and placed on a goal in the distance. It breaks the emotional stranglehold of something that threatens to change right now.
The other benefit is that if you can agree on what the point in the future looks like, it reduces the set of possible next steps from a million to several. There are far fewer choices of what to do next to serve a well-defined outcome. You can have a much more focused and productive debate (Azzarello, P., “Concrete Outcomes” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
Outcomes and Value
As I mentioned in a previous post, designers must become skilled facilitators who respond, prod, encourage, guide, coach, and teach as they guide individuals and groups through effective processes to make decisions that are critical in the business world. Few decisions are harder than deciding how to prioritize.
I’ve seen too many teams whose decisions seem to be driven by the question “What can we implement with the least effort?” or “What are we able to implement?” rather than by the question “What brings value to the user?”
From both a user-centered and a market fit perspective, the most crucial pivot that needs to happen in the conversation between designers and business stakeholders is the framing of value:
- Business value
- User value
- Value to designers (sense of self-realization? Did I impact someone’s life in a positive way?)
The mistake I’ve seen many designers make is to look at prioritization discussions as a zero-sum game: our user-centered design toolset may have focused too much on the needs of the user, at the expense of business needs and technological constraints.
That said, there is a case to be made that designers should worry about strategy because it helps shape the decisions that create value not only for users but also for employees. And here is why.
Therefore, a strategic initiative is worthwhile only if it does one of the following (Oberholzer-Gee, F., Better, simpler strategy, 2021):
- Creates value for customers by raising their willingness to pay (WTP): If companies find ways to innovate or to improve existing products, people will be willing to pay more. In many product categories, Apple gets to charge a price premium because the company raises the customers’ WTP by designing beautiful products that are easy to use, for example. A value-focused company convinces its customers in every interaction that it has their best interests at heart.
- Creates value for employees by making work more appealing: When companies make work more interesting, motivating, and flexible, they are able to attract talent even if they do not offer industry-leading compensation. Paying employees more is often the right thing to do, of course. But keep in mind that more-generous compensation does not create value in and of itself; it simply shifts resources from the business to the workforce. By contrast, offering better jobs not only creates value, it also lowers the minimum compensation that you have to offer to attract talent to your business, or what we call an employee’s willingness-to-sell (WTS) wage. Value-focused businesses think holistically about the needs of their employees (or the factors that drive WTS).
- Creates value for suppliers by reducing their operating cost: Like employees, suppliers expect a minimum level of compensation for their products. A company creates value for its suppliers by helping them raise their productivity. As suppliers’ costs go down, the lowest price they would be willing to accept for their goods—what we call their willingness-to-sell (WTS) price—falls. When Nike, for example, created a training center in Sri Lanka to teach its Asian suppliers lean manufacturing, the improved production techniques helped suppliers reap better profits, which they then shared with Nike.
The Value Stick is an interesting tool that provides insight into where the value is in a product or service. It relates directly to Michael Porter’s Five Forces: its four levers, Willingness to Pay (WTP), Price, Cost, and Willingness to Sell (WTS), reflect how strong those forces are. The difference between Willingness to Pay (WTP) and Willingness to Sell (WTS) — the length of the stick — is the value that a firm creates (Oberholzer-Gee, F., Better, simpler strategy, 2021).
This idea is captured in a simple graph, called a value stick. WTP sits at the top and WTS at the bottom. When companies find ways to increase customer delight and increase employee satisfaction and supplier surplus (the difference between the price of goods and the lowest amount the supplier would be willing to accept for them), they expand the total amount of value created and position themselves for extraordinary financial performance.
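The value-stick arithmetic can be expressed in a few lines of Python. This is a rough sketch with hypothetical numbers, purely to illustrate how price and cost split the total value among customers, the firm, and suppliers:

```python
# Value-stick arithmetic (hypothetical numbers, for illustration only).
wtp = 120.0    # customers' willingness to pay (top of the stick)
price = 90.0   # what customers actually pay
cost = 60.0    # what the firm pays suppliers and employees
wts = 40.0     # suppliers' willingness to sell (bottom of the stick)

customer_delight = wtp - price    # value captured by customers
firm_margin = price - cost        # value captured by the firm
supplier_surplus = cost - wts     # value captured by suppliers

# The total value created is the length of the stick: WTP minus WTS.
total_value = wtp - wts
assert total_value == customer_delight + firm_margin + supplier_surplus
print(total_value)  # 80.0
```

Note that raising WTP (delighting customers) or lowering WTS (making work or supply more attractive) grows the whole stick, while moving price or cost only redistributes it.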
Organizations that exemplify value-based strategy demonstrate some key behaviors (Oberholzer-Gee, F., “Eliminate Strategic Overload” in Harvard Business Review, 2021):
- They focus on value, not profit. Perhaps surprisingly, value-focused managers are not overly concerned with the immediate financial consequences of their decisions. They are confident that superior value creation will improve financial performance over time.
- They attract the employees and customers whom they serve best. As companies find ways to move WTP or WTS, they make themselves more appealing to customers and employees who like how they add value.
- They create value for customers, employees, or suppliers (or some combination) simultaneously. Our early understanding of success in manufacturing holds that costs for companies will rise if they boost consumers’ willingness to pay—that is, it takes more-costly inputs to create a better product. But value-focused organizations find ways to defy that logic.
For such a conversation to pivot to focus on value, designers will need to get better at influencing the strategy of their design project. However, some designers lack the vocabulary, tools, and frameworks to influence it in ways that drive user experience vision forward.
I don’t know about you, but I’m not in this (only) for the money: I want my work to mean something, create value, and change people’s lives! For the better! We need to bring users’ needs to the conversation and influence the decisions that increase our customers’ Willingness to Pay (WTP) by — for example — increasing customers’ delight so that we can create products and services we are proud to bring into the world!
Outcomes over Outputs
In traditional planning, the solution provider commits to delivering specified deliverables (the scope) at a specified cost within a given time frame. This approach doesn’t work when requirements are volatile because it locks all parties into predetermined specifications that are likely to be outdated by the time the product is delivered (Podeswa, H., The Agile Guide to Business Analysis and Planning: From Strategic Plan to Continuous Value Delivery, 2021).
Instead of focusing on predetermined deliverables, agile enterprises focus on desired outcomes, such as increased revenues and increased customer loyalty (Podeswa, H., The Agile Guide to Business Analysis and Planning: From Strategic Plan to Continuous Value Delivery, 2021).
You might be asking, “What do you mean by outcomes?” Joshua Seiden defines an outcome as “a measurable change in behavior that drives business results.”
Outcomes are the benefit your customers receive from your stuff. This starts with truly understanding your customers’ needs—their challenges, issues, constraints, priorities—by walking in their shoes and in their neighborhoods, businesses, and cultures. See what’s inconvenient, taking a lot of time, money, and/or effort. Your customers are too busy to plan, shop for, and cook healthy meals. What if you made a healthy, reasonably priced, fast-cooking meal so a family could eat better? Create a solution that your customers can sustain, and you enable life-changing outcomes, big and small (Mills-Scofield, D., It’s not just semantics: Managing outcomes vs. Outputs, 2012)
You can help the team and leaders to start thinking in terms of outcomes by asking three simple questions (Seiden, J., Outcomes over Output, 2019):
- What are the user and customer behaviors that drive business results? I’ve suggested in another post that facilitating discussions around Jobs to be Done can be a great way to get the team to align.
- How do we get people to do more of these things?
- How do we know we’re right? The easiest (and the hardest) way to answer that question is to design and conduct tests.
Business thought leaders have been advocating for managing by outcomes for decades. A renowned managerial thought leader, Peter Drucker wrote about its benefits countless times. Andy Grove, the former CEO of Intel, utilized the practice at Intel and wrote about it in his best-selling book High Output Management. More recently, Google, Google Ventures, and John Doerr, a venture capital partner at Kleiner Perkins, have popularized the topic again with their advocacy for objectives and key results (OKRs), one flavor of managing by outcomes. You’ll hear from prominent thought leaders in most industries and broadly across the technology sector (including me) that shifting from dictating outputs to managing outcomes is critical to a company’s success. (Torres, T., Continuous Discovery Habits, 2021).
Managing by Outcomes and the Product Trio
Your job at making the strategy come true does not stop after you announce it. One of the hardest things to do is to get an organization to stop doing what it is currently doing and start doing the different thing that it needs to be doing. You can’t just expect your team to find its way through the Middle. Without your involvement, your organization will go back to doing what it is already doing (Azzarello, P., “The Beginning of the Middle” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017)
That said, your strategy should say what we will do, not how we will do it. To avoid being perceived as micromanaging, clarify what success looks like — and how to measure it — and let the teams figure out the how. That’s when outcomes come in handy!
Managing by outcomes communicates to the team how they should measure success. A clear outcome helps a team align around the work they should be prioritizing, helps them choose the right customer opportunities to address, and helps them measure the impact of their experiments. Without a clear outcome, discovery work can be never-ending, fruitless, and frustrating (Torres, T., Continuous Discovery Habits, 2021).
The key distinction between this strategy and traditional roadmaps is that we are giving the team the autonomy to find the best solution. If they are truly a continuous-discovery team, the product trio has a depth of customer and technology knowledge, giving them an advantage when making decisions about how to solve specific problems (Torres, T., Continuous Discovery Habits, 2021).
Additionally, this strategy leaves room for doubt (Torres, T., Continuous Discovery Habits, 2021):
- A fixed roadmap communicates false certainty. It says we know these are the right features to build, even though we know from experience their impact will likely fall short.
- An outcome communicates uncertainty. It says we know we need this problem solved, but we don’t know the best way to solve it. It gives the product trio the latitude they need to explore and pivot when needed.
The key word here is uncertainty, which sounds scary to many people! Uncertainty draws people into the conversation by admitting you don’t have all the answers and inviting others to figure it out!
While we should “use” uncertainty to leave some room for exploration regarding the solutions, we should provide clarity around the outcomes we expect from teams!
Clarity addresses uncertainty. It doesn’t remove it. While you can’t remove uncertainty, clarity is your best bet for equipping our families, our coworkers, and our communities to navigate it. Clarity says, “I don’t know what the future holds, but here’s what we’re gonna do in the meantime.” Clarity says, “I don’t know what’s gonna happen, but we’re gonna prepare for whatever happens.” Clarity says, “Here’s the plan for now, and we will adjust the plan as circumstances demand” (Andy Stanley, Leading with Clarity, 2022).
The absence of clarity also creates an opportunity for biases and assumptions to influence how people interpret information. Ambiguity tempts organizations to be reactive: instead of addressing the most important issues, they address those attracting attention at this moment. Ambiguity prevents organizations from operating with focus, discipline, and engagement (Martin, K., Clarity first, 2018).
Dealing with Uncertainty and Ambiguity
Designers often find themselves with incomplete information about their users, the problem space, and its parameters. We must therefore be able to deal with uncertainty and ambiguity while not being paralyzed by them.
Assigning Outcomes to Product Teams
Managing by outcomes is only as effective as the outcomes themselves. If we choose the wrong outcomes, we’ll still get the wrong results. When considering outcomes for specific teams, it helps to distinguish between business outcomes, product outcomes, and traction metrics (Torres, T., Continuous Discovery Habits, 2021):
- A business outcome measures how well the business is progressing.
- A product outcome measures how well the product is moving the business forward.
- A traction metric measures usage of a specific feature or workflow in the product.
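To make the distinction concrete, here is a minimal sketch. The three levels and the preference for a leading, product-level goal follow Torres; the specific metric names and examples are hypothetical, not from her book:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    level: str     # "business", "product", or "traction"
    lagging: bool  # lagging indicators measure results after the fact

# Hypothetical examples of each level:
revenue = Metric("quarterly revenue growth", level="business", lagging=True)
engagement = Metric("weekly teams completing the core workflow", level="product", lagging=False)
button_clicks = Metric("clicks on the new export button", level="traction", lagging=False)

# A good team-level goal is a product outcome: a leading indicator
# the product trio can influence directly through discovery work.
team_goal = engagement
assert team_goal.level == "product" and not team_goal.lagging
```

A traction metric like `button_clicks` is too narrow to be a team goal (it presupposes the solution), while `revenue` is too far removed for the trio to act on directly.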
If you read my last post on Bringing Business Impact and User Needs together with Jobs to be Done, you understand why I think we should add user or customer outcomes to this list and how product outcomes and customer outcomes are interconnected.
Business Outcomes
Business outcomes start with financial metrics (e.g., grow revenue, reduce costs), but they can also represent strategic initiatives (e.g., grow market share in a specific region, increase sales to a new customer segment). Many business outcomes, however, are lagging indicators. They measure something after it has happened (Torres, T., Continuous Discovery Habits, 2021).
For example, by the time a team can measure the impact of its product changes on churn, those customers have already left. Therefore, we want to identify leading indicators that predict the direction of the lagging indicator. Assigning a team a leading indicator is always better than assigning a lagging indicator (Torres, T., Continuous Discovery Habits, 2021).
User or Customer Outcomes
The challenge of arriving at a business definition for human needs starts with language. An agreed-on language is fundamental to success in any discipline, yet confusion has permeated product development because companies continue to define “requirements” as any kind of customer input: customer wants, needs, benefits, solutions, ideas, desires, demands, specifications, and so on. But really, those are all different types of inputs, none of which can be used predictably to ensure success (Ulwick, A. W., What customers want, 2005).
Clayton Christensen credits Ulwick and Richard Pedi of Gage Foods with the way of thinking about market structure used in the chapter “What Products Will Customers Want to Buy?” of The Innovator’s Solution, which he calls Jobs to be Done (JTBD) or “outcomes that customers are seeking.”
A customer job could be the tasks they are trying to perform and complete, the problems they are trying to solve, or the needs they are trying to satisfy (Osterwalder, A., Pigneur, Y., Papadakos, P., Bernarda, G., Papadakos, T., & Smith, A., Value proposition design, 2014).
Because they don’t mention solutions or technology, jobs should be as timeless and unchanging as possible. Ask yourself, “How would people have gotten the job done 50 years ago?” Strive to frame jobs in a way that makes them stable, even as technology changes. (Kalbach, J. Jobs to be Done Playbook, 2020).
Jobs to be Done (JTBD) is a new way to think about the innovation process. Three key tenets define this approach (Ulwick, A. W., What customers want, 2005):
- Customers buy products and services to help them get jobs done. In our study of new and existing markets, we find that customers (both people and companies) have “jobs” with functional dimensions that arise regularly and need to get done. When customers become aware of such a job, they look around for a product or service to help them get the job done. We know, for example, that people buy mowers so they can cut their lawns, and they buy insurance to limit their financial risks. Corn farmers buy corn seed, herbicides, pesticides, and fertilizers to help them grow corn. Carpenters buy circular saws to cut wood. Virtually all products and services are acquired to help get a job done.
- Customers use a set of metrics (performance measures) to judge how well a job is getting done and how a product performs. Just as companies use metrics to measure the output quality of a business process, customers use metrics to measure success in getting a job done. Customers have these metrics in their minds, but they seldom articulate them, and companies rarely understand them. We call these metrics the customers’ desired outcomes. They are the fundamental measures of performance inherent to the execution of a specific job. When cutting wood with a circular saw, carpenters may judge products for their ability to minimize the likelihood of losing sight of the cut line, the time it takes to adjust the depth of the blade, or the frequency of kickbacks. Only when all the metrics for a given job are satisfied are customers able to execute the job perfectly. Ironically, these metrics are overlooked in the customer-driven world because they are not revealed by listening to the “voice of the customer.”
- These customer metrics make possible the systematic and predictable creation of breakthrough products and services. With the proper inputs, companies dramatically improve their ability to execute all other downstream activities in the innovation process, including identifying opportunities for growth, segmenting markets, conducting competitive analysis, generating and evaluating ideas, communicating value to customers, and measuring customer satisfaction.
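Ulwick’s book pairs these desired outcomes with an “opportunity algorithm” for ranking them: opportunity = importance + max(importance − satisfaction, 0), with both ratings on a 0–10 scale. Here is a minimal sketch that reuses the circular-saw outcomes above; the scores are hypothetical:

```python
# Ulwick's opportunity algorithm (What Customers Want, 2005):
# important but poorly satisfied outcomes rise to the top.
def opportunity(importance: float, satisfaction: float) -> float:
    # The satisfaction gap is floored at zero so over-served
    # outcomes don't reduce the score below the importance rating.
    return importance + max(importance - satisfaction, 0)

desired_outcomes = [
    # (outcome statement, importance, satisfaction) - hypothetical scores
    ("Minimize the likelihood of losing sight of the cut line", 8.6, 4.2),
    ("Minimize the time it takes to adjust the depth of the blade", 7.1, 6.8),
    ("Minimize the frequency of kickbacks", 9.2, 5.0),
]

ranked = sorted(desired_outcomes, key=lambda o: opportunity(o[1], o[2]), reverse=True)
for statement, imp, sat in ranked:
    print(f"{opportunity(imp, sat):.1f}  {statement}")
```

With these example scores, “Minimize the frequency of kickbacks” ranks first: it is both highly important and poorly satisfied.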
At its core, the concept of JTBD is a straightforward focus on people’s objectives independent of the means used to accomplish them. Through this lens, JTBD offers a structured way of understanding customer needs, helping to predict better how customers might act in the future (Kalbach, J. Jobs to be Done Playbook, 2020).
Jobs to be Done as a Common Strategy Vocabulary for Managing by Outcomes
From that perspective, there are a few ways that my colleagues and I have been taking full advantage of Jobs to be Done to create a shared understanding of product outcomes and customer outcomes:
- Discuss problems instead of solutions: because Jobs to be Done describe customer and/or user needs independent of solutions, they are — directly or indirectly — a great way to help create the mindset (whose absence in product teams designers keep complaining about) of understanding the problem space before jumping to solutions.
- Provide clarity and scope for product discovery activities: because uncovering Jobs to be Done requires clarity about who the job performers are that we need to focus our research on, and what those job performers are trying to get done, it becomes much easier to prepare for research, especially when recruiting research participants and deciding which problems to probe.
- Provide a great way to synthesize discovery findings: if you know me, you know that my design and research philosophy is to get everyone involved in processing research data (instead of writing long reports for stakeholders to read). But even when you process the research data together with the team, Jobs to be Done are great for synthesizing research outputs (e.g., user needs, goals, success criteria) in an easy-to-consume format — for example, with Job Stories (coming up next)!
In another post, I described how I’ve been coaching Agile Teams (with the help of colleagues Ritesh Chopra, Kevin Simmons, and Rebecca Thieme-Reagan and the incredible facilitation skills of Martina William and Anton Fischer) to connect the dots between Product Vision, Product and Customer Outcomes, Roadmap Planning and Backlog Planning through combining Storytelling, Job Maps and User Story Mapping.
Product Outcomes
If you read my last post on Bringing Business Impact and User Needs together with Jobs to be Done, you understand why I think we should add user or customer outcomes to this list, and how product outcomes and customer outcomes are interconnected through Jobs-to-be-Done (JTBD).
As a general rule, product trios will progress more on a product outcome than a business outcome. Remember, product outcomes measure how well the product moves the business forward. By definition, a product outcome is within the product trio’s span of control. On the other hand, business outcomes often require coordination across many business functions. (Torres, T., Continuous Discovery Habits, 2021).
Bringing Business Impact and User Needs together with Jobs to be Done (JTBD)
Learn how Jobs to be Done (JTBD) work as a great “exchange” currency to facilitate strategy discussions around value between designers, business stakeholders and technology people that allows for managing by outcomes (Photo by Blue Bird on Pexels.com)
Outcomes and Metrics
Key performance indicators (KPIs) are metrics that measure your product’s performance. They help you understand whether the product meets its business goals and whether the product strategy is working. Without KPIs, you end up guessing how well your product is performing. You may have a hunch or intuition, but how can you tell it’s right? Using KPIs and collecting the right data helps you balance opinions, beliefs, and gut feelings with empirical evidence, which increases the chances of making the right decisions and providing a successful product (Pichler, R., Strategize: Product strategy and product roadmap practices for the digital age, 2016).
When we assign traction metrics to product trios, we risk painting them into a corner by limiting the types of decisions they can make. Product outcomes generally give product trios far more latitude to explore and enable them to make the decisions necessary to ultimately drive business outcomes. However, there are two instances in which it is appropriate to assign traction metrics to your team (Torres, T., Continuous Discovery Habits, 2021):
- Assign traction metrics to more junior product trios. Improving a traction metric is more of an optimization challenge than a wide-open discovery challenge and is a great way for a junior team to get some experience with discovery methods before giving them more responsibility. For your more mature teams, however, stick with product outcomes.
- If you have a mature product and you have a traction metric that you know is critical to your company’s success, it makes sense to assign this traction metric to an optimization team. If the broader discovery questions have already been answered, then it’s perfectly fine to assign a traction metric to a team. The key is to use traction metrics only when you are optimizing a solution and not when the intent is to discover new solutions. In those instances, a product outcome is a better fit.
When you’re deciding which traction metrics the product team needs to track, here are a few things to keep in mind (Pichler, R., Strategize, 2016):
- Avoid vanity metrics: measures that make your product look good but don’t add value.
- Don’t measure everything that can be measured, and don’t blindly trust an analytics tool to collect the right data. Instead, use the business goals to choose a small number of metrics that truly help you understand how your product performs. Otherwise, you risk wasting time and effort analyzing data that provides little or no value.
- Be aware that some metrics are sensitive to the product life cycle. For example, you usually cannot measure profit before your product enters the growth stage. Tracking adoption rates and referrals is very useful in the introduction and growth stages but less so in the maturity and decline stages.
Metrics and OKRs
Identifying business objectives that tie to your vision is critical to making that vision a reality. It’s great to say, “we imagine a world where instant teleportation from one location to another makes travel effortless.” However, suppose you don’t have specific objectives to hit along the way. In that case, that vision will be challenging to implement because there will be too many different product, technology, and business directions to take to get there (Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., Product Roadmaps Relaunched, 2017).
As I mentioned in a previous article, here is another advantage of using outcomes in the form of Jobs to be Done to frame our value proposition, especially with regard to product vision: because the product vision communicates why you are building something and what the value proposition is for the customer, focusing on the customer outcomes that deliver the greatest value lets us confidently push our vision horizon far out, knowing that the outcomes our vision addresses are stable over time.
There are a few ways of helping teams measure their success through outcomes, including John Doerr’s Management by Objectives using Objectives and Key Results (Doerr, J., Measure what matters, 2018).
The structure of OKRs is simple. You set a high-level inspiring goal like “Get real traction for our app.” This is your Objective. You then define 3-5 measures to tell you if you have succeeded. “Traction” might be measured in terms of users, revenue, conversion, or even renewals. These specific measures are your Key Results and they will depend on your particular company, your product, and what you and your team mean when you say “traction.” (McCarthy, B., How should product teams use OKRs?, 2019):
- Objective: Get real traction for our app
- Key Result 1: Increase paying customers to x per month
- Key Result 2: Increase monthly active users (MAU) by y%
- Key Result 3: Increase revenue to $z MRR
A team (or a whole organization) can have 3-5 top-level OKRs that together reflect the most important outcomes the company is pursuing. In larger organizations, OKRs may “cascade” from these top few down to child OKRs that divide up the focus across teams and levels of hierarchy. Key Result 1 from the above Objective could become a Parent Objective with its own KR children below (McCarthy, B., How should product teams use OKRs?, 2019):
- Objective: Increase paying customers to x per month
- Key Result 1: Increase self-sign-up to x per month
- Key Result 2: Increase free-trial conversion by y%
- Key Result 3: Reduce churn to < z% per month
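The shape of an OKR, and the way a key result can cascade into a child objective, can be sketched as a small data structure. This is an illustrative model only, with hypothetical targets standing in for the “x”, “y”, and “z” placeholders above:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float
    current: float = 0.0

    def progress(self) -> float:
        """Fraction of the target achieved, capped at 1.0."""
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    description: str
    key_results: list = field(default_factory=list)

    def progress(self) -> float:
        """A simple scoring convention: average of the key results' progress."""
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# Hypothetical numbers for the "traction" example above.
traction = Objective("Get real traction for our app", [
    KeyResult("Increase paying customers per month", target=500, current=350),
    KeyResult("Increase monthly active users (MAU)", target=10_000, current=9_000),
    KeyResult("Increase revenue (MRR, $)", target=50_000, current=20_000),
])

print(round(traction.progress(), 2))  # average of 0.7, 0.9, 0.4 -> 0.67
```

In a cascading setup, the first key result would itself become an `Objective` owned by a child team, with its own `KeyResult` children (self-sign-ups, free-trial conversion, churn).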
Objectives and Outcomes
Objectives and Key Results (OKRs) are a great way to pair your business objectives with success criteria. The premise of the OKR framework is that objectives are specific qualitative goals, and key results are quantitative measures of progress toward those objectives (Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., Product Roadmaps Relaunched, 2017).
Asking what is our product vision or team mission is also really helpful as a starting place for OKRs. When asked to write objectives, lots of executives default to “grow by x%” or “generate y revenue.” Then they generally run out of ideas for objectives or start quickly thinking in terms of deliverables like “ship the new thingie by Q1” or “launch the whackamole initiative.” (McCarthy, B., How should product teams use OKRs?, 2019).
Sure, you probably still have some growth metrics in mind, but you could start by measuring the improvement you are seeking in customers’ lives. Measuring the value you are generating as directly as you can is a great way to align your team on what matters (McCarthy, B., How should product teams use OKRs?, 2019).
Key Results and Outcomes
There are two fundamental principles that allow teams to switch to a managing-by-outcomes mindset (Cagan, M., Inspired: How to create tech products customers love, 2017):
- Never tell people how to do things. Tell them what to do, and they will surprise you with their ingenuity.
- Performance is measured by results. The idea here is that you can release all the features you want, but if it doesn’t solve the underlying business problem, you haven’t really solved anything.
The key here is — going back to Joshua Seiden’s “Outcomes” — to help the team connect the dots between the strategy and the outcome by asking the question, what are the measurable changes in human behavior that drive business results?
As I mentioned in a previous article, here is another advantage of communicating outcomes using Jobs to be Done: they become a common exchange currency between leadership, designers, product managers, and developers during managing-by-outcomes negotiations: jobs describe what users are trying to get done (the “why”s), which in turn provides a good framing to discuss objectives for the team (the “what”s of the solution, not the “how”s).
OKRs can quickly drive you to a narrow fixation on business results, unethical behavior, and short-term outcomes that eventually undermine your very business. But if your OKRs begin with an inspiring vision of how awesome it will be for your customers when you succeed, then you can break that down into whatever is required to achieve that vision or carry out that mission (McCarthy, B., How should product teams use OKRs?, 2019).
OKRs at Scale
Many large organizations ask their teams to provide team-specific OKRs. If there are 200 teams in your company, managing 200 team-level goals can become overwhelming. Imagine if you had 500 teams! It can also cause one of the main anti-patterns of OKR implementation: hyperlocal optimization. One team may work hard to hit their key results but the work they’re doing inadvertently hampers another team’s progress toward their own goals (Gothelf, J., OKRs at scale, 2021).
One way to solve this is — instead of asking teams to provide team-level OKRs at first — to identify a set of teams who will be dedicated to the same goal and have them, as a team of teams, set their objectives and key results. As a unit, this team is now on the hook for these goals. Hyperlocal optimization is instantly communicated and dealt with because the measure of success is global for the entire group, not the individual components that make it up (Gothelf, J., OKRs at scale, 2021).
This doesn’t necessarily mean that each team doesn’t have its own team-specific goals to achieve. In fact, one of the first things to ask the component teams for is a set of key results to function as leading indicators of the group’s overall key result goals. In this way, each team is working towards a thing they can influence directly but they’re doing so with (Gothelf, J., Execs care about revenue. How do we get them to care about outcomes? 2017):
- A clear line of sight of the overall goal they’re trying to achieve
- A transparent view into how their work is impacting other teams in the group
- Awareness that if the key results they’ve chosen don’t have the impact they predicted on the group’s goals, they’ll need to adjust their goals
Understanding these relationships between Objectives and Key Results will be critical for what will be discussed next: assigning outcomes to product teams.
What makes a Good Metric?
Here are some rules of thumb for what makes a good metric — a number that will drive the changes you’re looking for (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):
- A good metric is comparative. Being able to compare a metric to other time periods, groups of users, or competitors helps you understand which way things are moving. “Increased conversion from last week” is more meaningful than “2% conversion.”
- A good metric is understandable. If people can’t remember it and discuss it, it’s much harder to turn a change in the data into a change in the culture.
- A good metric is a ratio or a rate. Accountants and financial analysts have several ratios they look at to understand, at a glance, the fundamental health of a company. You need some, too.
- A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the metric?
- “Accounting” metrics like daily sales revenue, when entered into your spreadsheet, need to make your predictions more accurate. These metrics form the basis of Lean Startup’s innovation accounting, showing you how close you are to an ideal model and whether your actual results are converging on your business plan.
- “Experimental” metrics (like the results of a test) help you optimize the product, pricing, or market. Changes in these metrics will significantly change your behavior.
There are several reasons ratios tend to be the best metrics (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):
- Ratios are easier to act on. Think about driving a car. Distance traveled is informational. But speed–distance per hour–is something you can act on, because it tells you about your current state, and whether you need to go faster or slower to get to your destination on time.
- Ratios are inherently comparative. If you compare a daily metric to the same metric over a month, you’ll see whether you’re looking at a sudden spike or a long-term trend. In a car, speed is one metric, but speed right now over average speed this hour shows you a lot about whether you’re accelerating or slowing down.
- Ratios are also good for comparing factors that are somehow opposed, or for which there’s an inherent tension. In a car, this might be the distance covered divided by traffic tickets. The faster you drive, the more distance you cover–but the more tickets you get. This ratio might suggest whether or not you should be breaking the speed limit.
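The comparative power of ratios can be shown with a quick sketch. The visitor and signup numbers below are hypothetical, invented for illustration:

```python
# Raw counts vs. a ratio: two hypothetical weeks of funnel data.
weeks = {
    "week 1": {"visitors": 4_000, "signups": 80},
    "week 2": {"visitors": 9_000, "signups": 135},
}

for label, data in weeks.items():
    conversion = data["signups"] / data["visitors"]
    print(f"{label}: {data['signups']} signups, {conversion:.1%} conversion")

# Week 2 has more signups (135 vs. 80), but conversion actually fell
# from 2.0% to 1.5%: the ratio reveals what the raw count hides.
```

The raw count ("more signups!") and the ratio ("worse conversion") tell opposite stories about the same data, which is exactly why a ratio is the more actionable metric.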
Leading versus Lagging Metrics
Both leading and lagging metrics are useful, but they serve different purposes (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):
- A leading metric (sometimes called a leading indicator) tries to predict the future. For example, the current number of prospects in your sales funnel gives you a sense of how many new customers you’ll acquire in the future. If the current number of prospects is very small, you’re not likely to add many new customers. You can increase the number of prospects and expect an increase in new customers.
- A lagging metric, such as churn (which is the number of customers who leave in a given time period) gives you an indication that there’s a problem–but by the time you’re able to collect the data and identify the problem, it’s too late. The customers who churned out aren’t coming back. That doesn’t mean you can’t act on a lagging metric (i.e., work to improve churn and then measure it again), but it’s akin to closing the barn door after the horses have left. New horses won’t leave, but you’ve already lost a few.
In some cases, a lagging metric for one group within a company is a leading metric for another. For example (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):
- The number of quarterly bookings is a lagging metric for salespeople (the contracts are signed already)
- For the finance department (that’s focused on collecting payment), quarterly bookings is a leading indicator of expected revenue (since the revenue hasn’t yet been realized).
Be aware that indicators only make sense in the context of when they are captured. For example, retention is a lagging indicator, which is impossible to act on immediately: it will be months before you have solid data to show that people stayed with you. We must also measure leading indicators like activation, happiness, and engagement. Leading indicators tell us whether we’re on our way to achieving lagging indicators like retention. To determine the leading indicators for retention, you can qualify what keeps people retained (for example, happiness and usage of the product). The success metrics we set around options are leading indicators of the outcomes we expect on our initiatives, because options are strategies on a shorter time scale (Perri, M., Escaping the build trap, 2019).
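The leading/lagging distinction can be made concrete with two toy functions. The pipeline figures and win rate here are hypothetical, chosen only to illustrate the difference in when each metric is actionable:

```python
def expected_new_customers(prospects_in_funnel: int, historical_win_rate: float) -> float:
    """Leading indicator: project future customers from today's pipeline."""
    return prospects_in_funnel * historical_win_rate

def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Lagging indicator: by the time you can compute it, the customers are gone."""
    return customers_lost / customers_at_start

# Hypothetical figures for illustration.
print(expected_new_customers(200, 0.15))  # 30.0 -- actionable now: grow the funnel
print(churn_rate(1_000, 50))              # 0.05 -- reports the past; act for next period
```

The leading number can still be influenced this quarter (add prospects, improve the win rate); the lagging number can only inform what you do for the *next* cohort.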
Anti-patterns and Bad Metrics
When managing by outcomes, there are a couple of types of anti-patterns to avoid:
- Measuring Activities instead of Outcomes: Activities are bad measures because you can have good performance against them and still have unhappy customers who are not referring your product or willing to buy more.
- Lack of Focus: too many outcomes lead teams to spread themselves too thin; when teams keep ping-ponging from one outcome to another, they never reap the benefit of their learning curve; focusing on one metric at the cost of all else can quickly derail a team and company.
Measuring Activities Instead of Outcomes
The right measures are so important. If you can get it right, you can achieve the holy grail of being confident about progress without getting overly involved in tracking detail. But just like there are end goals masquerading as good strategy, there are often bad measures standing in for truly meaningful ones. And probably the worst kind of measures are activities and details (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
These are bad measures because they only relate to process steps and activities (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017):
- Number of closed customer calls is a process step (an activity). It does not convey a measure of happy customers (an actual outcome), only that the call was closed.
- Number of problems fixed is, again, an activity. It does not offer any qualitative view or insight as to whether those were resolutions for key problems that important customers cared about. The most important issues from the most important customers are the important things you want to measure. Fixing any one issue for any customer is just an activity.
- Speed to fix it is another measure of a process step, not an outcome measure. It does not offer any insight into how effective the fix was, or if the fix made the customer happy.
There are three common problems with measuring activities, process steps, or details (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017):
- Measuring Process Steps, Not Outcomes: once a leader defines the outcome, the group needs to manage all the details, activities, and process steps to make it come true. Once that outcome is achieved, it doesn’t matter what needed fixing; as long as the outcome is true, the leader never needs to know all the details. Another important implication: we don’t have to drag details up and down the organization to get good insight.
- Hiding in Complexity: organizations get so busy measuring too many things at a detailed level that they have no insight at all into what the important things driving the business even are. When you get mired in detailed measures of activities and process steps, it’s tough to say how bad is bad, or how good is good. Measuring too many details obscures knowing the thing you truly need to know: What is the fundamental outcome you need, and are you getting it?
- Measuring the Paper, Not the Reality: having too many reviews and checklists creates a tendency to audit records “on paper,” and you can lose track of what is actually happening in the real world. In my earlier example, all the paper reporting on “number of customer calls closed” could be great, and yet the customer can still be angry. We were measuring the details about the product basics and the process steps of taking and closing out calls. It all looked good on paper (we passed the audit), but we were not even looking for the limping cows. The customers were hurting. It could have been for one or many reasons, but we didn’t know because we didn’t look. We didn’t ask. We didn’t listen. We were too satisfied with our measure of process activities and details on paper.
Many times we select bad measures simply because they are the easiest thing to measure. We placate ourselves with the fact that we are measuring something. But, in fact, we may be doing more harm than good (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
Lack of Focus
When setting product outcomes, avoid these common anti-patterns for managing by outcomes discussions (Torres, T., Continuous Discovery Habits, 2021):
- Pursuing too many outcomes at once. Most of us are overly optimistic about what we can achieve in a short time. No matter how hard we work, our companies will always ask more of us. Put these two together, and we often see product trios pursuing multiple outcomes at once. What happens when we do this is that we spread ourselves too thin. We make incremental progress (at best) on some of our outcomes but rarely have a big impact on any of our outcomes. Most teams will have more of an impact by focusing on one outcome at a time.
- Ping-ponging from one outcome to another. Because many businesses have developed fire-fighting cultures—where every customer complaint is treated like a crisis—it’s common for product trios to ping-pong from one outcome to the next, quarter to quarter. However, you’ve already learned that learning how to impact a new outcome takes time. When we ping-pong from outcome to outcome, we never reap the benefits of this learning curve. Instead, set an outcome for your team, and focus on it for a few quarters. You’ll be amazed at how much impact you have in the second and third quarters after you’ve had some time to learn and explore.
- Setting individual outcomes instead of product-trio outcomes. Because product managers, designers, and software engineers typically report up to their respective departments, it’s not uncommon for a product trio to get pulled in three different directions, with each member tasked with a different goal. Perhaps the product manager is tasked with a business outcome, the designer is tasked with a usability outcome, and the engineer is tasked with a technical-performance outcome. This is most common in companies that tie outcomes to compensation. However, it has a detrimental effect. The goal is for the product trio to collaborate to achieve product outcomes that drive business outcomes. This isn’t possible if each member is focused on their own goal. Instead of setting individual outcomes, set team outcomes.
- Focusing on one outcome to the detriment of all else. In addition to your primary outcome, a team needs to monitor health metrics to ensure they aren’t causing detrimental effects elsewhere. For example, customer-acquisition goals are often paired with customer-satisfaction metrics to ensure that we aren’t acquiring unhappy customers. To be clear, this doesn’t mean one team is focused on both acquisition and satisfaction at the same time. It means their goal is to increase acquisition without negatively impacting satisfaction.
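The pairing of a primary outcome with a health metric described in the last bullet amounts to a guardrail check. A minimal sketch, with illustrative thresholds that are not from any of the cited sources:

```python
def goal_met(acquisition_growth: float, satisfaction_delta: float,
             growth_target: float = 0.10, max_satisfaction_drop: float = 0.0) -> bool:
    """True only if the primary outcome improved AND the guardrail held.

    acquisition_growth: relative change in new customers (e.g. 0.12 = +12%)
    satisfaction_delta: change in the satisfaction score over the same period
    """
    return (acquisition_growth >= growth_target
            and satisfaction_delta >= max_satisfaction_drop)

print(goal_met(0.12, +0.5))   # True: grew acquisition without hurting satisfaction
print(goal_met(0.15, -2.0))   # False: growth achieved at satisfaction's expense
```

The team optimizes the first argument; the second simply must not degrade, which matches the "increase acquisition without negatively impacting satisfaction" framing above.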
8 Mistakes to Avoid when Defining Product Outcomes
According to Hope Gurion and Teresa Torres, there are some common mistakes you should avoid when defining product outcomes (Defining Product Outcomes: The 8 Most Common Mistakes You Should Avoid, 2022):
- Disguising Outputs as Outcomes: This happens when a team defines an outcome as something that is easily delivered, but doesn’t add value to the business. For example, delivering an Android app is an output, but it is not an outcome. An outcome should be something that benefits the business, such as increasing customer satisfaction or revenue.
- Not Connecting Outcomes to Business Value: This can happen when a team focuses on a goal that seems important, but doesn’t align with the company’s strategy or how it makes money. For example, a team might focus on increasing the number of users of their product, but if this doesn’t lead to more paying customers, it is not a valuable outcome.
- Giving Teams Outcomes That Are Outside Their Span of Control: This can happen when a team is assigned a goal that relies on other departments or external factors for completion. For example, a product team might be assigned a goal of increasing customer satisfaction, but if customer satisfaction is also affected by the quality of customer support, then the product team cannot control the outcome on its own.
- Hyper-focusing on a Traction Metric: This can happen when a team focuses on a metric that shows user engagement but may not reflect true customer value. For example, a team might focus on the number of daily active users (DAUs) of their product, but if DAUs don’t translate into paying customers, then it is not a valuable outcome.
- Creating Too Many Dependencies Across Teams: This can happen when a team is assigned a goal that requires multiple teams to collaborate without clear direction or ownership. For example, a product team might be assigned a goal of increasing customer satisfaction, but if customer satisfaction is also affected by the quality of customer support and the sales process, then the product team cannot achieve the goal on its own.
- Measuring Actions Instead of the Value of Those Actions: This can happen when a team focuses on user actions, such as applying for a job, instead of the desired outcome, such as getting hired. For example, a team might focus on the number of users who apply for a job on their job board. However, this is not a valuable outcome if the number of users who get hired does not increase.
- Setting Sentiment Outcomes Without Any Further Direction: Customer satisfaction and customer sentiment are important, but when we have such a broad sentiment-based metric, it can be very challenging for teams to please everybody all the time, which is kind of where you end up when you just have a sentiment metric.
The Benefits of Over-Focusing
As any project manager can tell you, metrics and data can be overwhelming. Between product data, KPIs, performance metrics, and user research, there are a lot of numbers to review, analyze, and manage on a regular basis. So — to avoid the Hiding in Complexity Anti-pattern and help teams stay focused — you should think of the minimal set of measures that helps you evaluate whether the strategy is valid. If the strategy moves the metric, you will know you are on the right path. If you fail to move the metric, move on to the next idea.
There are a couple of approaches to determine the minimal set of measures that will give focus: Control Points and North Star Metrics (or The One Metric that Matters).
Control Points
Think about your business. For each project, what are the right control points to measure? What are those few things that, if they turn out right, mean everything turns out right? It’s important to pick something that is at a higher level than a detail or process step — e.g., how many calls did we make? — but is also not a big, vaguely defined end goal — e.g., more revenue (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
Imagine your goal is to improve the capability of the customer service reps in your organization, so you put them all through training. If you then have a success measure of “# of customer service reps who have gone through training”, that is a measure only of the activity or the process step — that they have gone through the training. It tells you literally nothing about the outcome — whether or not they have become better at their jobs. You would measure, “Did our service reps actually get better in their jobs in a way that is meaningful to our customers?” Once you get the hang of it, you can create truly meaningful measures that will move your business forward (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
Which brings us back full circle to Joshua Seiden’s definition of outcomes: the measurable changes in human behavior that drive business results.
Sometimes this approach of defining control points leads you to an anecdotal metric versus a hard measure. It’s a mistake to dismiss an anecdotal measure because it is not a hard data measure. Control points, by definition, are a more broadly defined outcome than a detail or a process step. If you’ve got a good control point defined, ask yourself what would the genuine measure of success be. If it turns out to be a description of how something is working, that’s okay. Go with it (Azzarello, P., “Control Points” in Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls, 2017).
North Star Metrics
A north star metric is a key performance indicator (KPI) that you use to measure the progress of your business. It has one purpose: to keep you focused on what’s important. A metric shouldn’t be something obscure or abstract, like “more customers” or “higher engagement.” Those goals can be helpful and can be used as input into your north star metric, but they don’t make great KPIs themselves because they don’t provide any information about how well you’re meeting them (Gadvi, V., How to identify your North Star Metric, 2022).
Your north star metric is like your compass. It will guide you towards a destination and keep you focused on what matters most. However, it’s not just about figuring out what it is; you must also know how to use it. You can use the framework below to identify your north star metric and put it into practice to help guide your company through any changes or challenges that might come along the way — because they will (Gadvi, V., How to identify your North Star Metric, 2022).
Let’s look at four reasons why you should use a North Star metric, or “the One Metric That Matters” (OMTM) (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):
- It answers the most important question you have. At any given time, you’ll be trying to answer a hundred different questions and juggling a million things. You need to identify the riskiest areas of your business as quickly as possible, and that’s where the most important question lies. When you know what the right question is, you’ll know what metric to track in order to answer that question. That’s the OMTM.
- It forces you to draw a line in the sand and have clear goals. After you’ve identified the key problem on which you want to focus, you need to set goals. You need a way of defining success.
- It focuses the entire company. Avinash Kaushik has a name for trying to report too many things: data puking. Nobody likes puke. Use the OMTM as a way of focusing your entire company. Display your OMTM prominently through web dashboards, on TV screens, or in regular emails.
- It inspires a culture of experimentation. By now you should appreciate the importance of experimentation. It’s critical to move through the build-measure-learn cycle as quickly and as frequently as possible. To succeed at that, you need to actively encourage experimentation. It will lead to small-f failures, but you can’t punish that. Quite the opposite: failure that comes from planned, methodical testing is simply how you learn. It moves things forward in the end. It’s how you avoid the big-F failure. Everyone in your organization should be inspired and encouraged to experiment. When everyone rallies around the One Metric That Matters and experiments independently to improve it, it becomes a powerful force.
Product managers working in established companies have this figured out, but if you’re a founding product manager or an entrepreneur, here’s what it means for you. The key to picking the right North Star / OMTM metric is to find the one that appropriately aligns with your business model (check the table below). So, let’s say you were the founder of an online store selling vegan products. Your North Star Metric would be Average Order Value – defined as the total amount spent per order over a specific period. It is calculated using the following formula (Gadvi, V., How to identify your North Star Metric, 2022):
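The formula graphic from the original post isn’t reproduced here, but by the standard definition AOV is simply total revenue divided by the number of orders over the period. A minimal sketch (the figures are illustrative):

```python
def average_order_value(total_revenue: float, number_of_orders: int) -> float:
    """AOV = total amount spent / number of orders over a specific period."""
    if number_of_orders == 0:
        raise ValueError("AOV is undefined when there are no orders")
    return total_revenue / number_of_orders

# Illustrative month for the hypothetical vegan store: $12,500 across 250 orders
print(average_order_value(12_500, 250))  # → 50.0
```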
| Business Model | Example | North Star Metrics |
| --- | --- | --- |
| User Generated Content + Ads | Facebook, Quora, Instagram, YouTube | Monthly Active Users (MAU); Time on Site (ToS) |
| Freemium | Spotify, Mobile Games, Tinder | Monthly Active Users (MAU); % who upgrade to paid |
| Enterprise SaaS | Slack, Asana | Monthly Active Users (MAU); % who upgrade to paid |
| 2-sided marketplace | Airbnb, Uber | Monthly active riders/drivers; monthly active buyers/sellers |
| E-commerce | Amazon, eBay, Flipkart | Average Order Value (AOV); basket size |
Your north star metric should also be accessible for all your team members to understand and communicate, even if they don’t work in data science or analytics. Having a clear north star metric helps everyone in the organization stay aligned around what matters most when making decisions about new features or products — which will ultimately make them more successful by bringing them closer to their users’ needs (Gadvi, V., How to identify your North Star Metric, 2022).
Each strategy we had at Netflix (from our personalization strategy to our theory that a more straightforward experience would improve retention) had a very specific metric that helped us evaluate whether the strategy was valid. If the strategy moved the metric, we knew we were on the right path. If we failed to move the metric, we moved on to the next idea. Identifying these metrics took a lot of the politics and ambiguity out of which strategies were succeeding or not.
Gibson Biddle in Solving Product (Garbugli, É., 2020)
Metrics, Goals, and Moving Targets
When picking a goal early on, you’re drawing a line in the sand, not carving it in stone. You’re chasing a moving target because you really don’t know how to define success (Croll, A., & Yoskovitz, B. Lean Analytics, 2013).
You might assume your product has to be used daily to succeed, only to find out that’s not so. In these situations, it’s reasonable to update your metrics accordingly, provided that you’re able to prove the value created. Here are some best practices when picking metrics (Croll, A., & Yoskovitz, B. Lean Analytics, 2013):
- Know your customer. There’s no substitute for engaging with customers and users directly. All the numbers in the world can’t explain why something is happening. Pick up the phone right now and call a customer, even one who’s disengaged.
- Make early assumptions and set targets for what you think success looks like, but don’t experiment yourself into oblivion. Lower the bar if necessary, but not for the sake of getting over it: that’s just cheating.
- Use qualitative data to understand what value you’re creating and adjust only if the new line in the sand reflects how customers (in specific segments) are using your product.
Managing by Outcomes at the Right Level of Altitude
As mentioned earlier, business outcomes often require coordination across many business functions. Coordination isn’t bad. In fact, most of the work that we do will require coordination across teams. However, we can increase the accountability of each team by assigning a metric that is relevant to their own work (Torres, T., Continuous Discovery Habits, 2021).
Coordination for Managing by Outcomes
As an example of coordination, we might ask the product team to increase the number of dogs who like the food (something within the product team’s span of control). In contrast, we might ask the marketing team to increase the pricing transparency after the trial ends, and we might ask the customer support team to decrease their average response times. All three groups contribute to the business outcome of increasing customer retention, but each is doing so in the way that they can best contribute (Torres, T., Continuous Discovery Habits, 2021).
When multiple teams are assigned the same outcome, it’s easy to shift blame for lack of progress (Torres, T., Continuous Discovery Habits, 2021).
Exploration for Managing by Outcomes
When setting product outcomes, we want to make sure that we are giving the product trio enough latitude to explore. This is where the distinction between product outcomes and traction metrics can be helpful. It’s also a key delineation between an outcome mindset and an output mindset (Torres, T., Continuous Discovery Habits, 2021).
Experimentation is at the heart of what software developers call agile development. Rather than planning all activities up front and executing them sequentially, agile development emphasises running many experiments and learning from them (Mueller, S., & Dhar, J., The decision maker’s playbook, 2019).
It takes a certain level of maturity to run effective experiments. To avoid shipping experiments for the sake of shipping experiments, teams need to focus on delivering outcomes. They also need to be willing to embrace failure to make progress (Garbugli, É., Solving Product, 2020).
For a learning culture to thrive, your teams must feel safe to experiment. Experiments are how we learn, but experiments — by nature — fail frequently. In a good experiment, you learn as much from failure as from success. If failure is stigmatised, teams will take few risks (Gothelf, J., & Seiden, J., Sense and respond. 2017).
On average, 80% of experiments fail to deliver the expected outcomes, but with the right method, 100% of experiments can help you learn and progress (Garbugli, É., Solving Product, 2020).
You should focus on one or two core goals at a time, aligning with your North Star metric or the AARRR steps that you’re focused on. Your goals should be big, your experiments small and nimble (Garbugli, É., Solving Product, 2020).
Teams will be more willing to experiment if they feel they are not being measured by the delivery of hard requirements, but appreciated by achieving great outcomes that create value.
Psychological Safety, Experimentation and Permission to Fail
Most change efforts fail, even when experienced people are involved, and even when the environment is relatively trusting and safe. We should approach improvement like we approach product—using thoughtful experiments and disciplined, intentional learning (Cutler, J. Making things better with enabling constraints, 2022).
Permission to Fail and Sandboxing
Sandboxing is a way to reduce the risk of experimentation. The idea is to create a set of procedures, rules, and constraints that your organization can live with and within which failure is acceptable. You will also need cultural permission to experiment. This means that your progress will not be linear and predictable and that you should not be judged by your delivery rate (the amount of stuff you ship) but by your learning rate, and by your overall progress towards strategic goals — in other words, by the extent to which you achieve the outcomes in question. A sandbox creates positive effects for both leaders and the team (Gothelf, J., & Seiden, J., Sense and respond. 2017):
- For leaders: there is a legitimate fear that their people will get creative in some way that will cause trouble, and for which a leader will be held responsible. Creating clear guidelines within which people can operate can ease that fear.
- For teams: the fear is about crossing some unstated line. If leaders make the lines clear, it creates space for creativity.
Teams can try to “be agile” as much as they like, but if their direction is not constructed correctly — if their freedom to act isn’t preserved, their goals are not defined correctly, and their constraints are not clearly understood — then there is little they will be able to do (Gothelf, J., & Seiden, J., Sense and respond, 2017)
Enabling Constraints
When thinking of creating a learning culture, it helps to focus on the collective behaviors of a system, and the “constraints” of that system inform and shape that behavior. Constraints shape a system by modifying its phase space (its range of possible actions) or the probability distribution (the likelihood) of events and movements within that space. Because constraints are both key actors and key indicators of a system, constraint mapping can be a highly productive first step in considering how to intervene (Juarrero, A., Dynamics in Action: Intentional Behavior as a Complex System, 1999)
Designing effective enabling constraints is an art. Many things feel intuitively correct but have potentially harmful consequences. For example (Cutler, J. Making things better with enabling constraints, 2022):
- In an effort to increase certainty about plans and commitments, the team undertakes a comprehensive annual planning effort. This feels good on the surface, but it forces premature convergence, encourages over-utilization of shared resources, and encourages big, inflexible projects.
- In an effort to centralize communication, the team adopts a single tool for documentation (a theoretically enabling constraint). This feels good on the surface—having documentation everywhere is painful—but since a large % of communication with external teams happens outside the central tool, you find a two or three (or more) tiered system of communication (e.g., executive communication happens in slides, not in the tool).
The trick, then, is designing effective enabling constraints. An additional layer to consider is trust, respect, empowerment, and psychological safety. Example: deadlines. In theory, deadlines can be enabling constraints. However, without empowerment and trust, deadlines become disabling: teams cut the wrong corners and optimize for low trust. These two points—the counterintuitive nature of constraints, and the trust/safety element—explain why most change efforts fail. Even high-trust environments can easily pick the wrong constraints, or try too much at once and put people in a state of change overload (Cutler, J. Making things better with enabling constraints, 2022).
No enabling constraint is guaranteed to work, but some are better than others. What should someone designing an enabling constraint look out for? (Cutler, J. Making things better with enabling constraints, 2022):
- It is easy to know if you are doing it or not. For example, asking everyone to use a single document repository is a bit vague. People WILL need to use other systems to document things. Do those count? What goes in it? What doesn’t? An alternative might be to run an experiment where the team commits to putting ONE document type in the centralized repository or tool. Put another way, it is within reach and achievable.
- It has an expiration date and is treated as an experiment. The best enabling constraints are treated as an experiment. The team commits to giving it an honest try for a period of time. The team is promised an opportunity to weigh in on the experiment, before agreeing to extend it.
- It helps people go through the motions. If you have a future state in mind, it helps to help people go through the motions a bit and try things out. In a safe way.
- The world doesn’t end if it “fails”. Sometimes things don’t go as planned. That’s normal. The best enabling constraints fail gracefully. They are safe-to-fail probes.
- Fast feedback potential. The best enabling constraints will provide fast feedback. Experiments that last forever, with no sense if they are helping/hurting, are dangerous (or at a minimum draining, and encourage people to just work around them).
Overcoming Fear of Failure with Premortems
The scientist and decision-making expert Gary Klein is a proponent of using “premortems”: doing a postmortem in advance to envision what a potential failure might look like so that you can then consider the possible reasons for that failure. To put the premortem into question form, you might ask: If we were to fail, what might be the reasons for that failure? Decision researchers say using premortems can temper excessive optimism and encourage a more realistic assessment of risk (Berger, W., The book of beautiful questions, 2019).
While you’re envisioning the possibility of failure, be sure to consider the opposite, as well, by asking: What if we succeed — what would that look like? Jonathan Fields points out that this question is important because it can help counter the negativity bias. Fields recommends visualizing, in detail, what would be likely to happen in a best-case scenario (more on that in the Importance of Vision). The reality may not live up to that, but that vision can provide an incentive strong enough to encourage taking a risk (Berger, W., The book of beautiful questions, 2019).
Questions you can use to help overcome the fear of failure (Berger, W., The book of beautiful questions, 2019):
- What would we try if we knew we could not fail? Start with this favorite Silicon Valley question to help identify bold possibilities.
- What is the worst that could happen? This may seem negative, but the question forces the team to confront hazy fears and consider them in a more specific way (which usually makes them less scary).
- If we did fail, what would be the likely causes? Try the premortem exercise I’ve mentioned earlier, listing some of the potential causes for failure. This should — at least — create a list of pitfalls for you to avoid.
- … and how would we recover from that failure? Just thinking about how we would pick up the pieces if we fail tends to lessen the fear of that possibility.
- What if we succeed — what would that look like? Now shift from the worst-case to the best-case scenario. Visualizing success breeds confidence — and provides motivation for moving forward.
- How can we take one small step into the breach? Consider whether there are “baby steps” that could lead up to taking a leap.
Blameless Postmortems
The blameless postmortem is one culture-building practice that organizations use to create permission to fail. This regularly occurring meeting provides an opportunity for the entire team to go through a recent time period (product release cycle, quarter, etc.) or to review a specific incident and honestly examine what went well, what could be improved, and what should not be continued. Often these postmortems are facilitated by someone outside the team to avoid any bias or conflict of interest. The motivation for this process is to (Gothelf, J., & Seiden, J., Sense and respond. 2017):
- Treat failures as learning opportunities: Think of this activity as a continuous improvement but applied to the way the team works rather than the product it’s working on. In order to learn from failures, you need to accurately assess what happened, why it happened, and how it can be prevented next time.
- Protect from blame: It would be simple to treat this inquiry as a hunt for the person responsible so that this person can be disciplined. But if this is the outcome of the inquiry, the people involved will not be motivated to share the truth about what happened. Instead, they will cover it up to avoid punishment. So, in order for people to learn, the blameless postmortem process must include an ironclad guarantee that they can speak without fear of punishment. And that guarantee must be upheld each and every time.
Facilitating Two-way Negotiations
I mentioned in a previous post that when the team engages in endless discussions around which customer/user problems to focus on, Jobs-to-be-Done becomes a unit of analysis that helps teams have facilitated discussions around finding ways to remove (or at least reduce) subjectivity while assessing value, especially when facilitating the two-way negotiations for managing by outcomes.
Facilitating Two-way Negotiations through Investment Discussions
I mentioned in a previous article that — when facilitating investment discussions — designers must engage with their business stakeholders to understand what objectives and unique positions they want their products to assume in the industry and their choices to achieve such objectives and positions.
As a result, designers will be better prepared to influence the business decisions that help create such an advantage and superior value to the competition.
That’s why it is important that designers engage with stakeholders early and often to make sure we’ve got the right framing of the problem space around the 3 vision-related questions (as per the Six Strategic Questions illustration above):
- What are our aspirations?
- What are our challenges?
- What will we focus on?
If you can answer the questions above by working with your stakeholders, all the discussions below will be much easier. In my experience, however, that’s not usually the case! Most stakeholders have a list of features in their minds, which — as I mentioned in a previous article — is not a cohesive strategy. So most of these investment discussions will start with asking good questions.
Here are seven questions you can ask yourself (and your team) before building a new feature (Croll, A., & Yoskovitz, B., Lean Analytics: Use Data to Build a Better Startup Faster, 2013):
- Why Will It Make Things Better? You can’t build a feature without having a reason for building it. In the Stickiness stage, your focus is retention. Look at your potential feature list and ask yourself, “Why do I think this will improve retention?” You’ll be tempted to copy what others are doing — using gamification to drive engagement (and, in turn, retention) — just because it looks like it’s working for the competition. Asking, “Why will it make things better?” forces you to write out (on paper!) a hypothesis. This naturally leads to a good experiment that will test that hypothesis. Feature experiments, if tied to a specific metric (such as retention), are usually straightforward: you believe feature X will improve retention by Y percent. The second part of that statement is as important as the first part; you need to draw that line in the sand.
- Can You Measure the Effect of the Feature? Feature experiments require that you measure the impact of the feature. That impact has to be quantifiable. Too often, features get added to a product without any quantifiable validation – a direct path toward scope creep and feature bloat. If you cannot quantify the impact of a new feature, you can’t assess its value, and you won’t know what to do with the feature over time: whether to leave it as is, iterate on it, or kill it.
- How Long Will the Feature Take to Build? Time is a precious resource you never get back. You have to compare the relative development time of each feature on your list. If something takes months to build, you need reasonable confidence that it will have a significant impact. Can you break it into smaller parts or test the inherent risk with a curated MVP or a prototype instead?
- Will the Feature Overcomplicate Things? Complexity kills products. It’s most apparent in the user experience of many web applications: they become so convoluted and confusing that users leave for a simpler alternative. “And” is the enemy of success. When discussing a feature with your team, pay attention to how it’s being described. “The feature will allow you to do this, and it’d be great if it did this other thing, and this other thing, and this other thing too.” Warning bells should be going off at this point. If you’re trying to justify a feature by saying it satisfies several needs a little bit, know that it’s almost always better to satisfy one need in an absolutely epic, remarkable way.
- How Much Risk Is There in This New Feature? Building new features always comes with some amount of risk. There’s technical risk related to how a feature may impact the code base. There’s user risk regarding how people might respond to the feature. There’s also the risk regarding how a feature drives future developments, potentially setting you on a path you don’t want to pursue. Each feature you add creates an emotional commitment to your development team and sometimes to your customers. Analytics helps break that bond so you can measure things honestly and make the best decisions possible with the most available information.
- How Innovative Is the New Feature? Not everything you do will be innovative. Most features aren’t innovative; they’re small tweaks to a product in the hope that the whole is more valuable than the individual parts. But consider innovation when prioritizing feature development; generally, the easiest things to do rarely have a big impact. You’re still in the Stickiness stage, trying to find the right product. Changing a submit button from red to blue may result in a good jump in signup conversions (a classic A/B test), but it’s probably not going to turn your business from a failure into a giant success; it’s also easy for others to copy. It’s better to make big bets, swing for the fences, try more radical experiments, and build more disruptive things, particularly since you have fewer user expectations than you will later on.
- What Do Users Say They Want? Your users are important. Their feedback is important. But relying on what they say is risky. Be careful about over-prioritizing based on user input alone. Users lie, and they don’t like hurting your feelings. Prioritizing feature development during an MVP isn’t an exact science. User actions speak louder than words. Aim for a genuinely testable hypothesis for every feature you build, and you’ll have a much better chance of quickly validating success or failure. Simply tracking how popular various features are within the application will reveal what’s working and what’s not. Looking at the features a user used before hitting “undo” or the back button will pinpoint the possible problems.
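The “line in the sand” from the first question can be made concrete in code. The sketch below is illustrative (the function names and numbers are my own, and a real evaluation would also need a statistical significance test): it checks whether an observed retention lift meets the relative lift you declared before running the experiment.

```python
def retention_rate(retained: int, total: int) -> float:
    """Fraction of users in a cohort still active after the period of interest."""
    return retained / total

def hypothesis_holds(control: tuple, variant: tuple, expected_lift_pct: float) -> bool:
    """The pre-declared line in the sand: 'feature X will improve
    retention by at least Y percent, relative to the control group'."""
    base = retention_rate(*control)
    observed = retention_rate(*variant)
    actual_lift_pct = (observed - base) / base * 100
    return actual_lift_pct >= expected_lift_pct

# Control cohort: 300 of 1,000 users retained; feature cohort: 360 of 1,000.
# Pre-declared hypothesis: the feature lifts retention by at least 10%.
print(hypothesis_holds((300, 1_000), (360, 1_000), 10))  # → True (20% lift)
```

Writing the threshold down before the experiment is what keeps the evaluation honest; moving it afterward is the “cheating” the Lean Analytics authors warn about.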
Facilitating Investment Discussions around Value
As I mentioned in a previous post, designers must become skilled facilitators that respond, prod, encourage, guide, coach, and teach as they guide individuals and groups to make decisions critical in the business world through effective processes. Few decisions are harder than deciding how to prioritize.
The mistake I’ve seen many designers make is to treat prioritization discussions as a zero-sum game: our user-centered design toolset may have focused too much on the needs of the user, at the expense of business needs and technological constraints.
To understand the risk and uncertainty of your idea you need to ask: “What are all the things that need to be true for this idea to work?” This will allow you to identify all three types of hypotheses underlying a business idea: desirability, feasibility, and viability (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020):
- Desirability (do they want this?) relates to the risk that the market a business is targeting is too small; that too few customers want the value proposition; or that the company can’t reach, acquire, and retain targeted customers.
- Feasibility (Can we do this?) relates to the risk that a business can’t manage, scale, or get access to key resources (technology, IP, brand, etc.). This isn’t just technical feasibility; we also need to look at the regulatory, policy, and governance constraints that could prevent you from making your solution a success.
- Viability (Should we do this?) relates to the risk that a business cannot generate more revenue than costs (revenue stream and cost stream). While customers may want your solution (desirable) and you can build it (feasible), perhaps there’s not enough of a market for it or people won’t pay enough for it.
Design strategists should help the team find objective ways to value design ideas/approaches/solutions and justify their investment from the desirability, feasibility, and viability perspectives.
Facilitating Investment Discussions in Strategy
Learn more about how to help teams with facilitating investment discussions by finding ways to reduce subjectivity when debating the value of ideas.
Facilitating Two-way Negotiations through Visual Thinking
Mark Dziersk is convinced that “design” and “strategy” traditionally reflect two disparate realms within the business world. He urges designers to communicate with those responsible for strategy by using their talent for visualization and storytelling — “languages” that can powerfully convey content in such areas as the DNA of the consumer experience, innovation options, and approaches to decision-making (Dziersk, M., “Visual Thinking: A leadership Strategy” in Building Design Strategy, Lockwood, T., & Walton, T., 2010).
The truth is that very few designers understand strategy, much less leverage it in their work. But the design world is trying and is making inroads. Dealing with ambiguity and converting it into a clearly focused design strategy is key, and it gives design thinking leverage in the post-dot-com, post-“distribution dictates direction” business world we live in (Dziersk, M., “Visual Thinking: A Leadership Strategy” in Building Design Strategy, Lockwood, T., & Walton, T., 2010).
Visualizing thought processes can help break down complex problems. It empowers teams and staff to build on one another’s ideas, fosters collaboration, jump-starts co-creation, and boosts innovation.
Furthermore, to really have an impact during discussions and decision points so that they’ll be remembered forever, capture what’s been said (at least some of it) visually (Van Der Pijl, et al. Design a better business, 2016).
If designers want to influence the strategic decisions that drive product vision forward, I’ve found that using alignment diagrams can be a great way to get teams to create a shared understanding of the problems we are trying to solve and the solutions that will address those problems, helping teams transition between problem space and solution space at the correct times.
Visual Thinking and Alignment Diagrams
Misalignment impacts the entire enterprise: teams lack a common purpose, solutions are built that are detached from reality, and strategy is short-sighted (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).
Alignment Diagrams coordinate insights from the outside world with the teams inside an organization who create products and services to meet market needs.
In other words, alignment diagrams or models serve as a hinge upon which we can pivot from the problem space to the solution space.
Jim Kalbach uses the term alignment diagram to refer to any map, diagram, or visualization that reveals both sides of value creation in a single overview. They are a category of diagrams that illustrates the interaction between people and organizations.
Here is — yet again — another advantage of using Jobs to be Done to facilitate the two-way negotiations that allow for Managing by Outcomes: because customers buy your “why,” not your “what,” Jobs to be Done provides an objective way for us to visualize how our organization’s core values align with those of our customers/users.
I find a few alignment diagrams particularly helpful for facilitating discussions around managing by outcomes.
Lean UX Canvas
Lean UX Canvas codifies the Lean UX process to help teams frame their work as a business problem to solve (rather than a solution to implement) and then dissect that business problem into its core assumptions. We then weave those assumptions into hypotheses. Finally, we design experiments to test our riskiest hypotheses (Gothelf, J., & Seiden, J., Lean UX: Applying lean principles to improve user experience, 2021).
Opportunity-Solution Tree
Many teams generate a lot of ideas when they go through a journey-mapping or experience-mapping exercise. There are so many opportunities for improving things for the customer that they quickly become overwhelmed by a mass of problems, solutions, needs, and ideas without much structure or priority (“Opportunity-Solution Tree” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017).
Opportunity-Solution Trees (OST) are a simple way of visually representing the paths you might take to reach a desired outcome (Torres, T., Continuous Discovery Habits, 2021):
- The root of the tree is your desired outcome—the business need that reflects how your team can create business value.
- Below the outcome is the opportunity space: the customer needs, pain points, and desires that, if addressed, will drive the desired outcome.
- Below the opportunity space is the solution space. This is where we’ll visually depict the solutions we are exploring.
- Below the solution space are assumption tests. This is how we’ll evaluate which solutions will help us best create customer value in a way that drives business value.
Opportunity solution trees have a number of benefits. They help product trios (Torres, T., Continuous Discovery Habits, 2021):
- Resolve the tension between business needs and customer needs
- Build and maintain a shared understanding of how they might reach their desired outcome
- Adopt a continuous mindset
- Unlock better decision-making
- Unlock faster learning cycles
- Build confidence in knowing what to do next
- Unlock simpler stakeholder management
Here are the four steps to creating an Opportunity Solution Tree (ProductPlan, “Opportunity Solution Tree”, 2022):
Step 1: Identify the desired outcome: Narrow your goal to a single metric you want to improve (e.g., revenue, customer satisfaction, retention, etc.).
Step 2: Recognize opportunities that emerge from generative research: Dig in deep to understand the needs and pain points of your customers. Keep in mind that pain points are opportunities!
Step 3: Be open to solutions from everywhere: The caveat, however, is that a potential solution must directly link to an opportunity—otherwise, it’s just a distraction from the primary goal of your OST.
Step 4: Experiment to evaluate and evolve your solutions: Now, it’s time to test a single solution with sets of experiments.
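The four levels above can be sketched as a small data structure. Here is a minimal illustration in Python; the class and example names are invented for this sketch, not taken from Torres’s book:

```python
# A minimal sketch of an Opportunity Solution Tree, following the four
# levels described above (outcome -> opportunities -> solutions ->
# assumption tests). All names here are illustrative.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AssumptionTest:
    description: str
    passed: Optional[bool] = None  # None means the test has not run yet

@dataclass
class Solution:
    idea: str
    tests: List[AssumptionTest] = field(default_factory=list)

@dataclass
class Opportunity:
    need_or_pain_point: str
    solutions: List[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    desired_outcome: str  # Step 1: a single metric, e.g. retention
    opportunities: List[Opportunity] = field(default_factory=list)

    def outline(self) -> str:
        """Render the tree as an indented outline for team discussion."""
        lines = [f"Outcome: {self.desired_outcome}"]
        for opp in self.opportunities:
            lines.append(f"  Opportunity: {opp.need_or_pain_point}")
            for sol in opp.solutions:
                lines.append(f"    Solution: {sol.idea}")
                for t in sol.tests:
                    status = "?" if t.passed is None else ("pass" if t.passed else "fail")
                    lines.append(f"      Test [{status}]: {t.description}")
        return "\n".join(lines)
```

Because every `Solution` can only exist under an `Opportunity`, the structure itself enforces Step 3’s caveat that a solution must link to an opportunity.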
Impact Mapping
Like highway maps that show towns and cities and the roads connecting them, Impact Maps lay out what we will build and how it connects to the ways we will assist the people who will use the solution. An impact map is a visualisation of scope and underlying assumptions, created collaboratively by senior technical people and business people. It’s a mind map grown during a discussion facilitated by answering four questions about the problem the team is confronting: WHY, WHO, HOW, and WHAT (Adzic, G., Impact Mapping, 2012).
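As a rough illustration, the WHY/WHO/HOW/WHAT hierarchy of an impact map can be captured in a nested structure. The goal, actors, impacts, and deliverables below are invented examples, not taken from Adzic’s book:

```python
# A minimal sketch of an impact map as a nested mapping:
# WHY (goal) -> WHO (actors) -> HOW (behavior changes) -> WHAT (deliverables).
# All example names are invented for illustration.
impact_map = {
    "why": "Grow active traders by 10% this quarter",   # goal
    "who": {                                            # actors
        "new retail customers": {
            "how": {                                    # impacts
                "open an account faster": {
                    "what": ["simplified sign-up form", "ID auto-verification"],
                },
            },
        },
    },
}

def deliverables(m):
    """Collect every WHAT item, so outputs can be traced back to the WHY."""
    found = []
    for actor in m["who"].values():
        for impact in actor["how"].values():
            found.extend(impact["what"])
    return found
```

Walking the structure from a deliverable back up to the root gives the team a ready answer to “why are we building this?”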
User Story Maps
User story mapping is a visual exercise that helps product managers and their development teams define the work that will create the most delightful user experience. User Story Mapping allows teams to create a dynamic outline of a representative user’s interactions with the product, evaluate which steps have the most benefit for the user, and prioritise what should be built next (Patton, J., User Story Mapping: Discover the whole story, build the right product, 2014).
Jeff Patton is one of the few who have been able to translate Agile into a user-centric practice. User Story Mapping is probably my favorite visualisation tool for creating shared understanding around product, users, and context, and it helps with prioritization discussions.
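To make the shape of a story map concrete, here is a minimal sketch: the backbone runs left to right, stories hang below each step in priority order, and a release is a thin horizontal slice across the whole journey. All activity and story names are invented for illustration:

```python
# A minimal sketch of a user story map. Each backbone step carries its
# stories ordered by priority (top = most important). Names are invented.
story_map = [
    ("Find a product",  ["search by keyword", "filter by price", "save favorites"]),
    ("Buy it",          ["checkout as guest", "pay by card", "apply coupon"]),
    ("Track the order", ["see order status", "get delivery notifications"]),
]

def release_slice(story_map, depth):
    """Take the top `depth` stories under every step: a walking skeleton
    covers the whole journey thinly rather than one activity deeply."""
    return [(step, stories[:depth]) for step, stories in story_map]
```

Slicing with `depth=1` yields a first release that touches every step of the journey, which is the point of mapping rather than stacking stories in a flat backlog.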
See also
Mental models are affinity diagrams of behaviors made from ethnographic data gathered from audience representatives. They give you a deep understanding of people’s motivations and thought processes, along with the emotional and philosophical landscape in which they operate (Young, I., Mental Models, 2008).
Service Blueprints are visual thinking artifacts that help to capture the big picture and interconnections. They are a way to plan projects and relate service design decisions to the original research insights. The blueprint is different from the service ecology in that it includes specific details about the elements, experiences, and delivery within the service (Polaine, A., Løvlie, L., & Reason, B., Service design: From insight to implementation, 2013).
Value Stream Mapping is a practical and highly effective way to learn to see and resolve disconnects, redundancies, and gaps in how work gets done (Martin, K., & Osterling, M., Value stream mapping, 2014).
A Strategy Canvas helps you compare how well competitors meet customer buying criteria or desired outcomes. To create your own strategy canvas, list the 10-12 most important functional desired outcomes, or buying criteria, on the x-axis. On the y-axis, list the 3-5 most common competitors (direct, indirect, alternative solutions, and multi-tool solutions) for the job (Garbugli, É., Solving Product, 2020).
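As a toy illustration of the canvas data (not code from Garbugli’s book), competitors can be scored against each buying criterion, which also makes it easy to spot where the largest gaps lie:

```python
# A minimal strategy-canvas sketch as a score table. Criteria (x-axis)
# and competitors (y-axis) are invented; scores rate how well each
# competitor satisfies each desired outcome on a 1-5 scale.
criteria = ["speed", "price", "ease of use", "integrations"]
competitors = {
    "Our product": [4, 3, 5, 2],
    "Incumbent":   [3, 2, 3, 5],
    "DIY tooling": [2, 5, 1, 3],
}

def biggest_gap(criteria, competitors, us="Our product"):
    """Find the criterion where we trail the best competitor the most."""
    gaps = {}
    for i, c in enumerate(criteria):
        best_other = max(scores[i] for name, scores in competitors.items() if name != us)
        gaps[c] = best_other - competitors[us][i]
    return max(gaps, key=gaps.get)
```

Plotting the same rows as line charts over the criteria gives the familiar strategy-canvas picture; the tabular form is enough for a prioritization discussion.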
Strategy, Facilitation and Visual Thinking
Learn more about some of the visual thinking techniques that can be drawn from design thinking, data analytics, system thinking, gamestorming, and lean start-up to help facilitate alignment discussions.
Creating Shared Understanding through Abstraction Laddering and Alignment Diagrams
As we mentioned in the previous section, Alignment Diagrams serve as a hinge that seamlessly bridges the gap between problem space and solution space. Employed in strategic discussions, these diagrams bring together designers, business stakeholders, and technology experts, fostering a shared understanding of objectives and creating a visual narrative of value creation.
Simultaneously, the introduction of Daniel Stillman’s Abstraction Laddering adds another layer to this collaborative foundation. Rooted in the philosophy that understanding shared goals is paramount for effective collaboration, Abstraction Laddering provides an interface for dissecting the layers of why, what, and how in goal-oriented discussions. It becomes a vital complement to Alignment Diagrams, providing a structured method to uncover deeper motives and intricacies in problem-solving.
While Jeff Patton’s User Story Maps, Karl Wiegers’ levels of granularity in requirements, and Jim Kalbach’s Job Hierarchy offer distinct frameworks, a striking similarity emerges when viewed through the lens of Daniel Stillman’s Abstraction Laddering:
- Across Patton’s, Wiegers’, and Kalbach’s models, the higher levels, such as goals, scenarios at the kite level, or aspirations, correspond to the “whys” of the user journey.
- As one descends to lower levels, like details at the fish level, or microjobs, the focus shifts to the “hows” or the specific steps and actions necessary for task completion.
- This alignment underscores a shared principle across these frameworks: the importance of understanding the overarching goals (whys) at higher levels and the detailed execution (hows) at lower levels to create comprehensive and user-centric solutions, ensuring the teams are solving the “right” problems.
When contextualizing these tools within the Jobs-to-be-Done framework, they collectively offer a pragmatic approach to generating alignment in problem-solving discussions. Alignment Diagrams aid in visually articulating the alignment of objectives, while Abstraction Laddering becomes the scaffolding for exploring the layers of motivation behind these objectives. In the subsequent section, we’ll explore how this contextualization sets the stage for a unified framework, harmonizing the collaborative power of visual tools with the nuanced understanding of user needs and motivations encapsulated in JTBD.
Combining Abstraction Laddering with User Story Mapping and JTBD
In my practice of facilitating strategy discussions, the integration of Jeff Patton’s User Story Mapping, the hierarchical insight from Jim Kalbach’s Jobs-to-be-Done (JTBD), and the abstraction laddering methodology advocated by Daniel Stillman manifests a holistic framework for comprehensive product development.
As Patton’s User Story Map progresses from the broad strokes of “activities or scenarios” to the refined details of “details or user stories,” Kalbach’s JTBD hierarchy complements this journey with “functional jobs,” “social jobs,” and “emotional jobs.” Aligning these with Stillman’s abstraction laddering, jobs seamlessly substitute for the “whys,” encapsulating the core motives and user needs, while User Stories become adept substitutes for the “hows,” translating those motives into tangible functionalities.
Stillman’s emphasis on shared goals, visualized through concentric circles and abstraction ladders, finds resonance in the collaborative aspects of User Story Mapping and the nuanced layers of JTBD, creating a unified framework that harmoniously blends the why, what, and how of product development. This synthesis provides a robust interface for stakeholders to collaboratively navigate the complexities of strategy and execution, ensuring a shared vision and clarity in goals, thus propelling the product development journey forward.
Roadmap Planning through Alignment Diagrams Step by Step
In the process of creating truly meaningful alignment diagrams, there are practical considerations that need to be taken into account. This section will explore the hands-on experience of developing job boards and conducting user story mapping workshops. This will involve examining the details of facilitating these sessions and providing step-by-step guidance on revealing jobs, aligning them with the customer journey, and mapping them to tasks. With insights drawn from real-world experiences, we will translate abstract ideas into tangible diagrams that represent tasks and user stories and serve as powerful tools for promoting shared understanding within your teams. Join us on this pragmatic exploration of how to turn theory into actionable frameworks.
Crafting Customer-Centric Roadmaps: A Jobs-to-be-Done Approach to Roadmap Planning and Better Backlog Grooming
Learn how using JTBD injects user-centricity into every roadmap planning and backlog grooming activity, allowing you to prioritize effectively and craft impactful product experiences.
Managing by Outcomes through Making Collaboration Possible
Managers can create the conditions for collaboration to flourish by making a few changes in the way teams are composed, managed, and organized. These changes are simple enough to describe but can be difficult to implement, because they require coordination with other departments. Indeed, without the support of senior leadership, these changes may prove quite difficult to achieve, as simple as they may seem.
Here is a list of key changes that enable sense and respond teams to collaborate effectively (Gothelf, J., & Seiden, J., Sense and respond, 2017):
- Creating autonomous, mission-based teams
- Using cross-functional teams
- Building dedicated teams
- Supporting New Workflows
- Managing Co-location
- Managing Remote Work
- Managing Outsourcing and Offshore Teams
- Holding Retrospectives
Creating autonomous, mission-based teams
The idea is that teams should not be ordered to create a specific output but instead should be asked to achieve a certain outcome, for example, “Figure out how to launch our new trading service.” To achieve their mission, teams need to decide what to make and have the resources they need to make it, the ability to release it, and the permission to learn and begin again. This means that the team must have capabilities covering the full spectrum of these activities. They also need the authority to execute these activities without waiting for approval. They need freedom of action (Gothelf, J., & Seiden, J., Sense and respond, 2017).
With these capabilities, they are able to move quickly and learn their way forward. Without these capabilities, bad things happen: when teams wait for approval, they become dependent on outside decision-makers. They slow down. They can’t respond when the moment is appropriate. They limit their techniques. They limit their ability to learn. They limit their ability to deliver value (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Using cross-functional teams
Autonomous teams should have the capabilities of monitoring and observing customers, creating experiments, understanding and interpreting data, deciding how to respond, and producing a response. These capabilities form the heart of the cross-functional teams we seek to create. In practice, we must build teams from cross-functional groups and ensure that the core team functions (design, engineering, and product management) are dedicated to that team (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Building dedicated teams
Once you have cross-functional teams, you need to ensure they are dedicated to a single mission at a time, and that the staff is dedicated to the team. If your experience is like mine, your teams probably do not have dedicated resources. From the traditional project management perspective, sharing resources is only natural. The problem with assigning staff to multiple teams or multiple projects is that it creates dependencies among projects, and dependencies reduce flow (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Although a single team can schedule tasks together and optimize its flow internally, it becomes much harder for two teams to schedule tasks together. If a designer has to produce a drawing for team A, then her work for team B is idled until that drawing is complete. And if two people on team A have responsibilities to other teams (for example, the designer owes work to team B and the developer owes work to team C), then the scheduling problem suddenly multiplies in complexity and rapidly becomes unmanageable (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Supporting New Workflows
Perhaps the most important change you need to make in terms of collaboration is helping the team itself to reimagine its workflow. This kind of collaboration typically requires team members to change how they accomplish their work. Product managers may be accustomed to creating detailed plans and business cases; they need to change their approach to one of asking questions and running experiments. Designers may be good at working out each pixel in Photoshop; they need to become comfortable facilitating team design sessions on the whiteboard. Developers may be used to working from detailed requirements documents; they must get used to starting with much sketchier inputs. And everyone needs to get used to the idea that change and rework are valuable parts of the process, instead of costs to be avoided (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Structure is needed to keep teams from devolving or losing focus, especially when facing complex challenges. By understanding how ideas develop and setting up cycles of effort that are timeboxed and iterative, you can help teams de-risk situations and learn to reduce or avoid negative consequences that come from their solutions (Anderson, G., Mastering Collaboration: Make Working Together Less Painful and More Productive, 2019).
The structure you create isn’t meant to govern exactly what teams do or function as a monolithic order, but rather to help the core team and their stakeholders to be explicit about where they are in their efforts and manage expectations. Plans should be made visible and revisited periodically to see what’s changed and whether the effort needs a different approach (Anderson, G., Mastering Collaboration: Make Working Together Less Painful and More Productive, 2019).
Strategy and the Need for Facilitation
Learn more about how to become a skilled facilitator.
Managing Co-location
The easiest way to get people working together is to put them in the same room. People who sit together will more naturally use conversation as a communication tool. It seems perhaps overly simple to say this, but in an age of text messaging, chat rooms, email, and videoconference, the power of face-to-face conversation is still hard to overstate. This isn’t to say that being in the same room is the only way to create good collaboration or that simply putting people in the same room automatically creates collaboration. But it gives it an important head start (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Managing Remote Work
Sadly, many people still view telecommuting and geographically separated teams as a kiss of death to any hope of good collaboration or communication. But collaboration is a mindset, not a by-product of co-location. So long as that mindset is present, with a few tricks and the right tools, remote team members can contribute to collaborative activities just as well as the teammate sitting in the room with us (Connor, A., & Irizarry, A., Discussing Design, 2015).
Many tools are available for remote collaboration, and more are being released almost daily. Because of the reality that many coworkers aren’t actually co-located, many companies are looking to make collaboration from separate locations as much like being in the same room as possible. More important than the tools we use, however, is our approach (Connor, A., & Irizarry, A., Discussing Design, 2015).
Research by Harvard Business School published in 2018 found that in knowledge-work contexts, where discovery and innovation take place, organizations that had everyone talking to everyone else all the time actually performed worse than those where teams or groups of people communicated and collaborated on a more occasional basis (Bernstein, E., Shore, J., & Lazer, D., 2018). This supports the idea that we actually need to create more purposeful interactions between teams (Skelton, M., & Pais, M., Remote team interactions workbook: Using team topologies patterns for remote working, 2022).
In a remote work context, we tend to see the opposite: intentional communication between teams can drastically diminish in favor of a “broadcasting” approach to information sharing. Regardless of whether we are over- or under-communicating, we can enable effective teams by being more purposeful about the type of communication and the type of interactions we have with other groups in the organization, and thereby achieve better outcomes. The key is to have well-defined interactions between teams, whether they are co-located or remote (Skelton, M., & Pais, M., Remote team interactions workbook: Using team topologies patterns for remote working, 2022).
Strategic Collaboration in Distributed or Remote Environments
Learn more about how to help improve strategic collaboration while working on Distributed, Remote or Global Teams.
Managing Outsourcing and Offshore Teams
Outsourcing is often associated with offshoring, the practice of moving work to other parts of the world to achieve efficiencies in hiring, diversity, cost, and schedule. Offshoring can happen with vendors, but it can happen within companies as well. Many companies locate business, sales, and marketing at headquarters in a major city and create engineering centers located many thousands of miles away. Outsourcing and offshoring create similar challenges. Both tactics create teams of workers who are separated from customers and stakeholders, are not capable of autonomous learning, face obstacles in building good collaboration with their colleagues in the business, and cannot, by themselves, establish a two-way conversation with the market (Gothelf, J., & Seiden, J., Sense and respond, 2017).
One way we’ve seen this work is with so-called two-track agile approaches.
Dual-track Agile
The heart of dual-track agile (also called dual-track scrum or dual-track development) is about integrating discovery efforts (determining what to build) with delivery efforts (building) and fostering close collaboration across product, engineering, and design. All of this is to maximize the delivery of valuable outcomes to the business and customers (Lamborn, J, Dual-track agile and continuous discovery: What you need to know, 2022).
The discovery track focuses on producing, testing, and validating product ideas. The delivery track works on turning those ideas into an actual product. Dual-track agile provides a way of combining agile development and UX design goals. Both tracks operate in harmony and lead to excellent products (Productboard, Dual-track agile, 2020).
In two-track agile, two tracks of work act in coordination. The first track is the experiment track. This team uses all the sense and respond techniques described in this book to take on the high-uncertainty portions of the work and figure out what solution works best. From there, the solution can be passed off to a second track, the production track, and this team implements the solution in a robust way (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Two-track agile allows one team to move quickly to discover market needs, and another team to work at a more measured pace to handle security, internationalization, scale, and other concerns. But doing it well is tricky. It risks generating some of the old problems we face with assembly-line work: information may not flow back to the product team in a timely fashion, and it may reduce the rate of change that’s possible. It also isolates the production track, costing it the opportunity to get a preview of (or the ability to participate in defining) what’s coming in the future. Therefore, it’s critical to establish some parameters if you’re adopting two-track agile, starting with fast feedback from production systems to the experiment track (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Holding Retrospectives
One of the most valuable techniques for improving team process is the retrospective meeting, in which, periodically (usually every two or three weeks), the team members get together to discuss and improve their working process. There are many ways to run these meetings, but the point is to apply the sense and respond mindset to the team itself. What is working? What isn’t working? What can we change in order to make things better? (Gothelf, J., & Seiden, J., Sense and respond, 2017).
Retrospectives are meant to be ongoing; if leaders want to discuss or learn from a specific incident, they should go back and check blameless postmortems.
The Right Time for Managing by Outcomes Discussions
You might ask yourself, “These are all great, but when should I be doing what?” Without knowing what kind of team setup you have and what kinds of processes you run in your organization, the best I can do is to map all of the techniques above onto the Double Diamond framework.
The Double Diamond Framework
Design Council’s Double Diamond conveys a design process to designers and non-designers. The two diamonds represent a process of exploring an issue more widely or deeply (divergent thinking) and then taking focused action (convergent thinking).
- Discover. The first diamond helps people understand, rather than simply assume, what the problem is. It involves speaking to and spending time with people who are affected by the issues.
- Define. The insights gathered from the discovery phase can help you to define the challenge in a different way.
- Develop. The second diamond encourages people to give different answers to the clearly defined problem, seeking inspiration from elsewhere and co-designing with a range of different people.
- Deliver. Delivery involves testing out different solutions at small-scale, rejecting those that will not work and improving the ones that will.
Map of Managing by Outcomes Activities and Methods
Process Awareness characterizes the degree to which the participants understand the process procedures, rules, requirements, workflow, and other details. The higher the process awareness, the more deeply they engage in the process, and the better the results they deliver.
In my experience, the most significant disconnect between the work designers need to do and the mindset of every other team member is usually about how quickly we tend — when not facilitated — to jump to solutions instead of contemplating and exploring the problem space a little longer.
Knowing when the team should be diverging, when they should be exploring, and when they should be converging will help ensure they get the best out of the power of their collective brainstorming and multiple perspectives, and keep the team engaged.
My colleagues Edmund Azigi and Patrick Ashamalla have created a great set of questions and a cheat sheet that maps which questions are more appropriate for different phases of the product development lifecycle. So the following set of activities is inspired by their cheat sheet.
Managing by Outcomes Discussions during “Discover”
This phase has the highest level of ambiguity, so creating shared understanding is critical. While a degree of back and forth is expected, and Managing by Outcomes discussions might come too early here, you can still move to clarity faster by having a strong shared vision, good problem framing, and clear priorities defined through outcomes upfront.
Here are my recommendations for frameworks, methods, and activities to ensure you are solving the right problems and provide the insights that help you with managing by outcomes:
- User Research
- Hypothesis Writing
- Problem Framing
- Challenge Briefs
- Visioneering
- Value Proposition Design
- Jobs to be Done (JTBD)
- Testing Business Ideas
- A Value Opportunity Analysis (VOA)
- Desirability Testing
Managing by Outcomes Discussions during “Define”
In this phase, we should see the level of ambiguity diminishing, and facilitating investment discussions has the highest payoff in mitigating back-and-forth. Helping the team make good decisions by creating great choices is critical.
Here are my recommendations for frameworks, methods, and activities to answer the questions that help you create great choices and provide the insights that help you with managing by outcomes:
- User Story Mapping
- Stories/Epics
- Design Sprints / Studio
- Concept Validation
- Outcome-Driven Innovation / JTBD
- Importance vs. Satisfaction Framework
- Kano Model
- Objectives, Goals, Strategy & Measures (OGSM)
- Product Backlog & Sprint Planning
Managing by Outcomes Discussions during “Develop”
In this phase, we are getting to a point where the cost of changing your mind increases rapidly as time passes. The team should focus on learning as cheaply as possible (by capturing signals from the market), and discussions around investment should answer whether we should pivot, persevere, or stop.
Here are my recommendations for frameworks, methods and activities to decide if you should pivot, persevere, or stop, and provide the insights that help you with managing by outcomes:
- User Story Mapping
- Design Studio
- Specifications
- Collaborative Prototyping
- UXI Matrix (Pugh Matrix)
- Usability Testing
- Usefulness, Satisfaction, and Ease of Use (USE)
- American Customer Satisfaction Index (ACSI)
- System Usability Scale (SUS)
- Usability Metric for User Experience (UMUX)
- UMUX-Lite
Managing by Outcomes Discussions during “Deliver”
In this phase, the visibility and traceability systems should collect data from actual customer usage, so the team can make hard choices about pivoting, persevering, or stopping for the product’s next iteration.
On the other hand — since the product is now in the hands of customers and users — we should be able to collect the richest data from live usage that can inform decisions about our viability hypothesis, enabling you to adjust strategic choices accordingly.
Here are my recommendations for frameworks, methods, and activities to help you trace back the outputs to outcomes and provide the insights that help you with managing by outcomes:
- Designer – Developer Pairing
- Fit-and-Finish
- Pirate Metrics (a.k.a. AARRR!)
- UXI Matrix (Pugh Matrix)
- Objectives, Goals, Strategy & Measures (OGSM)
Facilitating discussions for Managing by Outcomes
I think designers should facilitate the discussions and help others raise awareness around the creative and problem-solving processes instead of complaining that everyone else is jumping into solutions too quickly.
I’ll argue for the need for facilitation in the sense that — if designers want to influence the decisions that shape strategy — they must step up to the plate and become skilled facilitators who respond, prod, encourage, guide, coach, and teach as they guide individuals and groups, through effective processes, to make decisions that are critical in the business world.
That said, I’d argue that facilitation here does not only mean “facilitating workshops,” but facilitating the decisions regardless of the required activities.
Recommended Reading
Adzic, G. (2012). Impact Mapping: Making a big impact with software products and projects (M. Bisset, Ed.). Woking, England: Provoking Thoughts.
Anderson, G. (2019). Mastering Collaboration: Make Working Together Less Painful and More Productive. O’Reilly UK Ltd.
Azzarello, P. (2017). Move: How decisive leaders execute strategy despite obstacles, setbacks, and stalls. Nashville, TN: John Wiley & Sons.
Berger, W. (2019). The book of beautiful questions: The powerful questions that will help you decide, create, connect, and lead. New York, NY: Bloomsbury Publishing.
Bland, D. J., & Osterwalder, A. (2020). Testing business ideas: A field guide for rapid experimentation. Standards Information Network.
Brand, W. (2017). Visual thinking: Empowering people & organizations through visual collaboration. Amsterdam, Netherlands: BIS Publishers B.V.
Brown, T., & Katz, B. (2009). Change by design: how design thinking transforms organizations and inspires innovation. [New York]: Harper Business
Cagan, M. (2017). Inspired: How to create tech products customers love (2nd ed.). Nashville, TN: John Wiley & Sons.
Cagan, M. (2020). The origin of product discovery. Retrieved February 7, 2022, from Silicon Valley Product Group website: https://svpg.com/the-origin-of-product-discovery/
Calabretta, G., Gemser, G., & Karpen, I. (2016). Strategic Design: 8 Essential Practices Every Strategic Designer Must Master. Amsterdam, Netherlands: BIS Publishers.
Connor, A., & Irizarry, A. (2015). Discussing Design (1st ed.). Sebastopol, CA: O’Reilly Media.
Cutler, J. (2022). Making things better (with enabling constraints and POPCORN). Retrieved March 27, 2022, from The Beautiful Mess website: https://cutlefish.substack.com/p/making-things-better-with-enabling?s=w
Design Council. (2015, March 17). What is the framework for innovation? Design Council’s evolved Double Diamond. Retrieved August 5, 2021, from designcouncil.org.uk website: https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond
Dziersk, M., (2010), “Visual Thinking: A leadership Strategy” in Building Design Strategy. Lockwood, T., & Walton, T., Allworth Press.
Doerr, J. (2018). Measure what matters: How Google, Bono, and the Gates Foundation rock the world with OKRs. Portfolio.
Garbugli, É. (2020). Solving Product: Reveal Gaps, Ignite Growth, and Accelerate Any Tech Product with Customer Research. Wroclaw, Poland: Amazon.
Gothelf, J., & Seiden, J. (2017). Sense and respond: How successful organizations listen to customers and create new products continuously. Boston, MA: Harvard Business Review Press.
Gothelf, J. (2017). Execs care about revenue. How do we get them to care about outcomes? Retrieved July 23, 2022, from Jeff Gothelf website: https://jeffgothelf.com/blog/execs-care-about-revenue-how-do-we-get-them-to-care-about-outcomes/
Gothelf, J., & Seiden, J. (2021). Lean UX: Applying lean principles to improve user experience. Sebastopol, CA: O’Reilly Media.
Gothelf, J. (2021). OKRs at scale. Retrieved July 23, 2022, from Jeff Gothelf website: https://jeffgothelf.com/blog/okrs-at-scale/
Groen, B., Wilderom, C., & Wouters, M. (2017). High Job Performance Through Co-Developing Performance Measures With Employees. Human Resource Management, 56(1), 111–132.
Juarrero, Alicia. 1999. Dynamics in Action: Intentional Behavior as a Complex System. MIT Press
Kalbach, J. (2020). Jobs to be Done Playbook (1st Edition). Two Waves Books.
Kalbach, J. (2021). Mapping Experiences (2nd Edition). Sebastopol, CA: O’Reilly Media.
Lamborn, J. (2022). Dual-track agile and continuous discovery: What you need to know. Retrieved March 31, 2023, from LogRocket Blog website: https://blog.logrocket.com/product-management/dual-track-agile-continuous-discovery/
Lewrick, M., Link, P., & Leifer, L. (2018). The design thinking playbook: Mindful digital transformation of teams, products, services, businesses and ecosystems. Nashville, TN: John Wiley & Sons.
Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M. (2017). Product Roadmaps Relaunched. Sebastopol, CA: O’Reilly Media.
Martin, K., & Osterling, M. (2014). Value stream mapping: How to visualize work and align leadership for organizational transformation. New York, NY: McGraw-Hill Professional.
Martin, K. (2018). Clarity first: How smart leaders and organizations achieve outstanding performance. McGraw-Hill Education.
Matts, C. (2018). Constraints that enable. Retrieved March 27, 2022, from The IT Risk Manager website: https://theitriskmanager.com/2018/12/09/constraints-that-enable/
Maxwell, J. C. (2007). The 21 irrefutable laws of leadership: Follow them and people will follow you. Nashville, TN: Thomas Nelson.
McCarthy, B. (2019). How should product teams use OKRs? Retrieved July 23, 2022, from Product Culture website: https://www.productculture.org/articles/2019/6/1/how-should-product-teams-use-okrs
Melnick, L. (2020). How to avoid meetings about the trivial, aka bikeshedding. Retrieved July 24, 2023, from The Business of Social Games and Casino website: https://lloydmelnick.com/2020/06/17/how-to-avoid-meetings-about-the-trivial-aka-bikeshedding/
Mills-Scofield, D. (2012). It’s not just semantics: Managing outcomes vs. Outputs. Retrieved 26 December 2022 from Harvard Business Review. https://hbr.org/2012/11/its-not-just-semantics-managing-outcomes
Mueller, S., & Dhar, J. (2019). The decision maker’s playbook: 12 Mental tactics for thinking more clearly, navigating uncertainty, and making smarter choices. Harlow, England: FT Publishing International.
New South Wales Public Service Commission. (2020). Managing for outcomes with a remote, flexible workforce. Retrieved December 26, 2022, from Gov.au website: https://www.psc.nsw.gov.au/sites/default/files/2020-11/tipsheet-managing-for-outcomes-with-a-remote.pdf
Oberholzer-Gee, F. (2021). Eliminate Strategic Overload. Harvard Business Review, (May-June 2021), 11.
Osterwalder, A., Pigneur, Y., Papadakos, P., Bernarda, G., Papadakos, T., & Smith, A. (2014). Value proposition design: How to create products and services customers want. John Wiley & Sons.
Patton, J. (2014). User Story Mapping: Discover the whole story, build the right product (1st ed.). Sebastopol, CA: O’Reilly Media.
Perri, M. (2019). Escaping the build trap. Sebastopol, CA: O’Reilly Media.
Pfeffer, J., & Sutton, R. I. (1999). The Smart-Talk Trap. Harvard Business Review, (May–June 1999).
Pichler, R. (2016). “Choose the Right Key Performance Indicators” in Strategize: Product strategy and product roadmap practices for the digital age. Pichler Consulting.
Podeswa, H. (2021). The Agile Guide to Business Analysis and Planning: From Strategic Plan to Continuous Value Delivery. Boston, MA: Addison Wesley.
Polaine, A., Løvlie, L., & Reason, B. (2013). Service design: From insight to implementation. Rosenfeld Media.
ProductPlan. (2022). Opportunity Solution Tree. Retrieved May 19, 2022, from Productplan.com website: https://www.productplan.com/glossary/opportunity-solution-tree/
Productboard. (2020). Dual-track agile. Retrieved March 31, 2023, from Productboard website: https://www.productboard.com/glossary/dual-track-agile/
Risdon, C. (2020). Orchestrating experiences: Collaborative design for complexity. USA: Rosenfeld Media.
Seiden, J. (2019). Outcomes Over Output: Why customer behavior is the key metric for business success. Independently published (April 8, 2019).
Skelton, M., & Pais, M. (2022). Remote team interactions workbook: Using Team Topologies patterns for remote working. IT Revolution Press.
Sommers, C. (2012). Think like a futurist: Know what changes, what doesn’t, and what’s next (1st ed.). Nashville, TN: John Wiley & Sons.
Spiek, C., & Moesta, B. (2014). The Jobs-to-be-Done Handbook: Practical techniques for improving your application of Jobs-to-be-Done. CreateSpace Independent Publishing Platform.
Stickdorn, M., Hormess, M. E., Lawrence, A., & Schneider, J. (2018). This is Service Design Doing. Sebastopol, CA: O’Reilly Media.
Sy, D. (2007). Adapting usability investigations for Agile user-centered design. Journal of User Experience, 2(3), 112–132. Retrieved from https://uxpajournal.org/wp-content/uploads/sites/7/pdf/JUS_Sy_May2007.pdf
Torres, T. (2021). Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value. Product Talk LLC.
Torres, T., & Gurion, H. (2022). Defining product outcomes: The 8 most common mistakes you should avoid. Retrieved December 21, 2022, from Product Talk website: https://www.producttalk.org/2022/12/defining-product-outcomes/
Tullis, T., & Albert, W. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics (2nd edition). Morgan Kaufmann.
Ulwick, A. (2005). What customers want: Using outcome-driven innovation to create breakthrough products and services. New York, NY: McGraw-Hill.
Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016). Design a better business: New tools, skills, and mindset for strategy and innovation. Nashville, TN: John Wiley & Sons.
Wodtke, C. R. (2021). Radical focus: Achieving your goals with objectives and key results (2nd ed.). Cucina Media.
Young, I. (2008). Mental models: Aligning design strategy with human behavior. Brooklyn, NY: Rosenfeld Media.