In a previous post, I talked about the need for a set of tools for quantifying and qualifying strategy: tools that empower intuition and creativity while also helping teams find objective ways to value design solutions and justify the experience investments that bring us ever closer to our vision and goals.
In this post, I’ll focus on how to help teams facilitate investment discussions by finding ways to remove (or at least reduce) subjectivity when we compare, contrast, or debate the value of ideas, approaches, and solutions to justify investing in them.
- TL;DR;
- Quantifying and Qualifying Strategy
- Facilitating Investment Discussions around Shared Vision
- Facilitating Investment Discussions around Value
- Facilitating Investment Discussions through Clear Priorities
- The Right Time for Facilitating Investment Discussions
- Recommended Reading
TL;DR;
- We need objective ways to value design solutions and justify experience investments: looking at the different points in strategic planning and execution, identifying the discussions strategists should facilitate (investment decisions, pivots, and risk mitigation), and tracking and tracing the implementation of strategy to ensure we are bringing value to our customers and our business.
- When facilitating investment discussions, designers must engage with their business stakeholders to understand what objectives and unique positions they want their products to assume in the industry and their choices to achieve such objectives and positions.
- I’ve seen too many teams where decisions seem to be driven by the question “What can we implement with the least effort?” or “What are we able to implement?”, not by the question “What brings value to the user?”
- That said, I’ve also seen many designers make the mistake of focusing only on the user’s needs at the expense of business needs and technological constraints.
- Strategists must help the team identify all three types of hypotheses underlying a business idea: desirability, feasibility, and viability.
- The kinds of products we are trying to bring to the world today are complex, which makes discussions around desirability, feasibility, and viability very difficult, so the key to dealing with such complexity is to focus on having good conversations about assumptions.
- Once we acknowledge we are dealing with assumptions, we should frame discussions around work that needs to be done through building, measuring, and learning.
- When we’ve considered all our hypotheses, it’s essential to set priorities and remove distractions so that people can get on with providing service to customers, thus increasing profits and the value of the business.
- Ask the business stakeholders and the team: what is our prioritization policy, and how is it visualized? How does each and every item of work that has been prioritized help get us closer to our vision and achieve our goals?
- In my experience, I’ve found it helpful to come up with visualizations that help remove subjectivity while facilitating investment discussions.
- We should be aware that facilitating investment discussions at different phases of the development process means different things: reducing ambiguity through better problem framing, making good decisions by creating great choices, and learning as fast and cheaply as possible whether we should pivot, persevere, or stop.
Quantifying and Qualifying Strategy
“What do people need?” is a critical question when building a product. Wasting your life’s savings and your investor’s money, risking your reputation, making false promises to employees and potential partners, and trashing months of work you can never get back is a shame. It’s also a shame to find out you were completely delusional when you thought that everyone needed the product you were working on. (Sharon, T., Validating Product Ideas, 2016)
In a previous article, I mentioned that we need objective ways to value design solutions and justify experience investments: looking at the different points in strategic planning and execution, identifying the discussions strategists should facilitate around what customers and users perceive as value, and tracking and tracing the implementation of a strategy to ensure we are bringing value to both customers and the business.
If you fail to establish management systems that support those choices, strategies can still fail, spectacularly. Without the supporting systems, structures, and measures for quantifying and qualifying outcomes, strategies remain a wish list, a set of goals that may or may not ever be achieved (“Manage What Matters” in Playing to Win: How Strategy Really Works, Lafley, A.G., Martin, R. L., 2013).
From that perspective, we need to find ways to:
- Explore (and preferably test) ideas early
- Facilitate investment discussions by objectively describing business and user value, establishing priorities
- Assess the risk of pursuing ideas, while capturing signals that indicate if/when to pivot if an idea “doesn’t work.”
- Capture and track the progress of strategy implementation.
In a previous article, I went deep into quantification and metrics, so I suggest looking at that if you’re engaging in measuring experiences.
When it comes to facilitating investment discussions, designers must engage with their business stakeholders to understand what objectives and unique positions they want their products to assume in the industry and their choices to achieve such objectives and positions.
As a result, designers will be better prepared to influence the business decisions that help create such an advantage and deliver superior value relative to the competition.
That’s why it is important that designers engage with stakeholders early and often to make sure we’ve got the right framing of the problem space around the 3 vision-related questions (as per the Six Strategic Questions illustration above):
- What are our aspirations?
- What are our challenges?
- What will we focus on?
If you can answer the questions above by working with your stakeholders, all the discussions below will be much easier. In my experience, however, that’s not usually the case! Most stakeholders have a list of features in their minds, which, as I mentioned in a previous article, is not a cohesive strategy. So most of these investment discussions will start with asking good questions.
Here are seven questions you can ask yourself (and your team) before building a new feature (Croll, A., & Yoskovitz, B., Lean Analytics: Use Data to Build a Better Startup Faster, 2013):
- Why Will It Make Things Better? You can’t build a feature without having a reason for building it. In the Stickiness stage, your focus is retention. Look at your potential feature list and ask yourself, “Why do I think this will improve retention?” You’ll be tempted to copy what others are doing (say, using gamification to drive engagement and, in turn, retention) just because it looks like it’s working for the competition. Asking, “Why will it make it better?” forces you to write out (on paper!) a hypothesis. This naturally leads to a good experiment that will test that hypothesis. Feature experiments, if tied to a specific metric (such as retention), are usually straightforward: you believe feature X will improve retention by Y percent. The second part of that statement is as important as the first part; you need to draw that line in the sand.
- Can You Measure the Effect of the Feature? Feature experiments require that you measure the impact of the feature. That impact has to be quantifiable. Too often, features get added to a product without any quantifiable validation, a direct path toward scope creep and feature bloat. If you cannot quantify the impact of a new feature, you can’t assess its value, and you won’t know what to do with the feature over time: whether to leave it as is, iterate on it, or kill it.
- How Long Will the Feature Take to Build? Time is a precious resource you never get back. You have to compare the relative development time of each feature on your list. If something takes months to build, you need reasonable confidence that it will have a significant impact. Can you break it into smaller parts or test the inherent risk with a curated MVP or a prototype instead?
- Will the Feature Overcomplicate Things? Complexity kills products. It’s most apparent in the user experience of many web applications: they become so convoluted and confusing that users leave for a simpler alternative. “And” is the enemy of success. When discussing a feature with your team, pay attention to how it’s being described. “The feature will allow you to do this, and it’d be great if it did this other thing, and this other thing, and this other thing too.” Warning bells should be going off at this point. If you’re trying to justify a feature by saying it satisfies several needs a little bit, know that it’s almost always better to satisfy one need in an absolutely epic, remarkable way.
- How Much Risk Is There in This New Feature? Building new features always comes with some amount of risk. There’s technical risk related to how a feature may impact the code base. There’s user risk regarding how people might respond to the feature. There’s also the risk regarding how a feature drives future developments, potentially setting you on a path you don’t want to pursue. Each feature you add creates an emotional commitment to your development team and sometimes to your customers. Analytics helps break that bond so you can measure things honestly and make the best decisions possible with the most available information.
- How Innovative Is the New Feature? Not everything you do will be innovative. Most features aren’t innovative; they’re small tweaks to a product in the hope that the whole is more valuable than the individual parts. But consider innovation when prioritizing feature development; generally, the easiest things to do rarely have a big impact. You’re still in the Stickiness stage, trying to find the right product. Changing a submit button from red to blue may result in a good jump in signup conversions (a classic A/B test), but it’s probably not going to turn your business from a failure into a giant success; it’s also easy for others to copy. It’s better to make big bets, swing for the fences, try more radical experiments, and build more disruptive things, particularly since you have fewer user expectations than you will later on.
- What Do Users Say They Want? Your users are important. Their feedback is important. But relying on what they say is risky. Be careful about over-prioritizing based on user input alone. Users lie, and they don’t like hurting your feelings. Prioritizing feature development during an MVP isn’t an exact science. User actions speak louder than words. Aim for a genuinely testable hypothesis for every feature you build, and you’ll have a much better chance of quickly validating success or failure. Simply tracking how popular various features are within the application will reveal what’s working and what’s not. Looking at what feature a user was using before he hit “undo” or the back button will pinpoint the possible problems.
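The first two questions above can be made concrete with a quick significance check on a hypothesis like “feature X will improve retention by Y percent.” Here is a minimal sketch in Python using a two-proportion z-test; the user counts and retention numbers are entirely hypothetical:

```python
from math import sqrt, erf

def retention_lift(control_users, control_retained, variant_users, variant_retained):
    """Two-proportion z-test sketch: did the feature move retention?"""
    p1 = control_retained / control_users
    p2 = variant_retained / variant_users
    # Pooled retention rate under the null hypothesis (no difference)
    p = (control_retained + variant_retained) / (control_users + variant_users)
    se = sqrt(p * (1 - p) * (1 / control_users + 1 / variant_users))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p2 - p1, p_value

# Hypothetical experiment: 30% retention in control, 34.5% in the variant
lift, p_value = retention_lift(1000, 300, 1000, 345)
print(f"observed lift: {lift:+.1%}, p-value: {p_value:.3f}")
```

Writing the hypothesis this way forces the “line in the sand”: the expected lift is stated up front, and the experiment either clears the significance bar or it doesn’t.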
Facilitating Investment Discussions around Shared Vision
Have you ever been part of a team that didn’t seem to make any progress? Maybe the group had plenty of talent, resources, and opportunities, and team members got along, but the group never went anywhere? If you have, there is a strong possibility that the situation was caused by a lack of vision (Maxwell, J. C., The 17 indisputable laws of teamwork, 2013).
In the second post of this series, I mentioned that I’ve found that, more often than not, it is not for lack of ideas that teams cannot innovate, but because of all the friction or drag created by not having a shared vision and understanding of the problems they are trying to solve.
Just to make sure I’m not misunderstood — as my colleague Anton Fischer usually says — it doesn’t matter at that point if the team lacks a vision or the vision is just poorly communicated; the result is the same: the team will lack engagement and slowly drift apart.
The Importance of Vision
A global study conducted in 2012 involving 300,000 employees found that just over half did not really understand the basics of their organizations’ strategies (Zook, C., & Allen, J., Repeatability, 2012). Given the effort applied to strategy development, there is a massive disconnect here. The opportunity to reconnect a firm with its strategy lies in how the strategy is communicated and understood (Callahan, S., Putting Stories to Work, 2016).
The first thing most people do when they hear the word “vision” in a business context is yawn. That’s because visions are vague, unclear, and, frankly, nothing to get excited about. Well-designed visions should be rally cries for action, invention, and innovation (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., Design a better business: New tools, skills, and mindset for strategy and innovation, 2016)
Designers should advocate for the importance of vision and facilitate the creation of product visions that explain a strategy’s complex connections and express the product’s future intended destination (Fish, L., Kiekbusch, S., “The State of the Designer” in The Designer’s Guide to Product Vision, 2020).
The beauty of a shared vision is that it motivates and unites people: it acts as the product’s true north, facilitates collaboration, and provides continuity in an ever-changing world (Kouzes, J. M., & Posner, B. Z., Leadership Challenge: How to Make Extraordinary Things Happen in Organizations, 2017).
Making sure organizations and designers share the same vision is crucial to the success of any design project. A “shared project vision” means (Calabretta, G., Erp, J. V., Hille, M., “Designing Transitions: Pivoting Complex Innovation” in Strategic Design, 2016):
- There is widespread clarity in the stakeholders’ and designers’ understanding of the project goals and direction.
- There is widespread clarity in the stakeholders’ and designers’ understanding of the approach taken during project implementation.
The design team needs to assess the extent to which the challenge at hand is driven by a vision that is shared by asking three questions (Calabretta, G., Erp, J. V., Hille, M., “Designing Transitions: Pivoting Complex Innovation” in Strategic Design, 2016):
- Is there a project vision? Does the company have a clear view of the project direction, and where it fits into the raison d’être (the “why”) of the company? How exactly will the project help the company fulfill its why? A satisfactory answer to this question should emerge during the early stages of a strategic project when the brief is formulated. A lack of clear-cut answers to these questions usually signals the absence of a strong, cohesive project vision.
- Is the project a good fit with the wider goals of the organization? Sometimes the project vision does not align with the KPIs or primary goals that the organization has expressed elsewhere. This usually happens – for example – when a trend emerges and organizations may act impulsively because they are afraid to miss out on what they see as an opportunity for growth.
- Is the vision shared across the company? If there is a clear project vision, is there widespread awareness and alignment within the company? Can the various departments move in the same direction during project setup and implementation?
In my experience — more often than not — lots of projects (and organizations for that matter) lack an inspiring vision.
As I mentioned earlier, it doesn’t matter at this point if the team lacks a vision or the vision is just poorly communicated, the result is the same: the team will lack engagement and slowly drift apart.
Product Vision
While goals provide a context about where you’re going, the vision paints a picture of the future, so people want to go there. Unfortunately, where goals come out as a list that’s easy to document and share, it takes more work to convert your concrete vision into something that’s easy to share (Govella, A., Collaborative Product Design. 2019).
A Vision Statement is a method for describing the result of an innovation project as an overview, showing how the organization implements the innovation offering. Part of the method is to express the innovation intent and its realization in only a minimum set of words or visuals, for example, a title statement as brief as, “We will eradicate breast cancer in the next twenty years.” It contains no specifics, but grounds all innovation efforts (Kumar, V., “Mindsets” in 101 design methods, 2013).
A vision statement expresses the value proposition, targeted users, key activities, performance, channels, resources, cost structure, revenue streams, strategy, and similar key factors, distilling all of the research, analysis, and synthesis into a concise expression that summarizes the fulfillment of the innovation intent in an easy to grasp format, especially making it clear to any stakeholder. The Vision Statement is often developed during the process of crafting a Strategy Plan (Kumar, V., “Mindsets” in 101 design methods, 2013).
Product Vision clarifies why are we bringing a product to market in the first place, and what its success will mean to the world and the organization. It’s the destination we want to reach. For example, Google Search’s product vision is “to provide access to the world’s information in one click” — which is derived from the company’s mission: “to organize the world’s information and make it universally accessible and useful” (Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., Product Roadmaps Relaunched, 2017).
A common format for such a statement is the classic positioning template (popularized by Geoffrey Moore in Crossing the Chasm):
For: [target customer]
Who: [target customer’s needs]
The: [product name]
Is a: [product category]
That: [product benefit/reason to buy]
Unlike: [competitors]
Our product: [differentiators]
Here is an example from Microsoft Surface:
For the business user who needs to be productive in the office and on the go, the Surface is a convertible tablet that is easy to carry and gives you full computing productivity no matter where you are.
Unlike laptops, Surface serves your on-the-go needs without having to carry an extra device.
280 Group LLC. What is a Product Vision? Methods and Examples (2020)
When you’ve formulated your point of view with an eye toward the future, it’s the vision that will guide you and your team towards that north star. A clear vision brings focus and provides an anchor point for making bold strategic decisions (Van Der Pijl, P., Lokitz, J., & Solomon, L. K., Design a better business: New tools, skills, and mindset for strategy and innovation, 2016)
Facilitating Investment Discussions around Value
As I mentioned in a previous post, designers must become skilled facilitators that respond, prod, encourage, guide, coach, and teach as they guide individuals and groups to make decisions critical in the business world through effective processes. Few decisions are harder than deciding how to prioritize.
I’ve seen too many teams where decisions seem to be driven by the question “What can we implement with the least effort?” or “What are we able to implement?”, not by the question “What brings value to the user?”
From a user-centered perspective, the most crucial pivot that needs to happen in the conversation between designers and business stakeholders is the framing of value:
- Business value
- User value
- Value to designers (sense of self-realization? Did I positively impact someone’s life?)
The mistake I’ve seen many designers make is to look at prioritization discussions as a zero-sum game: our user-centered design toolset may have focused too much on the needs of the user, at the expense of business needs and technological constraints.
That said, there is a case to be made that designers should worry about strategy because it helps shape the decisions that create value not only for users but also for employees.
Therefore, a strategic initiative is worthwhile only if it does one of the following (Oberholzer-Gee, F., Better, Simpler Strategy, 2021):
- Creates value for customers by raising their willingness to pay (WTP): If companies find ways to innovate or to improve existing products, people will be willing to pay more. In many product categories, Apple gets to charge a price premium because the company raises the customers’ WTP by designing beautiful products that are easy to use, for example. WTP is the most a customer would ever be willing to pay. Think of it as the customer’s walk-away point: Charge one cent more than someone’s WTP, and that person is better off not buying. Too often, managers focus on top-line growth rather than on increasing willingness to pay. A growth-focused manager asks, “What will help me sell more?” A person concerned with WTP wants to make her customers clap and cheer. A sales-centric manager analyzes purchase decisions and hopes to sway customers. In contrast, a value-focused manager searches for ways to increase WTP at every stage of the customer’s journey, earning the customer’s trust and loyalty. A value-focused company convinces its customers in every interaction that it has their best interests at heart.
- Creates value for employees by making work more appealing: When companies make work more enjoyable, motivating, and flexible, they can attract talent even if they do not offer industry-leading compensation. Paying employees more is often the right thing to do, of course. But remember that more-generous compensation does not create value in and of itself; it simply shifts resources from the business to the workforce. By contrast, offering better jobs not only creates value, it also lowers the minimum compensation that you have to offer to attract talent to your business, or what we call an employee’s willingness-to-sell (WTS) wage. Offer a prospective employee even a little less than her WTS, and she will reject your job offer; she is better off staying with her current firm. As with prices and WTP, value-focused organizations never confuse compensation and WTS. Value-focused businesses think holistically about the needs of their employees (or the factors that drive WTS).
- Creates value for suppliers by reducing their operating costs: Like employees, suppliers expect a minimum level of compensation for their products. A company creates value for its suppliers by helping them raise their productivity. As suppliers’ costs go down, the lowest price they would be willing to accept for their goods—what we call their willingness-to-sell (WTS) price—falls. When Nike, for example, created a training center in Sri Lanka to teach its Asian suppliers lean manufacturing, the improved production techniques helped suppliers reap better profits, which they then shared with Nike.
This idea is captured in a simple graph, called a value stick. WTP sits at the top, and WTS at the bottom. When companies find ways to increase customer delight and increase employee satisfaction, and supplier surplus (the difference between the price of goods and the lowest amount the supplier would be willing to accept for them), they expand the total amount of value created and position themselves for extraordinary financial performance.
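The value-stick arithmetic can be written out directly: the total value a firm creates is WTP minus WTS, and it splits into customer delight (WTP minus price), firm margin (price minus cost), and supplier surplus (cost minus WTS). A small illustration with made-up numbers:

```python
def value_stick(wtp, price, cost, wts):
    """Split total value created (WTP - WTS) into its three slices."""
    customer_delight = wtp - price   # value captured by the customer
    firm_margin = price - cost       # value captured by the firm
    supplier_surplus = cost - wts    # value captured by suppliers/employees
    total = wtp - wts
    # The three slices always sum to the full stick
    assert customer_delight + firm_margin + supplier_surplus == total
    return customer_delight, firm_margin, supplier_surplus, total

# Hypothetical numbers: raising WTP through better design expands the whole stick
delight, margin, surplus, total = value_stick(wtp=120, price=80, cost=50, wts=30)
print(delight, margin, surplus, total)  # 40 30 20 90
```

The useful intuition for investment discussions: a design investment that raises WTP grows the total stick, rather than merely moving value between the firm and its customers.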
Organizations that exemplify value-based strategy demonstrate some key behaviors (Oberholzer-Gee, F., “Eliminate Strategic Overload” in Harvard Business Review, 2021):
- They focus on value, not profit. Perhaps surprisingly, value-focused managers are not overly concerned with the immediate financial consequences of their decisions. They are confident that superior value creation will improve financial performance over time.
- They attract the employees and customers whom they serve best. As companies find ways to move WTP or WTS, they make themselves more appealing to customers and employees who particularly like how they add value.
- They create value for customers, employees, or suppliers (or some combination) simultaneously. Traditional thinking, informed by our early understanding of success in manufacturing, holds that costs for companies will rise if they boost consumers’ willingness to pay—that is, it takes more costly inputs to create a better product. But value-focused organizations find ways to defy that logic.
For such a pivot toward value to happen, designers will need to get better at influencing the strategy of their design projects. However, some designers lack the vocabulary, tools, and frameworks to influence it in ways that drive the user experience vision forward, for example by advocating for decisions that increase our customers’ willingness to pay (WTP) through increased customer delight.
To understand the risk and uncertainty of your idea you need to ask: “What are all the things that need to be true for this idea to work?” This will allow you to identify all three types of hypotheses underlying a business idea: desirability, feasibility, and viability (Bland, D. J., & Osterwalder, A., Testing business ideas, 2020):
- Desirability (Do they want this?) relates to the risk that the market a business is targeting is too small; that too few customers want the value proposition; or that the company can’t reach, acquire, and retain targeted customers.
- Feasibility (Can we do this?) relates to the risk that a business can’t manage, scale, or get access to key resources (technology, IP, brand, etc.). This isn’t just technical feasibility; we also need to look at the regulatory, policy, and governance constraints that could prevent you from making your solution a success.
- Viability (Should we do this?) relates to the risk that a business cannot generate more revenue than costs (revenue stream and cost stream). While customers may want your solution (desirable) and you can build it (feasible), perhaps there’s not enough of a market for it or people won’t pay enough for it.
Design strategists should help teams find objective ways to value design ideas, approaches, and solutions to justify the investment in them from the desirability, feasibility, and viability perspectives.
Many companies try to deal with complexity with analytical firepower and sophisticated mathematics. That is unfortunate, since the most essential elements of creating a hypothesis can typically be communicated through simple pencil-and-paper sketches (Govindarajan, V., & Trimble, C., The other side of innovation: Solving the execution challenge, 2010.)
This is why many teams are taking cues from the Lean playbook and framing discussions around work that needs to be done through building, measuring, and learning. As you sit down with your teams to plan out your next initiative, ask them these questions (Gothelf, J., & Seiden, J., Sense and respond. 2017):
- What is the most important thing (or things) we need to learn first?
- What is the fastest, most efficient way to learn that?
If you only have one hypothesis to test it’s clear where to spend the time you have to do discovery work. If you have many hypotheses, how do you decide where your precious discovery hours should be spent? Which hypotheses should be tested? Which ones should be de-prioritised or just thrown away? To help answer this question, Jeff Gothelf put together the Hypothesis Prioritisation Canvas (Gothelf, J., The hypothesis prioritization canvas, 2019):
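The triage logic behind such a canvas can be sketched in a few lines: hypotheses that are both high-value and high-risk get tested first, high-value but low-risk work just gets built, and low-value work is deprioritised. The axis names, thresholds, and example hypotheses below are my own illustrative assumptions, not Gothelf’s exact labels:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    value: int  # perceived value to users/business, on a 1-5 scale
    risk: int   # how much is unknown or could go wrong, on a 1-5 scale

def triage(h: Hypothesis, threshold: int = 3) -> str:
    """Sort a hypothesis into a discovery bucket (thresholds are illustrative)."""
    if h.value >= threshold and h.risk >= threshold:
        return "test first"      # high value, high risk: spend discovery time here
    if h.value >= threshold:
        return "just build it"   # high value, low risk: little left to learn
    return "deprioritise"        # low value: not worth discovery or delivery

backlog = [
    Hypothesis("personalised onboarding", value=5, risk=4),
    Hypothesis("dark mode", value=4, risk=1),
    Hypothesis("animated splash screen", value=1, risk=2),
]
for h in backlog:
    print(h.name, "->", triage(h))
```

The point is not the scoring itself but the conversation it forces: each hypothesis has to be explicitly rated for value and risk before any discovery hours are committed to it.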
Quantifying and Qualifying Desirability
As I mentioned earlier, it’s been my experience that many product decisions seem to be driven by the question “What can we implement with the least effort?” or “What are we able to implement?”, not by the question “What brings value to the user?” What is missing in such investment discussions is a common definition of value.
In the previous article, I mentioned that — when customers evaluate a product or service — they weigh its perceived value against the asking price. Marketers have generally focused much of their time and energy on managing the price side of that equation since raising prices can immediately boost profits. But that’s the easy part: Pricing usually involves managing a relatively small set of numbers, and pricing analytics and tactics are highly evolved. What consumers truly value, however, can be difficult to pin down and psychologically complicated (Almquist, E., Senior, J., & Bloch, N., The Elements of Value, 2016).
How can leadership teams actively manage value or devise ways to deliver more, whether functional (saving time, reducing cost) or emotional (reducing anxiety, providing entertainment)? Discrete choice analysis—which simulates demand for different combinations of product features, pricing, and other components—and similar research techniques are powerful and useful tools, but they are designed to test consumer reactions to preconceived concepts of value—the concepts that managers are accustomed to judging (Almquist, E., Senior, J., & Bloch, N., The Elements of Value, 2016).
So how do you facilitate discussions that help teams clearly see value from different angles? I’ve found that alignment diagrams are really good at getting teams to have qualifying discussions around value. Here are some examples:
Visualising Desirability through Alignment Diagrams
Alignment diagrams refer to any map, diagram, or visualization that reveals both sides (business and users) of value creation in a single overview. They are a category of diagram that illustrates the interaction between people and organizations (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).
Customer Journey Maps are visual thinking artifacts that help you get insight into, track, and discuss how a customer experiences a problem you are trying to solve. How does this problem or opportunity show up in their lives? How do they experience it? How do they interact with you? (Lewrick, M., Link, P., & Leifer, L., The design thinking playbook. 2018).
Experience Maps look at a broader context of human behavior. They reverse the relationship and show how the organization fits into a person’s life (Kalbach, J., ”Visualizing Value: Aligning Outside-in” in Mapping Experiences, 2021).
User story mapping is a visual exercise that helps product managers and their development teams define the work that will create the most delightful user experience. User Story Mapping allows teams to create a dynamic outline of a set of representative user’s interactions with the product, evaluate which steps have the most benefit for the user, and prioritise what should be built next (Patton, J., User Story Mapping: Discover the whole story, build the right product, 2014).
Opportunity Solution Trees are a simple way of visually representing the paths you might take to reach a desired outcome (Torres, T., Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value, 2021)
Service Blueprints are visual thinking artifacts that help to capture the big picture and interconnections. They are a way to plan projects and relate service design decisions to the original research insights. The blueprint differs from the service ecology in that it includes a specific detail about the elements, experiences, and delivery within the service itself (Polaine, A., Løvlie, L., & Reason, B., Service design: From insight to implementation, 2013).
Strategy Canvas helps you compare how well competitors meet customer buying criteria or desired outcomes. To create your strategy canvas, list the 10-12 most important functional desired outcomes (or buying criteria) on the x-axis. On the y-axis, list the 3-5 most common competitors (direct, indirect, alternative solutions, and multi-tool solutions) for the job (Garbugli, É., Solving Product, 2020).
While alignment diagrams are good for facilitating discussions around qualifying value by bringing business and user perspectives together, there is still a need for quantifying value objectively. Let’s look at why.
Quantifying Desirability, Value, and Satisfaction
When product managers, designers, and strategists are crafting their strategy or working on the discovery phase, the kind of user and customer insights they are looking for is really hard to acquire through quantitative metrics, either because we cannot derive insights from the existing analytics coming from the product, or because we are creating something new (so there are no numbers to refer to). Most of such insights (especially desirability and satisfaction) come from preference data.
Just because preference data is more subjective doesn’t mean it is less quantifiable: although design and several usability activities are certainly qualitative, the perception of good and bad designs can easily be quantified through metrics like perceived satisfaction, recommendations, etc. (Sauro, J., & Lewis, J. R., Quantifying the user experience: Practical statistics for user research, 2016).
Preference data is typically collected via written, oral, or online questionnaires, or through the debriefing session of a test. A rating scale that measures how a participant feels about the product is an example of a preference measure (Rubin, J., & Chisnell, D., Handbook of usability testing, 2011).
You can find examples of preference data that design strategists can collect to inform strategic decisions in my previous post, so here I’ll just mention the ones that I find get the most traction with business stakeholders.
Jobs To Be Done (JTBD) and Outcome-Driven Innovation
Outcome-Driven Innovation (ODI) is a strategy and innovation process built around the theory that people buy products and services to get jobs done. It links a company’s value creation activities to quantifying and qualifying customer-defined metrics. Ulwick found that previous innovation practices were ineffective because they were incomplete, overlapping, or unnecessary.
Clayton Christensen credits Ulwick and Richard Pedi of Gage Foods with the way of thinking about market structure used in the chapter “What Products Will Customers Want to Buy?” in his Innovator’s Solution and called “jobs to be done” (JTBD) or “outcomes that customers are seeking”.
Ulwick’s “opportunity algorithm” measures and ranks innovation opportunities. Standard gap analysis looks at the simple difference between importance and satisfaction metrics; Ulwick’s formula gives twice as much weight to importance as to satisfaction, where importance and satisfaction are the proportion of high survey responses.
You’re probably asking yourself, “Where do these values come from?” That’s where User Research comes in handy: once you’ve got the List of Use Cases, you go back to your users and probe how important each use case is, and how satisfied they are with the product with regard to each use case.
Once you’ve obtained the opportunity scores for each use case, what comes next? There are two complementary pieces of information that the scores reveal: where the market is underserved and where it is overserved. We can use this information to make some important targeting and resource-related decisions.
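As a rough sketch, Ulwick’s opportunity algorithm — Opportunity = Importance + max(Importance − Satisfaction, 0) — can be computed and used to rank use cases like this. The use-case names and survey numbers below are hypothetical:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick's opportunity algorithm: importance counts twice as much
    as satisfaction, and the gap is floored at zero so overserved
    outcomes are not rewarded with a discount."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical use cases: (importance, satisfaction) on a 0-10 scale,
# each derived from the proportion of high survey responses.
use_cases = {
    "export report to PDF": (8.2, 3.1),  # underserved
    "change theme colour":  (3.5, 7.9),  # overserved
    "search by keyword":    (9.0, 8.5),  # well served
}

# Rank use cases from biggest to smallest opportunity.
ranked = sorted(use_cases, key=lambda u: opportunity_score(*use_cases[u]),
                reverse=True)
for name in ranked:
    imp, sat = use_cases[name]
    print(f"{name}: {opportunity_score(imp, sat):.1f}")
```

Note how the overserved use case scores only its raw importance: the algorithm deliberately ignores negative importance-satisfaction gaps.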
The Importance versus Satisfaction Framework
Similar to Outcome-Driven Innovation, this framework proposes quantifying and qualifying the customer need that any particular feature of the product is going to address (Olsen, D., The lean product playbook, 2015):
- How important is that need to customers?
- How satisfied are people with the current alternatives out there?
What I like about Olsen’s approach to assessing opportunities is that he created a couple of variations of opportunities scores:
- Customer Value Delivered = Importance x Satisfaction
- Opportunity to Add Value = Importance x (1 – Satisfaction)
- Opportunity = Importance – Current Value Delivered
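Under the assumption that importance and satisfaction are expressed as fractions between 0 and 1 (e.g. survey percentages), Olsen’s three variations can be sketched as:

```python
def customer_value_delivered(importance: float, satisfaction: float) -> float:
    # Customer Value Delivered = Importance x Satisfaction
    # (both inputs assumed to be fractions between 0 and 1).
    return importance * satisfaction

def opportunity_to_add_value(importance: float, satisfaction: float) -> float:
    # Opportunity to Add Value = Importance x (1 - Satisfaction)
    return importance * (1 - satisfaction)

def opportunity(importance: float, satisfaction: float) -> float:
    # Opportunity = Importance - Current Value Delivered
    return importance - customer_value_delivered(importance, satisfaction)

# A hypothetical need: very important (0.9) but poorly served (0.3).
imp, sat = 0.9, 0.3
print(customer_value_delivered(imp, sat))   # ~0.27
print(opportunity_to_add_value(imp, sat))   # ~0.63
print(opportunity(imp, sat))                # ~0.63
```

Note that the last two formulas are algebraically identical — Importance × (1 − Satisfaction) expands to Importance − Importance × Satisfaction — so they always agree; they are two framings of the same gap.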
Underserved versus Overserved Markets
Jobs to be Done helps identify markets where customers’ needs are underserved (therefore, ripe for disruption), but also helps us find out where the market is overserved: jobs and outcomes that are unimportant or already satisfied represent little opportunity for improvement and consequently should not receive any resource allocation. In most markets, it is not uncommon to find many outcomes that are overserved — and companies that are nevertheless continuing to allocate them development resources. We say that an outcome is overserved when its satisfaction rating exceeds its importance rating. When a company discovers these overserved outcomes, it should consider the following three avenues for possible action (Ulwick, A. W., What customers want, 2005):
- If the company is currently focused on improving these overserved outcomes, those efforts should be halted. Making additional improvements in already overserved areas is simply a waste of resources and is likely to add cost without adding additional value.
- If cost reduction is an important consideration in the market, then costs can be reduced by taking out costly functionality in areas that are overserved. For example, if a five-dollar feature can be redesigned so that it satisfies an outcome 80 percent as well as it does currently but for half the cost, then the company may want to make this trade-off.
- If many overserved outcomes are discovered in a market, then the company should consider the possibility of engaging in disruptive innovation. This would mean taking out cost along multiple dimensions and creating a lower-cost business model that existing competitors would be unable to match. The concept of a low-end disruptive innovation, as described in The Innovator’s Solution, is only possible when the customer population, or a segment of that population, is overserved.
Since companies using this approach know which outcomes are underserved, they know where to make improvements, and, more importantly, they know doing so will result in products that customers want. This flips the innovation process on its head (Ulwick, A. W., What customers want, 2005).
Kano Model
The Kano Model, developed by Dr. Noriaki Kano, is a way of classifying customer expectations into three categories: expected needs, normal needs, exciting needs. This hierarchy can be used to help with our prioritization efforts by clearly identifying the value of solutions to the needs in each category (“Kano Model” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017):
- The customer’s expected needs are roughly equivalent to the critical path: if those needs are not met, they become dissatisfiers.
- If you meet the expected needs, customers will start articulating normal needs, or satisfiers: things they will explicitly ask for and that satisfy them when delivered.
- When normal needs are largely met, exciting needs (delighters or wows) come into play: those that go beyond the customers’ expectations.
The Kano methodology was initially adopted by operations researchers, who added statistical rigor to the question pair results analysis. Product managers have leveraged aspects of the Kano approach in Quality Function Deployment (QFD). More recently, this methodology has been used by Agile teams and in market research (Moorman, J., “Leveraging the Kano Model for Optimal Results” in UX Magazine, 2012).
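In practice, Kano classification is often operationalised with a functional/dysfunctional question pair (“How would you feel if the product had this feature?” / “…if it did not?”) scored against an evaluation table. Here is a minimal sketch; the answer wording and table below are one common formulation, not the only one:

```python
# Answers to the functional and dysfunctional question, in the
# conventional order from most to least positive.
ANSWERS = ["like", "expect", "neutral", "tolerate", "dislike"]

# Evaluation table: rows = functional answer, columns = dysfunctional.
# A=Attractive (exciting), O=One-dimensional (normal/performance),
# M=Must-be (expected), I=Indifferent, R=Reverse, Q=Questionable.
TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional: like
    ["R", "I", "I", "I", "M"],  # functional: expect
    ["R", "I", "I", "I", "M"],  # functional: neutral
    ["R", "I", "I", "I", "M"],  # functional: tolerate
    ["R", "R", "R", "R", "Q"],  # functional: dislike
]

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify one respondent's answer pair into a Kano category."""
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "neutral"))    # delighter (exciting need)
print(kano_category("expect", "dislike"))  # must-be (expected need)
print(kano_category("like", "dislike"))    # one-dimensional (normal need)
```

Across many respondents, each feature is typically assigned the category it receives most often.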
Quantifying and Qualifying Viability
Similarly to Feasibility, we need to validate business viability of our ideas during discovery, not after (Cagan, M., Inspired: How to create tech products customers love, 2017).
Once you have sufficient evidence that you’ve found the right opportunity to address AND you have a solution that helps your target audience do something they couldn’t before, you then need to prove you can get paid enough for this product or service to have a commercially viable business that can sustain itself over time (Wong, R., Lean business scorecard: Viability, 2021).
Viability Hypothesis
If we’re going to do the hard work to realise the full value of a business, I like to know that it’s worth putting the time, energy and potentially money into the venture. It’s good to set your sights on a prize worth winning at the end (Wong, R., Lean business scorecard: Viability, 2021):
- Have we got a large enough group of people we can do business with? A group that has the same job to be done and challenges that we identified in the previous section. A group that may be served by existing competitors that we may need to entice away.
- How much value will that group of people provide to us? When starting out, it’s often useful to validate a business idea by seeing if someone is willing to give you something small like their email address in exchange for something they want before asking them for money. Does this idea have any evidence that someone has exchanged something of value from them in return for a product or service? By definition, something has the potential for Viability if a value exchange takes place, but whether it’s profitable or sustainable for a business is a whole other question.
- What does the customer get in exchange? Another relatively simple way to test the Viability of your business idea is to set up a website landing page that promises some sort of valuable product or service in the future, in return for an email address now. This is a good way to test viability on a small scale, as the value exchange is small for everyone. But as the perceived value of a product or service increases, so should the willingness to pay and the willingness to work harder to get that value.
Quantifying and Qualifying Feasibility
Maybe I’m an idealist, but I believe everything is feasible — given enough time and resources. The task of strategists then becomes understanding stakeholders’ expectations, facilitating the discussions necessary to identify the gap between vision and the current state, and working out what must be true to achieve that vision.
With that being said, the gap between the current state and the vision can only be filled by the people who are actually going to do the work. This is why I think a lot of projects fail: decisions are made (e.g., roadmaps, release plans, investment priorities, etc.) without involving the people who are actually going to do the work.
We need to ensure feasibility before we decide, not after. Not only does this end up saving a lot of wasted time, but it turns out that getting the engineers’ perspective earlier also tends to improve the solution itself, and it’s critical to shared learning (Cagan, M., Inspired: How to create tech products customers love, 2017).
My dad was a civil engineer with the Brazilian Air Force Corp of Engineers and was responsible for building major infrastructure projects in the Amazon Basin. Even though engineering and architecture are disciplines thousands of years old, he used to say there was a degree of “art” that comes with project costs and timelines. That’s why he worked with estimates, based on his previous experience.
Different organizations, industries, and sectors employ different models or formulae to estimate time. At first sight, they always seem mathematical. Still, in most cases, their effectiveness is psychological — either overcoming aversion to estimating, or encouraging more careful thought in those who tend to rush in. Perhaps the most widely known is the Program Evaluation and Review Technique (PERT) formula. To use PERT, you need three estimates of the time it could take to complete a task or activity (Baron, E., The Book of Management: the ten essential skills for achieving high performance, 2010):
- The most likely time required (Tm)
- The most optimistic time assessment (To)
- The most pessimistic time assessment (Tp)
You then use the following formula to estimate the most probable duration for that activity (Te):
Te = (To + 4Tm + Tp)/6
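As a quick sketch, the PERT estimate can be computed like this. The task numbers are hypothetical, and the standard-deviation helper — (Tp − To)/6, a rough measure of the estimate’s uncertainty — is a common companion to the formula above rather than part of it:

```python
def pert_estimate(optimistic: float, most_likely: float,
                  pessimistic: float) -> float:
    """Te = (To + 4*Tm + Tp) / 6 -- a weighted average that leans
    towards the most likely estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic: float, pessimistic: float) -> float:
    """Common companion: rough standard deviation of the estimate."""
    return (pessimistic - optimistic) / 6

# Hypothetical task: 3 days if all goes well, 5 most likely, 13 if not.
print(pert_estimate(3, 5, 13))  # (3 + 20 + 13) / 6 = 6.0 days
print(pert_std_dev(3, 13))      # ~1.67 days of uncertainty
```

The pessimistic outlier pulls the estimate above the most likely value, but only gently, which is exactly the psychological effect the formula is after.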
No matter what the type, size, or budget of a project is, estimating can be a daunting task. Every project request comes with a set of unknowns or a gray area that makes the team or individual nervous about expectations concerning cost, timelines, and level of effort. Because the gray area changes from project to project, there is no simple way of saying, “It always takes us this long to do this thing,” without qualifying it with some additional factors (“with these people, on this project, in this place, at this time, etc.”). (Harned, B, Project Management for Humans, 2017).
That said, even the most experienced engineer (or designer, or architect, etc.) needs to deal with uncertainty by acknowledging that even our best estimates can be based only on assumptions.
With regards to estimations and assumptions, here are a few wise words from my mentors Karsten Hess, Jon Innes, and Richard Howard:
- Estimations are more useful for representing agreement than reality: put the numbers in, compare and learn from other estimators, assess what we can agree on, commit, and move on.
- One can only estimate work one has actually done: if you’ve never done this kind of work, acknowledge it, produce your best guesses, monitor the progress of the work, compare estimations with reality once implementation happens (that’s when the traceability and visibility aspect of the Quantifying and Qualifying Strategy framework comes in), and you’ll get better with practice.
- We should never commit to estimations without consulting the people responsible for or affected by the work: when assessing things like “T-shirt sizes”, “Business Value”, or “Effort”, have your first pass, but keep in mind that you’ll need to confirm and agree on the estimates with whoever is responsible for or affected by the work before communicating your decisions.
Feasibility Hypothesis
Suppose you have taken the necessary steps to focus first on validating the Desirability and Viability of a business. In that case, you should have a clear understanding of what your killer feature is, and you should know why it’s so valuable for your customers (Wong, R., Lean business scorecard: Feasibility, 2021):
- Your Key Partners — the people and organisations who are essential to providing your business with the leanest set of capabilities to run your business and drive growth. If you’re thinking of flying solo on a business venture, consider your limited time and reconsider that choice. You will need to make decisions and trade-offs on what to buy, borrow, or build to operate your business, and it may not always make sense to have all your capabilities in-house from day one, or ever.
- Key Capabilities — the capabilities you need to ideate, create, release, and operate. Typically capabilities are things that allow a business to do something and can be in the form of people, processes, information, or technology.
- Key Activities — what you do on a day-to-day basis with all your capabilities to drive growth and win your customers’ business
Facilitating Investment Discussions through Clear Priorities
It’s essential to set priorities and remove distractions so that people can get on with providing service to customers, thus increasing profits and the value of the business (Kourdi, J., Business Strategy: A guide to effective decision-making, 2015).
As a design manager, I’ve always found that — while defining and shaping the Product Design vision to ensure cohesive product narratives through sound strategy and design principles — the way priorities are defined can potentially create a disconnect from the vision, especially when tough choices around scope need to be made. It’s important that we facilitate discussions around priorities, so the hard choices that need to be made take into account not just feasibility, but also viability and desirability.
The challenge — though — is getting the team to clearly connect goals to priorities.
What slows progress and wastes the most time on projects is confusion about what the goals are or which things should come before which other things. Many miscommunications and missteps happen because person A assumed one priority (make it faster), and person B assumed another (make it more stable). This is true for programmers, testers, marketers, and entire teams of people. If these conflicts can be avoided, more time can be spent actually progressing toward the project goals (Berkun, S., Making things happen: Mastering project management, 2008).
If you have priorities in place, you can always ask questions in any discussion that reframe the argument around a more useful primary consideration. This refreshes everyone’s sense of what success is, visibly dividing the universe into two piles: things that are important and things that are nice, but not important. Here are some sample questions (Berkun, S., Making things happen: Mastering project management, 2008):
- What problem are we trying to solve?
- If there are multiple problems, which one is most important?
- How does this problem relate to or impact our goals?
- What is the simplest way to fix this that will allow us to meet our goals?
Clarity of Priorities through Visualisations
There are a few things you should ask yourself and/or the team when we keep revisiting and renegotiating the scope of work (DeGrandis, D., Making work visible: Exposing time theft to optimize workflow, 2017):
- What is your prioritisation policy, and how is it visualised? How does each and every item of work that has been prioritised help get us closer to our vision and achieve our goals?
- How will you signal when work has been prioritised and is ready to be worked on? In other words — where is your line of commitment? How do people know which work to pull?
- How will we visually distinguish between higher-priority and lower-priority work?
From that perspective, I find it important to come up with visualisations that help remove subjectivity from investment discussions. Let’s talk about a few examples.
Opportunity-Solution Tree
Many teams generate a lot of ideas when they go through a journey-mapping or experience-mapping exercise. There are so many opportunities for improving things for the customer that they quickly become overwhelmed by a mass of problems, solutions, needs, and ideas without much structure or priority (“Opportunity-Solution Tree” in Product Roadmaps Relaunched, Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M., 2017).
Opportunity solution trees are a simple way of visually representing the paths you might take to reach a desired outcome (Torres, T., Continuous Discovery Habits, 2021):
- The root of the tree is your desired outcome—the business need that reflects how your team can create business value.
- Below the desired outcome is the opportunity space: the customer needs, pain points, and desires that, if addressed, could drive that outcome.
- Below the opportunity space is the solution space. This is where we’ll visually depict the solutions we are exploring.
- Below the solution space are assumption tests. This is how we’ll evaluate which solutions will help us best create customer value in a way that drives business value.
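As a minimal sketch, the four levels above can be captured as nested data, which makes the outcome-to-test chain explicit. Every name below is hypothetical:

```python
# Hypothetical opportunity solution tree: outcome -> opportunities
# -> solutions -> assumption tests.
tree = {
    "outcome": "increase weekly active editors by 10%",
    "opportunities": [
        {
            "opportunity": "I can't find my recent files",
            "solutions": [
                {
                    "solution": "recents panel on the home screen",
                    "assumption_tests": [
                        "users recognise file thumbnails (desirability)",
                        "recents can load quickly enough (feasibility)",
                    ],
                },
            ],
        },
    ],
}

# Walking the tree makes the chain of reasoning traceable: a solution
# only matters if its opportunity serves the root outcome, and it is
# only worth building if its assumption tests hold.
for opp in tree["opportunities"]:
    for sol in opp["solutions"]:
        print(f'{sol["solution"]} -> {opp["opportunity"]} '
              f'-> {tree["outcome"]}')
```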
Opportunity solution trees have a number of benefits. They help product trios (Torres, T., Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value, 2021):
- Resolve the tension between business needs and customer needs
- Build and maintain a shared understanding of how they might reach their desired outcome
- Adopt a continuous mindset
- Unlock better decision-making
- Unlock faster learning cycles
- Build confidence in knowing what to do next
- Unlock simpler stakeholder management
Impact Mapping
Like highway maps that show towns and cities and the roads connecting them, Impact Maps lay out what we will build and how these connect to the ways we will assist the people who will use the solution. An impact map is a visualisation of the scope and underlying assumptions, created collaboratively by senior technical people and business people. It’s a mind-map grown during a discussion facilitated by answering four questions: the WHY, WHO, HOW, and WHAT of the problem the team is confronting (Adzic, G., Impact Mapping, 2012).
Prioritisation Grids
We can answer prioritisation questions by figuring out what the tradeoffs are between a solution’s importance and its feasibility/viability (Natoli, J., Think first, 2015).
Furthermore, we can adapt the axes of these prioritisation grids to suit the discussion at hand (value to business and time to market, number of customers impacted and speed of adoption, importance and urgency, etc.), as long as all the stakeholders involved agree on which criteria are most useful to the decision being discussed, and there is enough expertise and data available for the team to make the prioritisation exercise meaningful.
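A minimal sketch of such a grid as code: items are bucketed into quadrants along two agreed-upon axes. The axes, midpoint, quadrant labels, and scores below are assumptions to be negotiated with stakeholders, not fixed rules:

```python
from collections import defaultdict

def quadrant(value: float, feasibility: float, midpoint: float = 5.0) -> str:
    """Place an item in a 2x2 grid (hypothetical labels) based on
    two 0-10 scores agreed with stakeholders."""
    if value >= midpoint:
        return "quick win" if feasibility >= midpoint else "big bet"
    return "fill-in" if feasibility >= midpoint else "question mark"

# Hypothetical ideas scored as (value to business, feasibility).
ideas = {
    "single sign-on": (9, 3),
    "dark mode":      (4, 9),
    "offline sync":   (8, 8),
}

grid = defaultdict(list)
for name, (value, feasibility) in ideas.items():
    grid[quadrant(value, feasibility)].append(name)

for bucket, names in grid.items():
    print(bucket, names)
```

The point of the code is the same as the point of the wall chart: once the axes are agreed, the placement of each item stops being a matter of opinion.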
Use Cases Lists: Pugh Matrix
The UXI Matrix is a simple, flexible tool that extends the concept of the product backlog to include UX factors normally not tracked by agile teams. To create a UX Integration Matrix, you add several UX-related data points to your user stories (Innes, J., Integrating UX into the product backlog, 2012).
The UXI Matrix helps teams integrate UX best practices and user-centered design by inserting UX at every level of the agile process:
- Groom the backlog: During release and sprint planning you can sort, group, and filter user stories in Excel.
- Reduce design overhead: if a story shares several personas with another story in a multi-user system, then that story may be a duplicate. Grouping by themes can also help here.
- Facilitate Collaboration: You can share it with remote team members. Listing assigned staff provides visibility into who’s doing what (see the columns under the heading Staffing). Then team members can figure out who’s working on related stories and check on what’s complete, especially if you create a hyperlink to the design or research materials right there in the matrix.
- Track user involvement and other UX metrics: It makes it easier to convince the team to revisit previous designs when metrics show users cannot use a proposed design, or are unsatisfied with the current product or service. Furthermore, it can be useful to track satisfaction by user story (or story specific stats from multivariate testing) in a column right next to the story.
While working with the Product Management and Software Architecture teams on both the AutoCAD Map3D and AutoCAD Utility Design projects, I’ve created Use Cases Lists (a form of Pugh Matrix): a decision matrix that helps evaluate and prioritize a list of options by first establishing a list of weighted criteria and then evaluating each use case against those criteria, taking the input from the team’s different stakeholders (user experience, business value, etc.) into account.
Using the Outcome-Driven Innovation framework above, you can prioritize the Use Cases based on their Opportunity Scores.
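A weighted-criteria decision matrix in this spirit can be sketched in a few lines. The criteria names, weights, and scores below are all hypothetical:

```python
# Hypothetical weighted criteria (weights sum to 1.0); "effort" is
# inverted so that a higher score always means "more attractive".
criteria = {"user value": 0.5, "business value": 0.3, "effort (inverted)": 0.2}

# Hypothetical use cases scored 0-10 against each criterion by the
# relevant stakeholders (UX, product, engineering, etc.).
use_cases = {
    "import survey data":   {"user value": 8, "business value": 6, "effort (inverted)": 4},
    "bulk-edit attributes": {"user value": 6, "business value": 7, "effort (inverted)": 7},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Sum of each criterion score times its agreed weight."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(use_cases,
                 key=lambda uc: weighted_score(use_cases[uc], criteria),
                 reverse=True)
for uc in ranking:
    print(f"{uc}: {weighted_score(use_cases[uc], criteria):.2f}")
```

The matrix does not make the decision for you; it makes the trade-off explicit, so disagreements land on the weights and scores rather than on the conclusion.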
The Right Time for Facilitating Investment Discussions
You might be asking yourself, “These are all great, but when should I be doing what?” Without knowing what kind of team setup you have and what kinds of processes you run in your organization, the best I can do is map all of the techniques above onto the Double Diamond framework.
The Double Diamond Framework
Design Council’s Double Diamond clearly conveys a design process to designers and non-designers alike. The two diamonds represent a process of exploring an issue more widely or deeply (divergent thinking) and then taking focused action (convergent thinking).
- Discover. The first diamond helps people understand, rather than simply assume, what the problem is. It involves speaking to and spending time with people who are affected by the issues.
- Define. The insights gathered from the discovery phase can help you to define the challenge in a different way.
- Develop. The second diamond encourages people to give different answers to the clearly defined problem, seeking inspiration from elsewhere and co-designing with a range of different people.
- Deliver. Delivery involves testing out different solutions at small-scale, rejecting those that will not work and improving the ones that will.
Map of Facilitating Investment Discussions Activities and Methods
Process awareness characterises the degree to which participants are informed about the process procedures, rules, requirements, workflow, and other details. The higher the process awareness, the more deeply participants engage with the process, and the better the results they deliver.
In my experience, the biggest disconnect between the work designers need to do and the mindset of every other team member is usually about how quickly we tend — when not facilitated — to jump to solutions instead of contemplating and exploring the problem space a little longer.
Knowing when a team should be diverging, when they should be exploring, and when they should be closing will help ensure they get the best out of the power of their collective brainstorming and multiple perspectives, and keep the team engaged.
My colleagues Edmund Azigi and Patrick Ashamalla have created a great set of questions and a cheat sheet that maps which questions are most appropriate for different phases of the product development lifecycle. So the following set of activities is inspired by their cheat sheet.
Facilitating Investment Discussions during “Discover”
This phase has the highest level of ambiguity, so creating shared understanding is really critical. While a degree of back and forth is expected and facilitating investment discussions might be too early, you can still move to clarity faster by having a strong shared vision, good problem framing, and clear priorities defined through outcomes upfront.
Here are my recommendations for suggested quantifying and qualifying activities and methods:
- User Research
- Hypothesis Writing
- Problem Framing
- Challenge Briefs
- Visioneering
- Value Proposition Design
- Jobs to be Done (JTBD)
- Testing Business Ideas
- Value Opportunity Analysis (VOA)
- Desirability Testing
Facilitating Investment Discussions during “Define”
In this phase we should see the level of ambiguity diminishing, and facilitating investment discussions has the highest payoff in mitigating back-and-forth. Helping the team make good decisions by creating great choices is critical. Here are my recommendations for suggested quantifying and qualifying activities and methods:
- User Story Mapping
- Stories/Epics
- Design Sprints / Studio
- Concept Validation
- Outcome-Driven Innovation / JTBD
- Importance vs. Satisfaction Framework
- Kano Model
- Objectives, Goals, Strategy & Measures (OGSM)
- Product Backlog & Sprint Planning
Facilitating Investment Discussions during “Develop”
In this phase we are reaching a point where the cost of changing your mind increases rapidly as time passes. So the team should be focusing on learning as cheaply as possible (by capturing signals from the market), and discussions around investment should answer whether we should pivot, persevere, or stop.
Here are my recommendations for suggested quantifying and qualifying activities and methods:
- User Story Mapping
- Design Studio
- Specifications
- Collaborative Prototyping
- UXI Matrix (Pugh Matrix)
- Usability Testing
- Usefulness, Satisfaction, and Ease of Use (USE)
- American Customer Satisfaction Index (ACSI)
- System Usability Scale (SUS)
- Usability Metric for User Experience (UMUX)
- UMUX-Lite
Facilitating Investment Discussions during “Deliver”
In this phase it is too late to facilitate investment discussions. The best you can do is collect data from real customer usage for visibility and traceability, and make the hard choices about whether to pivot, persevere, or stop in the next iteration of the product.
Here are my recommendations for suggested quantifying and qualifying activities and methods:
- Designer – Developer Pairing
- Fit-and-Finish
- Pirate Metrics (a.k.a. AARRR!)
- UXI Matrix (Pugh Matrix)
- Objectives, Goals, Strategy & Measures (OGSM)
Facilitating Quantifying and Qualifying Discussions
I’m of the opinion that designers — instead of complaining that everyone else is jumping too quickly into solutions — should facilitate the discussions and help others raise their awareness of the creative and problem-solving process.
I’ll argue for the need for facilitation in the sense that — if designers want to influence the decisions that shape strategy — they must step up to the plate and become skilled facilitators who respond, prod, encourage, guide, coach, and teach as they guide individuals and groups to make decisions that are critical in the business world through effective processes.
That said, my opinion is that facilitation here does not only mean “facilitating workshops”, but facilitating the decisions, regardless of what kinds of activities are required.
Recommended Reading
Adzic, G. (2012). Impact Mapping: Making a big impact with software products and projects (M. Bisset, Ed.). Woking, England: Provoking Thoughts.
Almquist, E., Senior, J., & Bloch, N. (2016). The Elements of Value: Measuring—and delivering— what consumers really want. Harvard Business Review, (September 2016), 46–53.
Baron, E. (2010). The Book of Management: the ten essential skills for achieving high performance. London, UK: Dorling Kindersley.
Berkun, S. (2008). Making things happen: Mastering project management. Sebastopol, CA: O’Reilly Media.
Bland, D. J., & Osterwalder, A. (2020). Testing business ideas: A field guide for rapid experimentation. Standards Information Network.
Brown, T., & Katz, B. (2009). Change by design: how design thinking transforms organizations and inspires innovation. [New York]: Harper Business
Cagan, M. (2017). Inspired: How to create tech products customers love (2nd ed.). Nashville, TN: John Wiley & Sons.
Calabretta, G., Gemser G., Karpen, I., (2016) “Strategic Design: 8 Essential Practices Every Strategic Designer Must Master“, 240 pages, BIS Publishers; 1st edition (22 Nov. 2016)
Callahan, S. (2016). Putting Stories to Work: Mastering Business Storytelling. Melbourne, Australia: Pepperberg Press (18 Mar. 2016).
Croll, A., & Yoskovitz, B. (2013). Lean Analytics: Use Data to Build a Better Startup Faster. O’Reilly Media.
DeGrandis, D. (2017). Making work visible: Exposing time theft to optimize workflow. Portland, OR: IT Revolution Press.
Design Council. (2015, March 17). What is the framework for innovation? Design Council’s evolved Double Diamond. Retrieved August 5, 2021, from designcouncil.org.uk website: https://www.designcouncil.org.uk/news-opinion/what-framework-innovation-design-councils-evolved-double-diamond
Fish, L., Kiekbusch, S., (2020), “The State of the Designer” in The Designer’s Guide to Product Vision, 288 pages, New Riders; 1st edition (August 2, 2020)
Garbugli, É. (2020). Solving Product: Reveal Gaps, Ignite Growth, and Accelerate Any Tech Product with Customer Research. Wroclaw, Poland: Amazon.
Gothelf, J. (2019, November 8). The hypothesis prioritization canvas. Retrieved April 25, 2021, from Jeffgothelf.com website: https://jeffgothelf.com/blog/the-hypothesis-prioritization-canvas/
Gothelf, J., & Seiden, J. (2017). Sense and respond: How successful organizations listen to customers and create new products continuously. Boston, MA: Harvard Business Review Press.
Govella, A. (2019). Collaborative Product Design: Help any team build a better experience. Sebastopol, CA: O’Reilly Media.
Govindarajan, V., & Trimble, C. (2010). The other side of innovation: Solving the execution challenge. Boston, MA: Harvard Business Review Press.
Hanington, B., & Martin, B. (2012). Universal methods of design: 100 Ways to research complex problems, develop innovative ideas, and design effective solutions. Beverly, MA: Rockport.
Harned, B. (2017). Project Management for Humans: Helping People Get Things Done (1st edition). Brooklyn, New York USA: Rosenfeld Media.
Innes, J. (2012, February 3). Integrating UX into the product backlog. Retrieved July 28, 2021, from Boxesandarrows.com website: https://boxesandarrows.com/integrating-ux-into-the-product-backlog/
Kalbach, J. (2020), “Mapping Experiences: A Guide to Creating Value through Journeys, Blueprints, and Diagrams“, 440 pages, O’Reilly Media; 2nd edition (15 December 2020)
Kortum, P., & Acemyan, C. Z. (2013). How Low Can You Go? Is the System Usability Scale Range Restricted? Journal of Usability Studies, 9(1), 14–24. https://uxpajournal.org/wp-content/uploads/sites/7/pdf/JUS_Kortum_November_2013.pdf
Kourdi, J. (2015). Business Strategy: A guide to effective decision-making. New York, NY: PublicAffairs.
Lafley, A. G., & Martin, R. L. (2013). Playing to Win: How Strategy Really Works. Boston, MA: Harvard Business Review Press.
Lewis, J. R., Utesch, B. S., & Maher, D. E. (2015). Measuring perceived usability: The SUS, UMUX-LITE, and AltUsability. International Journal of Human-Computer Interaction, 31(8), 496–505.
Lewrick, M., Link, P., & Leifer, L. (2018). The design thinking playbook: Mindful digital transformation of teams, products, services, businesses and ecosystems. Nashville, TN: John Wiley & Sons
Lombardo, C. T., McCarthy, B., Ryan, E., & Connors, M. (2017). Product Roadmaps Relaunched. Sebastopol, CA: O’Reilly Media.
Lund, A. M. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8(2), 3–6. www.stcsig.org/usability/newsletter/index.html
Moorman, J. (2012). Leveraging the Kano Model for Optimal Results. UX Magazine. Retrieved February 11, 2021, from https://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results
Oberholzer-Gee, F. (2021). Better, simpler strategy: A value-based guide to exceptional performance. Boston, MA: Harvard Business Review Press.
Oberholzer-Gee, F. (2021). Eliminate Strategic Overload. Harvard Business Review, (May-June 2021), 11.
Olsen, D. (2015). The lean product playbook: How to innovate with minimum viable products and rapid customer feedback (1st ed.). Nashville, TN: John Wiley & Sons.
Patton, J. (2014). User Story Mapping: Discover the whole story, build the right product (1st ed.). Sebastopol, CA: O’Reilly Media.
Perri, M. (2019). Escaping the build trap. Sebastopol, CA: O’Reilly Media.
Pichler, R. (2016). Strategize: Product strategy and product roadmap practices for the digital age. Pichler Consulting.
Podeswa, H. (2021). The Agile Guide to Business Analysis and Planning: From strategic plan to continuous value delivery. Boston, MA: Addison Wesley.
Polaine, A., Løvlie, L., & Reason, B. (2013). Service design: From insight to implementation. Rosenfeld Media.
Rubin, J., & Chisnell, D. (2011). Handbook of usability testing: How to plan, design, and conduct effective tests (2nd ed.). Chichester, England: John Wiley & Sons.
Sauro, J., & Lewis, J. R. (2016). Quantifying the user experience: Practical statistics for user research (2nd Edition). Oxford, England: Morgan Kaufmann.
Sharon, T. (2016). Validating Product Ideas (1st Edition). Brooklyn, New York: Rosenfeld Media.
Torres, T. (2021). Continuous Discovery Habits: Discover Products that Create Customer Value and Business Value. Product Talk LLC.
Tullis, T., & Albert, W. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics (2nd edition). Morgan Kaufmann.
Ulwick, A. (2005). What customers want: Using outcome-driven innovation to create breakthrough products and services. New York, NY: McGraw-Hill.
Van Der Pijl, P., Lokitz, J., & Solomon, L. K. (2016). Design a better business: New tools, skills, and mindset for strategy and innovation. Nashville, TN: John Wiley & Sons.
Wong, R. (2021). Lean business scorecard: Desirability. Retrieved February 25, 2022, from Medium website: https://robinow.medium.com/lean-business-scorecard-desirability-ede59c82da78
Wong, R. (2021). Lean business scorecard: Feasibility. Retrieved February 25, 2022, from Medium website: https://robinow.medium.com/lean-business-scorecard-feasibility-aa36810ae779
Wong, R. (2021). Lean business scorecard: Viability. Retrieved February 25, 2022, from Medium website: https://robinow.medium.com/lean-business-scorecard-viability-de989a59aa74