Value Target Analysis — Part 4

Apply the Methodology

Download the Entire Paper (PDF)

Overview of Method

The first step in performing Value Target Analysis is to delineate the logic of progress for a given job via a Progress Map. This entails defining all success outcome customer value metrics (CVMs) for a job to be done, delineating the ideal process for generating those success outcomes via job action steps, capturing all CVMs associated with these job action steps, and understanding the predictive relationships between job action CVMs and success outcomes CVMs. During the development of a Progress Map, a target customer segment(s) is identified using the Job Segmentor tool, a qualitative method that groups customers based on similar job circumstance.

Assuming the group interview method, small groups of individuals belonging to the same job segment are then asked to rate each CVM on the Progress Map for importance and satisfaction. An analysis of these ratings results in a set of value targets for the customer segment that filter into one of four categories — undershot value targets, overshot value targets, must-be value targets, and indifferent value targets. For the Web-survey method, the rating process is done with statistical software. As value targets are identified, one by one, the following is done concurrently for each value target before rating the next CVM:

  • When an undershot, overshot, or must-be value target is identified, some additional data is needed to make these value targets actionable for design/engineering teams. Using your product/service or a competitive offering as a solution-in-use benchmark, a current baseline value is determined for each undershot, overshot, and must-be value target; this is the current performance of the solution-in-use along that dimension of value. A future value is then set for each undershot and overshot value target, which is the minimum future performance of the solution-in-use along that dimension of value. A future value is not required for must-be value targets, since the appropriate action is to maintain current value along these aspects at least cost.
  • For each undershot value target, a facilitated discussion is held with the group to determine the job circumstance that triggers the moment of struggle (MoS) indicated by the undershot value target. Individuals are first asked to describe the MoS as they experience it via a solution-in-use. The team then investigates the root cause of the MoS. Understanding the job circumstance driving undershot value enables a demand creation team to find the best means possible to effectively solve for that MoS at the lowest cost.
  • For each indifferent CVM, a facilitated discussion is held with the group that rates it as such to explore how indifferent value can be turned into customer delight and then into undershot value — to be satisfied by your company before competitors are aware. When individuals elaborate on the aspects of job execution related to an indifferent CVM, the innovation team may have creative insight into a combination of technologies, design methods, and/or business models that would enable customers to get the job done better in ways they hadn’t considered before. Eureka!

STEP 1: Create a Progress Map for a Job to Be Done

The Progress Map template represents a generic format and is intended as a guide only. It’s best to create a Progress Map on a large whiteboard or wall using different colors and sizes of sticky notes, which are useful for keeping information concise, keeping it mobile, and indicating relationships. Create your Progress Map in a place where it can be kept safely displayed until Value Target Analysis is complete. Ideally, this location will be suitable to hold customer job interviews and to facilitate discussion with other collaborators. Revise the Progress Map as needed when new information comes to light.

If the aim is to increase the customer value of an existing product/service (value enhancement), a Progress Map can be created from a small number of customer interviews because the logic of progress for the job is already familiar. You can often create a pro forma Progress Map before talking to customers and then conduct 5 or 6 customer job interviews to refine the Progress Map. Start by asking, “What job are you trying to get done by using our product/service?”

If the aim is to create a new product/service, the logic of progress for the job to be done is less familiar. In this case, the Progress Map is developed while conducting customer job interviews. Thirty individual customer job interviews are recommended for a value innovation project.

The reason for interviewing customers individually is that we want divergence in the data collected from customers for the purpose of eliciting all customer value metrics (CVMs) for the Progress Map. In contrast, group interviews facilitate convergence through discussion and interaction. Think about the story of 10 blind men trying to describe an elephant. Individually, none of them has the whole picture, but collectively they can describe the whole elephant. We want diverse perspectives from those interviewed to surface all the CVMs that operationalize the Progress Map. The purpose of interviewing 30 individuals is not to generalize a statistically significant effect of some phenomenon to a larger population. Empirical research studies have established that it takes 30 customers to surface virtually all “customer needs” for a solution (Griffin & Hauser, 1993). In this case, we are not focused on needs or directly on a solution at this point, but rather on dimensions of value that define the ideal process for executing a job.

A Progress Map is an effective way to focus customer job discussions with team members, your company at large, business partners, customers, and non-customers on the topics that matter for innovation purposes. Innovation teams are able to quickly elicit the information they need to guide their innovation efforts because they know what to look for and what to ask about. They avoid “fishing” and taking time-consuming detours through the “swamp” of customer job complexity.

General Progress Map Procedure:

  1. Define all functional, emotional, and social success outcome CVMs (for all job executors).
  2. Define the core action step on the Progress Map required to generate the success outcomes.
  3. Delineate the job steps that must precede the core action step (otherwise the core action cannot be accomplished). Then delineate the job steps that must follow the core action step to ensure that the core step is accomplished and can be repeated the next time the job is executed.
  4. Define the job action CVMs associated with each job step. Start with the core action step, then the job steps preceding the core step, and, finally, the job steps that follow the core step. As you are doing this, indicate which job action metrics predict what success outcomes. Indicate these relationships by using the same color sticky notes or colored dots.

Keep in mind that your goal is to create a universal Progress Map that is free of customer job circumstance and job solutions. As such, the Progress Map will apply to all people and organizations trying to execute the job. Also note that in reality a customer job typically has between 10 and 15 job action steps, not 8 steps as depicted on the Progress Map template. Typically, there are between 2 and 5 job action CVMs for each job step. The core step can have well over 5 job action CVMs depending on the number of success outcomes involved. There are typically between 6 and 20 success outcomes for any job.

Identify a Job Segment(s)

Use the Job Segmentor tool while creating a Progress Map to identify a customer job segment(s). Job Segmentor uses multi-directional affinity grouping to reveal job executors who share the same or similar circumstances with respect to getting a job done. This is a straightforward yet highly effective way to segment job executors without using sophisticated quantitative segmentation methods that are time consuming and expensive.

Download Job Segmentor Template (PDF)

If an innovation effort is focused on improving/extending the value of an existing product/service, then you may not need to perform job segmentation. Your current customers have already self-segmented by hiring and using your product/service over and over to get a job done. However, if this is a particularly large customer segment and there seem to be many job diversity indicators within the group, or if the product/service is losing customers to a competitive solution, then you may want to consider interviewing 30 individual customers and performing this segmentation procedure as though you were creating a new product/service. If job segmentation results in two or more customer groups, then a Value Target Analysis for each group could reveal some lucrative opportunities.

To begin, re-create the Job Segmentor template on the wall next to the Progress Map. You will need to assign a unique customer number to every individual interviewed — usually customer 1, customer 2, customer 3, and so forth. As you interview each individual to elicit customer value metrics for the Progress Map, ask them some additional questions for segmentation purposes:

  1. Duplicate the success outcomes for each individual on sticky notes and then place these sticky notes inside the “Job Executor Success Outcomes” box at the top of the Job Segmentor template. Ask each individual to rate the importance of each of their success outcomes from 1 to 9, where 1 is not important and 9 is extremely important; circle these rating numbers. Write the unique customer number in the upper right corner of each sticky note.
  2. Ask each individual about the situational factors that are related to the job-to-be-done and the context(s) in which the job is executed. Reference the Job Segmentor template for specific questions to uncover these two aspects of job circumstance. Write this information on sticky notes and place them in the “Situational Factors” and “Job Contexts” boxes where they belong. Write the unique customer number on each sticky note.
  3. Profile each individual based on demographic, psychographic, and behavioral data. The trick is to use a minimum combination of these data to create a job executor persona that can be used to easily identify these individuals out in the world. Record this information on sticky notes and place them in the “Job Executor Personas” box. Write the unique customer number on each sticky note.
  4. After all individuals have been interviewed, group the sticky notes by similarity within each of the four boxes. Sticky notes that have the same customer number on them must move together as a block because they represent a whole individual. Clump these sticky notes together. Individuals who have similarities within each box are grouped to the left. Individuals who are not similar to others are outliers and are moved to the right. The same success outcomes are grouped together based on the following 3-point rating spreads: 1–3, 4–6, or 7–9.
  5. Now, look for patterns where the unique customer numbers on the sticky notes line up vertically with horizontal groups containing the same unique customer numbers; this is the vertical affinity of similar groups. The customer number is used as a key to vertically group the groups within the four boxes. Because you’ll never get a perfect fit, you will need to do some interpreting. Further, expect that this exercise will be messy because that’s the nature of job segmentation, especially with a small sample size. What you are looking for is enough vertical similarity among the groups within the horizontal boxes to identify a job segment(s). That is, if certain groups line up vertically, then you have identified a job segment(s).
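The 3-point rating spreads and clumping described above can also be sketched programmatically when working from transcribed interview notes. Here is a minimal Python illustration; the function names, sample outcomes, and customer numbers are hypothetical and not part of the Job Segmentor tool itself:

```python
# Bucket 1-9 importance ratings into the three spreads used for
# affinity grouping: 1-3 (low), 4-6 (mid), 7-9 (high).
def rating_spread(rating: int) -> str:
    if not 1 <= rating <= 9:
        raise ValueError("rating must be between 1 and 9")
    if rating <= 3:
        return "1-3"
    if rating <= 6:
        return "4-6"
    return "7-9"

# Group (customer, outcome, rating) sticky notes by outcome and spread,
# mirroring the wall exercise: same outcome, same spread -> same clump.
def group_outcomes(notes):
    groups = {}
    for customer, outcome, rating in notes:
        key = (outcome, rating_spread(rating))
        groups.setdefault(key, []).append(customer)
    return groups

notes = [
    (1, "arrive on time", 8),
    (2, "arrive on time", 9),
    (3, "arrive on time", 5),
    (1, "minimize cost", 2),
]
# Customers 1 and 2 clump together on ("arrive on time", "7-9");
# customer 3 falls into the separate ("arrive on time", "4-6") clump.
print(group_outcomes(notes))
```

The vertical affinity step (step 5) remains interpretive; the sketch only automates the horizontal clumping within one box.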

STEP 2: Rate the Customer Value Metrics

A first round of customer interviews was held for the purpose of creating a Progress Map. In Step 2, a different group of customers is asked to rate each customer value metric on the Progress Map for importance and satisfaction.

There are two ways to engage customers: group interviews or a Web-based survey. We prefer group interviews because the customer ratings are more accurate and related job circumstance is more easily surfaced. Ultimately, the best choice depends on factors such as available time, cost considerations, access to customers, complexity of the job, and the type of innovation opportunity being pursued (value enhancement versus value innovation). Group interviews can also be combined with a Web-based survey to cross-validate the customer ratings obtained from each method.

Group Interviews

The group interview approach works well for capturing importance and satisfaction ratings and other data from small samples of individuals (recall that customers were interviewed individually when creating a Progress Map). That’s because group discussion and interaction help individuals make reality connections between the CVMs they are asked to rate and a solution-in-use via the convergence of experiences among individuals. This is important because individuals are not accustomed to thinking abstractly about value; rather, their experiences center on solutions. When this reality connection is made, they can more accurately rate CVMs. Otherwise, the rating process can become an academic or theoretical exercise. When this happens, the data can become arbitrary, making the conclusions drawn from the data weak or even spurious.

We recommend interviewing 30 customers, split into three groups of 10 customers each. The advantages of interviewing groups of customers are that:

  • It’s more time and cost effective than interviewing 30 individuals.
  • Interactions with and among customers in a group generate the solution context that provides a reality basis for rating the customer value metrics.
  • Multiple viewpoints among individuals in a group effectively surface the job circumstance causing undershot value targets.
  • It is desirable to have group consensus on the current baseline and future values for undershot, overshot, and must-be value targets because these values can vary widely among individuals.
  • Group discussion can generate insights on what may be possible for indifferent value.

Web-Based Survey

The second approach for prioritizing customer value metrics is a Web-based survey. The advantage of a survey is that it increases the sample size of customers interviewed, but this comes with trade-offs. Survey respondents do not have access to the solution context that provides a reality basis for rating the customer value metrics. Without this context, rating customer value metrics can be seen by survey respondents as a theoretical exercise, which can reduce the quality of the ratings. This can be offset somewhat by increasing the sample size to 180 or more customers and then scrubbing the data, but some of the data may still be arbitrary. Further, as the number of survey respondents increases, so does the cost of the survey, since respondents are normally compensated for their participation.

Another drawback of a survey is that related job circumstance for undershot value targets cannot be captured directly from customers. There’s simply no practical way to structure this in a survey. The alternative is to use the logic of the Progress Map with customer empathy to ascertain related job circumstance. However, this is not preferred because innovation teams can unwittingly introduce flawed assumptions and biases that can skew or distort this information. Further, a survey does not provide the learning opportunities that are generated when innovation teams interact directly with customers around a job to be done — for example, the ways that indifferent value can be turned into customer delight (as Apple Computer has so often done). Such interactions can lead to new insights and directions for innovation.

Analysis of Customer Ratings

The following discussion assumes that the group interview method is used. However, the actual rating method itself applies equally to a Web-based survey. We recommend interviewing the groups in the same room that hosts the Progress Map. Begin by explaining the job to be done and how it is conceptualized on the Progress Map. Clarify the intention of the session and what you’re asking individuals to do.

Customers rate each CVM using Likert-type scales — one scale for importance and one scale for satisfaction. Each scale has seven scale items consisting of three “negative” items on the left side and three “positive” items on the right side, with a neutral item in the middle, resulting in a balanced scale.

For the importance scale, 1 is “not at all important” and 7 is “extremely important.”

For the satisfaction scale, 1 is “completely dissatisfied” and 7 is “completely satisfied.”

Although there are different ways to analyze ordinal data from Likert-type scales, the top box, bottom box, and middle box scoring method is used for the purpose of Value Target Analysis. This method sums the percentages of interviewed customers who choose the top two items (or boxes), the bottom two items, and the middle items on the rating scales.

The goal is to filter all rated CVMs into value target categories, not to analyze an effect size via a comparison of groups as is typically done in a research project. Further, mean scores (somewhat controversial for ordinal data, since averaging treats ordinal ratings as if they were interval measurements) yield poor discrimination among customer ratings. The box scoring method is the simplest and most effective way to accomplish the purpose of Value Target Analysis.
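The box scoring arithmetic itself is simple percentage sums. A minimal Python sketch (the function name and sample ratings are illustrative):

```python
from collections import Counter

# Top-, bottom-, and middle-box percentages for a 7-point Likert scale.
# ratings: one 1-7 response per interviewed customer for a single CVM.
def box_scores(ratings):
    n = len(ratings)
    counts = Counter(ratings)
    top = 100 * (counts[6] + counts[7]) / n            # items 6-7
    bottom = 100 * (counts[1] + counts[2]) / n         # items 1-2
    middle = 100 * (counts[3] + counts[4] + counts[5]) / n
    return top, bottom, middle

# Ten customers in one interview group rating one CVM for importance:
importance = [7, 6, 7, 7, 6, 7, 5, 7, 6, 4]
top, bottom, middle = box_scores(importance)
print(top)  # 80.0 -> 80 points on the 100-point discrimination scale
```

The same function is applied to the satisfaction ratings; the top-two or bottom-two box score is then compared against the filter criteria described below.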

Provide each individual with generic rating sheets to record his or her responses to the customer value metrics (CVMs). Read aloud each CVM and indicate the job step that the metric is associated with. Ask individuals to rate each CVM for importance and satisfaction using one rating sheet. Collect the rating sheets and enter the rating data into Excel to determine which category the CVM filters into: undershot, overshot, must-be, or indifferent. In Excel, simple logic formulas (not calculations) filter the data. Entering the rating sheet data for a single CVM into Excel and identifying the value target category takes 2 minutes or less. If using the survey method, data from the online survey is exported into a statistical software program like SPSS for analysis. Unlike the Web-based survey, the group interview method does not require a significant knowledge of statistics, nor does it require the use of sophisticated statistical software.

Once an undershot, overshot, and must-be value target is identified, some additional data is needed to make these value targets actionable for a design/engineering team; we will discuss the additional information needed for indifferent CVMs later. Using your product/service or a competitive offering as a solution-in-use benchmark, a current baseline value is ascertained for each undershot, overshot, and must-be value target. This is the current performance of the solution-in-use along that undershot, overshot, and must-be dimension of value.

Current baseline values and future values must be quantitative data: numbers that measure an amount or frequency of something (how many, how much, or how often). Units of measure include time, distance, size, weight, speed, currency, attempts, cycles, errors, calls, hang ups, and frequency — to name just a few. Note that the probability or likelihood of a certain result cannot be used as a performance value on the customer side because this inherently assumes prediction.

These kinds of metrics belong on the value producer side since they measure product and service efficacy — something that customers have no control over. On the customer side, success outcome CVMs quantitatively measure actual results. If a result falls short of the customers’ expectation, then the value producer needs to take a close look at how the job action value targets are instantiated in the solution design and how a revised solution design can more effectively generate that success outcome.

Next, we discuss the underlying method used for identifying value targets. Although Excel can automatically identify these when analyzing CVMs one at a time in a group interview setting, you may at times want to perform this analysis manually, which is actually easy to do. Regardless, it is important that practitioners understand the logic of how this method works so that it can be explained to customers, managers, and others involved in an innovation effort. The method is actually quite simple and intuitive.

Identifying Undershot Value Targets

For a customer value metric, determine the sum percentage of all customers who rate the metric as “very important” (item 6) and “extremely important” (item 7). This is called the top-two box score for the scale. The sum percentage is then placed along 100 points of discrimination for importance — the higher the points, the greater the importance of the customer value metric. A filter criterion (e.g., above 60, 70, or 80) is used on the 100-points of discrimination to determine if the customer value metric is undershot (along with the dissatisfaction criterion).

Perform the same procedure for the satisfaction rating. Determine the sum percentage of all customers who rate the customer value metric as “completely dissatisfied” (item 1) and “mostly dissatisfied” (item 2). This is called the bottom-two box score for the scale. The sum percentage is then placed along 100 points of discrimination for dissatisfaction — the higher the points, the greater the dissatisfaction with the customer value metric. A filter criterion (e.g., above 60, 70, or 80) is used on the 100-points of discrimination to determine if the customer value metric is undershot (along with the importance criterion).

We consider a customer value metric to be undershot if it is 70 or greater on the 100-points of discrimination for importance and 70 or greater on the 100-points of discrimination for dissatisfaction. This indicates that a vast majority of customers want more value from a solution along this dimension of value to get a job done better. Although we recommend the 70/70 undershot filter criteria, they can be set based on your own sensitivity preferences. Further, the filter criteria do not have to be the same for importance and satisfaction. There are times when you may want to set importance higher than satisfaction (80/60 for example).
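The undershot filter reduces to two comparisons against the chosen cutoffs. A sketch in Python, assuming the box scores have already been computed (the function name and cutoff defaults reflect the recommended 70/70 criteria, which are adjustable):

```python
# Undershot filter: compare the importance top-two box score and the
# satisfaction bottom-two box score against the filter criteria.
# The criteria need not be equal (e.g., 80 for importance, 60 for
# dissatisfaction).
def is_undershot(importance_top2: float,
                 dissatisfaction_bottom2: float,
                 importance_cutoff: float = 70,
                 dissatisfaction_cutoff: float = 70) -> bool:
    return (importance_top2 >= importance_cutoff
            and dissatisfaction_bottom2 >= dissatisfaction_cutoff)

print(is_undershot(85, 78))          # True: both scores clear 70/70
print(is_undershot(85, 78, 80, 60))  # True with an 80/60 preference
print(is_undershot(65, 90))          # False: importance below cutoff
```

This is the kind of simple logic formula (not a calculation) that the Excel workflow described earlier applies to each CVM.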

Once an undershot value target has been identified, a current baseline value and a future value are required to make that value target actionable by a design/engineering team. Individuals in the group who rated the CVM as undershot are asked to suggest a current baseline value based on their experience with the solution-in-use benchmark. Individuals who did not rate the CVM undershot are excluded (they are non-qualified). Recall that the solution-in-use benchmark is either a current product/service belonging to your company or a competitor.

In either case, the current baseline value pertains to the solution-in-use benchmark, not multiple solutions that may be used by individuals in the group. So, individuals who are using a different solution are excluded as well. The remaining qualified individuals will likely suggest some different numbers for the current baseline value. The best way to get one value is to force a consensus with voting/agreement or simply take the average of the numbers suggested. This number is then averaged with the final numbers suggested by the other groups to get a final current baseline value.

The same qualified individuals are asked to suggest a future value. Start with the following question: “What does the performance level need to be for the solution-in-use along this dimension of value that removes the moment of struggle?” There are surely design/cost trade-off realities, but these are not considered at this point. All that is needed now is the minimum performance level required to remove the moment of struggle. To get one value, force a consensus with voting/agreement or simply take the average of the numbers suggested. This number is then averaged with the final numbers suggested by the other groups to get a final future value.
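When voting does not produce a consensus, the fallback described above is simple averaging: average the numbers suggested by the qualified individuals within each group, then average the per-group results. A sketch (the metric and suggested values are illustrative):

```python
# Consolidate a current baseline value or future value: average within
# each group of qualified individuals, then average the group results.
def consolidate(group_suggestions):
    group_means = [sum(g) / len(g) for g in group_suggestions if g]
    return sum(group_means) / len(group_means)

# e.g., minutes to complete the core job step, as suggested by the
# qualified individuals in three interview groups:
groups = [[12, 10, 14], [11, 13], [15, 9, 12]]
print(consolidate(groups))  # 12.0 -> final consolidated value
```

Averaging group means rather than pooling all suggestions keeps each group's consensus weighted equally, regardless of how many qualified individuals it contains.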

Having both of these 100-point discrimination scales move in the same direction, regardless of how they are oriented for importance and satisfaction on either end, makes it possible to logically filter the ratings into value target categories without applying a mathematical algorithm.

Example:

Eighty-five percent of customers interviewed rate the importance of a particular customer value metric as “very important” (item 6) or “extremely important” (item 7). Therefore, the overall importance rating for this metric is 85 along 100-points of discrimination for importance with a filter criterion of 70.

Seventy-eight percent of customers interviewed rate the satisfaction of this customer value metric as “completely dissatisfied” (item 1) or “mostly dissatisfied” (item 2). Therefore, the overall satisfaction rating for this metric is 78 along 100-points of discrimination for dissatisfaction with a filter criterion of 70.

Based on our 70/70 criteria preference, a customer value metric that is 85 for importance and 78 for dissatisfaction along the 100-points of discrimination represents a very good opportunity to increase the customer value of a new or existing solution along this dimension of value, because both ratings exceed the 70/70 criteria.

The same general procedure is applied to identify secondary undershot value targets. Secondary undershot value targets are moderately important and highly dissatisfied. Specifically, we look at the percentage of respondents that rate the importance of a customer value metric as “moderately important” (item 5) while rating the satisfaction for the metric as “completely dissatisfied” (item 1) and “mostly dissatisfied” (item 2).

We ask clarification questions to determine if these CVMs are actually secondary undershot value targets or if they should be categorized as indifferent value targets because these CVMs are “on the line,” so to speak. We ask questions like, “Would you de-value a product/service if there are no features or benefits that address this dimension of value?” and “Would you rather trade-off features and benefits that address this dimension of value for a lower selling price?” A strong “yes” answer to the first question indicates a secondary undershot value target. A “yes” answer on the second question indicates that customers are somewhat indifferent about the customer value metric.

Identifying the Trigger for a Moment of Struggle

An undershot value target indicates a moment of struggle (MoS). Before proceeding to rate the next CVM, identify the job circumstance that triggers the MoS. First, ask an individual to describe the moment of struggle as he or she experiences it, then investigate the root cause of the MoS. Recall that a MoS can be triggered by the drop in the performance of a solution-in-use (the simplest and most obvious trigger). A MoS can also be triggered by situational factors that prevent customers from obtaining or using a solution to get a job done well or at all. These factors include job constraints (obstacles, barriers, compensating behaviors), unsatisfactory trade-offs (time, effort, resources, values, risk, success outcomes), and macro factors (events/occurrences, internal conditions/states, policies, compliance).

A MoS can also be triggered when a solution-in-use does not perform well in a particular job context — the time, the place, with or without whom/what the job is executed. For instance, a pair of ear buds for a cell phone works well at a desk or in a car but works poorly on a treadmill. In another example, an online video chat service works well when used with only one person but works poorly when trying to connect multiple people.

Understanding the job circumstance driving an undershot CVM enables a demand creation team to find the best means possible to effectively solve for that MoS at the lowest cost. A good starting point is to ask the group, “What is the circumstance that makes this aspect of job execution time consuming, arduous, and resource intensive?” or “What circumstance makes this aspect of job execution go off track?” If individuals reference certain features and benefits of a solution-in-use, ask them why these are important or unimportant to them. Asking “why” a few times usually connects with job circumstance, shifting the focus back to the job to be done.

Identifying Overshot Value Targets

The same general procedure is used for identifying overshot value targets, except that the focus is on customer value metrics that have little importance and are highly satisfied. To identify these value targets, we look at how customers rate the opposite ends of the importance and satisfaction scales. In this case we have 100-points of discrimination for unimportance and 100-points of discrimination for satisfaction moving in the same direction, for the reasons discussed earlier.

Once an overshot value target has been identified, a current baseline value and a future value are required to make that value target actionable for a design/engineering team. The same procedure is used as was described for undershot value targets. Qualified individuals in the group are asked to suggest a current baseline value based on their experience with the solution-in-use. These individuals are then asked to suggest a future value. Start with the following question: “What is a satisfactory performance level for the solution-in-use along this dimension of value?” The goal is to convert the overshot value target to a must-be value target. All that is needed now is the minimum performance level required to maintain satisfaction for this aspect of job execution.

Example:

Seventy-two percent of customers interviewed rate the importance of a particular customer value metric as “not at all important” (item 1) or “low importance” (item 2). Therefore, the overall rating for this metric is 72 along 100-points of discrimination for unimportance (the higher the points, the greater the unimportance) with a filter criterion of 70.

Eighty-four percent of customers interviewed rate the satisfaction of this customer value metric as “mostly satisfied” (item 6) or “completely satisfied” (item 7). Therefore, the overall rating for this metric is 84 along 100-points of discrimination for satisfaction (the higher the points, the greater the satisfaction) with a filter criterion of 70.

We use the same 70/70 filter criteria here as we do for identifying undershot value targets. A customer value metric that is 72 for unimportance and 84 for satisfaction along 100-points of discrimination for unimportance and satisfaction represents a very good opportunity to scale-down value along this dimension. By doing so, it may be possible to lower the cost structure of a product/service. Again, you can set these overshot filtering criteria based on your sensitivity preferences.

The same general procedure is used to identify secondary overshot value targets. Secondary overshot value targets have little importance and are highly satisfied. Specifically, we look at the percentage of respondents that rate the importance of a customer value metric as “slightly important” (item 3) and rate satisfaction as “completely satisfied” (item 7) or “mostly satisfied” (item 6).

Because these are on the line, they can also be interpreted as secondary must-be value targets. That’s because the only thing that separates secondary overshot value targets and secondary must-be value targets is a small degree of importance. The lack of discrimination between “slightly important” and “moderately important” can make the distinction between the two somewhat hazy.

To sort out the difference, we ask customers a few clarification questions such as, “Do you think that current solutions have overdone features that address this dimension of value?” and “Does a product/service have to include feature(s) and benefits that address this dimension of value before you even consider buying/using it?” A strong “yes” answer to the first question indicates a secondary overshot value target. A “yes” answer to the second question indicates a secondary must-be value target.

Identifying Must-Be Value Targets

Customer value metrics that are rated by customers as highly important and highly satisfied are must-be value targets. Customers expect this value in all competing job solutions, which means that they do not differentiate products/services based on must-be value. Overproducing value beyond what is expected does not increase the value of a solution, but producing less value than what is expected will cause customers to devalue a product/service. The appropriate action is to maintain current levels of satisfaction on these dimensions of value at the least cost possible. In our experience, 50–60% of value targets are in this category.

To identify must-be value targets, we look at how customers rate the high ends of both the importance and satisfaction scales. In this case we have 100-points of discrimination for importance and 100-points of discrimination for satisfaction moving in the same direction.

Once a must-be value target has been identified, a current baseline value is required to make that value target actionable for a design/engineering team. Recall that the appropriate action for must-be value targets is to maintain the current level of performance along those dimensions of value at the least cost. Therefore, only a current baseline value is needed for must-be value targets. The same general procedure is used as was described for undershot value targets. Qualified individuals in the group are asked to suggest a current baseline value based on their experience with the solution-in-use. Start with the following question: “What level of performance for the solution-in-use should be maintained along this dimension of value?” All that is needed now is the minimum performance level required to maintain satisfaction for this aspect of job execution.

Example:

Seventy-five percent of customers interviewed rate the importance of a particular customer value metric as “very important” (item 6) or “extremely important” (item 7). Therefore, the overall rating for this metric is 75 along 100-points of discrimination for importance (the higher the points, the greater the importance) with a filter criterion of 70.

Seventy-three percent of customers interviewed rate the satisfaction of this customer value metric as “mostly satisfied” (item 6) or “completely satisfied” (item 7). Therefore, the overall rating for this metric is 73 along 100-points of discrimination for satisfaction (the higher the points, the greater the satisfaction) with a filter criterion of 70.

Based on our 70/70 filter criteria preference, a customer value metric that is 75 for importance and 73 for satisfaction along 100-points of discrimination for importance and satisfaction represents a dimension of value that must be satisfied at current levels.
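The top-two-box arithmetic in this example can be sketched as follows. These are hypothetical Python helpers of our own, assuming raw respondent ratings on the 7-point scales described above.

```python
def pct_top_two(ratings):
    """Percentage of respondents choosing item 6 or 7 on the
    7-point scale (e.g. "very important"/"extremely important"
    or "mostly satisfied"/"completely satisfied")."""
    return 100.0 * sum(1 for r in ratings if r >= 6) / len(ratings)

def is_must_be(importance_ratings, satisfaction_ratings, threshold=70):
    """Must-be filter: the top-two-box percentage meets the criterion
    on BOTH the importance and satisfaction scales (70/70 here)."""
    return (pct_top_two(importance_ratings) >= threshold
            and pct_top_two(satisfaction_ratings) >= threshold)

# Mirroring the example: 75% rate importance 6-7, 73% rate satisfaction 6-7
importance = [7] * 40 + [6] * 35 + [5] * 25    # 100 respondents, 75% top-two
satisfaction = [7] * 30 + [6] * 43 + [4] * 27  # 100 respondents, 73% top-two
print(is_must_be(importance, satisfaction))    # True -> must-be value target
```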

The same general procedure is used to identify secondary must-be value targets. Specifically, we look at the percentage of respondents that rate the importance of a customer value metric as “moderately important” (item 5) and rate satisfaction as “mostly satisfied” (item 6) or “completely satisfied” (item 7). Because these ratings sit on the borderline, they can also be interpreted as secondary overshot value targets, since the only thing that separates the two is a small degree of importance. The lack of discrimination among customers between “slightly important” and “moderately important” can make the distinction ambiguous.

To sort out the difference, we ask customers a few clarification questions such as, “Does a product/service have to include feature(s) and benefits that address this dimension of value before you even consider buying/using it?” and “Do you think that current solutions have overdone features that address this dimension of value?” A strong “yes” answer to the first question indicates a secondary must-be value target. A “yes” answer to the second question indicates a secondary overshot value target.

Identifying Indifferent Value

The focus here is on customer value metrics that are unimportant and very dissatisfied. For each customer value metric, determine the percentage of all customers who rate importance as “not at all important” (item 1) or “low importance” (item 2). Then determine the percentage of all customers who rate the satisfaction of the metric as “completely dissatisfied” (item 1) or “mostly dissatisfied” (item 2).

Example:

Sixty-four percent of customers interviewed rate the importance of a particular customer value metric as “not at all important” (item 1) or “low importance” (item 2). Therefore, the overall rating for this metric is 64 along 100-points of discrimination for unimportance (the higher the points, the greater the unimportance) with a filter criterion of 60.

Sixty-eight percent of customers interviewed rate the satisfaction of this customer value metric as “completely dissatisfied” (item 1) or “mostly dissatisfied” (item 2). Therefore, the overall rating for this metric is 68 along 100-points of discrimination for dissatisfaction with a filter criterion of 60.

Based on our 60/60 criteria preference, an overall unimportance rating of 60 or greater and an overall dissatisfaction rating of 60 or greater together indicate indifferent value. In this example, a customer value metric that is 64 for unimportance and 68 for dissatisfaction indicates indifferent value.
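Pulling the examples together, the primary filters can be sketched as one classification function. This is an illustrative Python sketch, not part of the methodology itself; the undershot rule assumes the same 70/70 criteria applied to importance and dissatisfaction, as the overshot discussion above notes.

```python
def classify_cvm(importance_pct, unimportance_pct,
                 satisfaction_pct, dissatisfaction_pct):
    """Rough sketch of the primary value target filters. Each argument
    is an overall rating along 100-points of discrimination (percentage
    of respondents at the relevant end of the 7-point scale). Thresholds
    follow the 70/70 and 60/60 criteria used in the examples; adjust
    them to your own sensitivity preferences."""
    if importance_pct >= 70 and dissatisfaction_pct >= 70:
        return "undershot"    # important but poorly satisfied
    if unimportance_pct >= 70 and satisfaction_pct >= 70:
        return "overshot"     # unimportant but oversatisfied
    if importance_pct >= 70 and satisfaction_pct >= 70:
        return "must-be"      # expected in all competing solutions
    if unimportance_pct >= 60 and dissatisfaction_pct >= 60:
        return "indifferent"
    return "unclassified"     # secondary filters and clarification apply

# Examples from the text
print(classify_cvm(10, 72, 84, 5))   # overshot
print(classify_cvm(75, 8, 73, 10))   # must-be
print(classify_cvm(12, 64, 6, 68))   # indifferent
```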

Exploring the Possibilities for Indifferent Value

For each indifferent CVM as identified by a group, facilitate a discussion to explore how indifferent value can be turned into customer delight and then into undershot value (to be satisfied by your company before competitors are aware). Having individuals elaborate on the aspects of job execution related to an indifferent CVM can trigger creative insights among individuals and the demand creation team. Specifically, the team may know of a combination of technologies, design methods and/or business models that would enable customers to get the job done better in ways no one had considered before. Eureka!

It’s best to audio record these discussions rather than trying to write everything down. We recommend making a separate recording for each indifferent CVM discussion (we use Adobe Audition for this with a good quality table microphone). We then transcribe each discussion into text. The transcripts are imported into a software program called NVIVO for qualitative analysis (certain key text is highlighted, coded, and then rolled up into possible insights). It’s amazing how many insights can be generated by cross-referencing group discussions. For instance, if the same insight comes up in all three groups (not prompted by a team facilitator), then this could be a real opportunity to turn indifferent value into customer delight.

Next Steps — Using Value Targets in the Design Stage

When Value Target Analysis is complete, the following value target package is then handed off to those involved in the design stage of an innovation project:

  • Job segment(s) data.
  • Progress Map indicating predictive relationships among CVMs (Visio or OmniGraffle file).
  • Undershot value targets with current baseline values and future values, along with the job circumstance that triggers each moment of struggle indicated by an undershot value target.
  • Overshot value targets with current baseline values and future values.
  • Must-be value targets with current baseline values.
  • Creative insights for turning indifferent CVMs into customer delight (NVIVO file).
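For illustration, one item in this package could be represented as a simple record per value target. This is a hypothetical Python sketch; the field names are ours, not part of the paper's hand-off format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValueTarget:
    """Hypothetical record for one value target in the hand-off package."""
    cvm: str
    category: str                       # "undershot", "overshot", or "must-be"
    current_baseline: str               # performance of the solution-in-use
    future_value: Optional[str] = None  # not required for must-be targets
    moment_of_struggle: Optional[str] = None  # undershot targets only

# The patient-portal example discussed below as one such record:
doctor_search = ValueTarget(
    cvm="Reduce the time it takes to find an available doctor "
        "that matches my preferences",
    category="undershot",
    current_baseline="3 hours on average",
    future_value="15 minutes",
    moment_of_struggle="Physician database is out of date, forcing "
                       "repeated search cycles",
)
```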

If the goal is to create a new product or service, the next step for a design team is to determine the best solution characteristics — the features, functions, and benefits — that will produce the value that customers want in a new solution while generating a sufficient profit for the company. Solution characteristics are often referred to as “engineering characteristics” or “technical quality characteristics” in the product context. For simplicity, we use the term “solution characteristics” to refer to both the product and service contexts. Each solution characteristic is associated with a performance measure that quantifies it in some way (these are called “Critical to Quality” measures in Six Sigma).

Traditionally, finding the best solution characteristics and the appropriate level of performance for each characteristic that can satisfy a set of customer “needs” is the biggest time/effort bottleneck in the new product development process. One reason for this is that a typical customer need statement does not provide enough information to make it clear and actionable for a design team. For example, a customer need statement for a healthcare company’s patient portal might be, “I want to be able to find a qualified doctor quickly.” But questions arise due to the ambiguity of the need statement. What does “qualified” mean in this case? And how quickly? Such a need statement leaves much to interpretation.

A corresponding undershot value target would be, “Reduce the time it takes to find an available doctor that matches my preferences.” For the vast majority of customers interviewed, completing this activity currently takes 3 hours on average. Customers expect to do this in 15 minutes. The job circumstance information that accompanies this value target (moment of struggle) indicates that a big reason why the doctor search takes so long is that the database is not current. Customers choose a doctor, then call to make an appointment only to be told that the doctor is not accepting new patients or is no longer working at that medical facility. Customers then have to start all over, sometimes repeating several cycles — a frustrating experience! With this information in hand, the moment of struggle can easily be solved by putting a process in place to keep the physician database current.

Having value targets on the front end of an innovation effort streamlines the innovation process. The design stage in particular is where most of the time and costs are incurred. Because value targets give a design team a blueprint for customer value, the team is able to efficiently focus time and resources on finding the best means possible to produce that value at the least cost. No time is lost debating who has the best idea. No resources are wasted producing value that customers don’t want. The value target package provides all the information needed to find the best combination of solution characteristics that will maximize customer value at the least cost. The current baseline values and future values inform a design team where the solution characteristic performance measures need to be set to meet customer expectations.

Value Target Analysis enables companies to produce superior value and get that value to the market faster and less expensively than competitors. Maintaining up-to-date value targets for existing products/services enables companies to maximize the customer value and the profitability of these offerings throughout their value lifecycles. In summary, Value Target Analysis is a practical tool that:

  • Can be performed in a few weeks, not months.
  • Does not require sophisticated analytical skills or software.
  • Is not expensive to perform relative to alternative methods like ethnographic research.
  • Increases customer demand and the profitability of offerings.
  • Enables offerings to remain viable in the market for longer periods of time.

References

Bettencourt, L. A., & Ulwick, A. W. (2008). The Customer-Centered Innovation Map. Harvard Business Review, 86(5), 109–114.

Christensen, C. M., Cook, S., & Hall, T. (2005). Marketing Malpractice: The Cause and the Cure. Harvard Business Review, 83(12), 74–83.

Christensen, C. M., Hall, T., Dillon, K., & Duncan, D. S. (2016a). Competing Against Luck: the Story of Innovation and Customer Choice. Harper Business.

Christensen, C. M., Hall, T., Dillon, K., & Duncan, D. S. (2016b). Know Your Customers’ “Jobs to Be Done”. Harvard Business Review, 94(9), 54–60.

Christensen, C. M., & Raynor, M. E. (2003). The innovator’s solution: creating and sustaining successful growth (1st ed.). Harvard Business Review Press.

Griffin, A., & Hauser, J. R. (1993). The Voice of the Customer. Marketing Science, 12(1).

Kano, N. (2003). Life Cycle and Creation of Attractive Quality. Working Paper.

Klein, G. (1998). Sources of power: how people make decisions. Cambridge, Mass: MIT Press.

Levitt, T. (1969). The Marketing Mode. New York: McGraw-Hill.

Moesta, B., & Spiek, C. (2014). Jobs-to-be-done: practical techniques for improving your application of jobs-to-be-done. The Re-Wired Group.

Norman, D. A. (1988). The design of everyday things. New York, Doubleday.

Ulwick, A. W. (2005). What customers want: Using Outcome-Driven Innovation to Create Breakthrough Products and Services. New York: McGraw-Hill.

Ulwick, A. W. (2016). Jobs To Be Done: Theory To Practice. Idea Bite Press.

Ulwick, A. W. (2017). The 5 Tenets of Jobs-to-be-Done Theory. Retrieved from https://jobs-to-be-done.com/

Ulwick, A. W., & Bettencourt, L. A. (2008). Giving Customers a Fair Hearing. MIT Sloan Management Review, 49(3), 62–68.

Wasson, C. (1978). Dynamic competitive strategy & product life cycles. Austin, Tex.: Austin Press.

Wasson, C. R., & McConaughy, D. H. (1968). Buying behavior and marketing decisions. New York: Appleton-Century-Crofts.

Witell, L., Löfgren, M., & Dahlgaard, J. J. (2013). Theory of attractive quality and the Kano methodology — the past, the present, and the future. Total Quality Management & Business Excellence, 24(11/12), 1241–1252.

Woodruff, R. B., & Gardial, S. F. (2008). Know your customer: new approaches to understanding customer value and satisfaction. Malden: Blackwell.
