Quantifying JTBD with Dom Ricchetti
Dominic Ricchetti is a seasoned market researcher and customer insights specialist. With a background in engineering, computer science, and operations research, he’s an expert in applying advanced analytics (e.g., conjoint analysis, choice modeling, segmentation, driver analysis) and a range of quantitative methods.
Dom has worked at large companies, including Intuit, Dell, Microsoft, and ServiceNow. He also served as a Senior VP at Radius Global Market Research, working with clients like Google Pixel, YouTube, Windows, Office, Nordstrom, and REI.
Dom shared his extensive knowledge of quantifying JTBD research in a recent JTBD Untangled episode. Check out the full recording on YouTube below.
We caught up with him afterwards to chat more about it. Here’s Jim’s interview with Dom.
— — — — — — — — — —
JIM: You have an impressive career in customer insights. When did you get into JTBD and what attracted you to the field?
DOM: I recognized the importance of not just building great technology but also addressing customer needs, which led me to focus on market research and methodologies like JTBD. At Microsoft and ServiceNow, I developed research methodologies and customer research systems for product innovation, particularly JTBD quantitative methods.
JIM: What are the key benefits of quantifying Jobs-to-be-Done?
DOM: By using Opportunity Scoring, teams can focus on developing product features that align with high importance and low satisfaction. Quantifying the Jobs this way ensures that resources are allocated to the areas that will yield the most significant improvements for users — and a compelling differentiation for your product.
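To make that concrete, here is a minimal sketch in Python of the widely used Ulwick-style formula, Opportunity = Importance + max(Importance − Satisfaction, 0). The Jobs, ratings, and 1–10 scale below are hypothetical; your study may use a different scale or weight the gap differently.

```python
import pandas as pd

# Hypothetical Jobs with mean Importance and Satisfaction ratings (1-10 scale).
jobs = pd.DataFrame({
    "job": ["Restore service quickly", "Predict upgrade impact", "Automate routine approvals"],
    "importance": [9.1, 8.4, 7.6],
    "satisfaction": [7.8, 4.9, 6.2],
})

# Ulwick-style Opportunity Score: Importance + max(Importance - Satisfaction, 0).
# High importance with low satisfaction surfaces the biggest unmet needs.
gap = (jobs["importance"] - jobs["satisfaction"]).clip(lower=0)
jobs["opportunity"] = jobs["importance"] + gap

print(jobs.sort_values("opportunity", ascending=False))
```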
JIM: How should qualitative JTBD practitioners start benefiting from quantitative methods?
DOM: Quantifying means measuring how much and how many — and that is what you need to prioritize which problems to solve for customers. Doing 50 interviews is different from even just 50 survey responses, because in a survey you present all of the Jobs consistently and collect the Importance and Satisfaction ratings. That gives you not only a data set but also confidence that the ratings are comparable — and every customer rated every Job. Even better, get 150 responses from a broader, more representative audience. Then you can have much more confidence in which Jobs to prioritize.
JIM: What are the most critical adjustments in a modern approach to quantifying Jobs versus the traditional approaches?
DOM: There are several things to consider:
- Using larger scales or Scaled MaxDiff to get more differentiation in Job ratings.
- Calculating individual-level scores — that is, scoring each respondent rather than using a percent of the audience or a sub-group (see the sketch below).
- Interpreting results with a more effective scale and comparing to “known winners” or competitors’ key Jobs.
The biggest opportunities are the Jobs with high importance and low satisfaction, indicating unmet market needs. So, the modern adjustments help us discern the Big Unmet Needs.
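To illustrate the individual-level point above: rather than reporting the percent of respondents who give a top-box rating, you score each respondent and then average the scores. A minimal sketch, with hypothetical ratings for a single Job:

```python
import pandas as pd

# Hypothetical respondent-level ratings for one Job (1-10 scales).
resp = pd.DataFrame({
    "importance":   [9, 8, 10, 6, 9],
    "satisfaction": [4, 7, 3, 6, 5],
})

# Aggregate approach: percent of respondents rating importance top-box (9-10).
pct_top_box = (resp["importance"] >= 9).mean() * 100

# Individual-level approach: score each respondent, then average.
resp["opportunity"] = resp["importance"] + (resp["importance"] - resp["satisfaction"]).clip(lower=0)
mean_opportunity = resp["opportunity"].mean()

print(f"Top-box importance: {pct_top_box:.0f}% | Mean individual opportunity: {mean_opportunity:.1f}")
```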
JIM: Could you share details about a JTBD project (or the types of projects) you worked on at Microsoft or ServiceNow? What was the context and the purpose of the project?
DOM: At ServiceNow we regularly used interviews and workshops with customers, design, product management and engineering to discover and build a comprehensive JTBD canvas. I taught the JTBD concepts, and we used your JTBD Playbook as a reference. I led the quant work for prioritizing Jobs, which we used in the PM strategy roadmap, as well as what design and engineering would solve in Sprints.
The factor analysis and segmentation were powerful for driving themes that carried into marketing and built our JTBD-based personas. We used these methods for projects on our major customer categories — including developers, service agents, and platform administrators.
At Microsoft we did many large Opportunity Scoring projects where we continued to refine the algorithms, use of Scaled MaxDiff, and advanced analysis. I worked on quantifying JTBD for Windows consumer and commercial versions, Office, and Windows Mobile phones and tablets.
JIM: What are some of the impacts you’ve seen from quantifying JTBD? How do teams respond and what value does it add to projects?
DOM: One of the biggest consistent impacts I see is that the team becomes energized and focused. ServiceNow struggled with creating a solid 2-year roadmap for admin features that went beyond addressing technical debt. The admin JTBD quant research sparked leadership backing and aligned the team on what to solve and where to innovate.
At Microsoft, the large Windows product management team was stuck in internal debates about many potential directions. After quantifying JTBD, the team moved forward with confidence to develop innovative new apps, tools, and menu features that resonated with customers.
JIM: We’ve heard that when more people take the survey, the scores go to the middle. Do you see that and what could we do to fix it?
DOM: Sure — they all average out! But with a larger sample, you will have more confidence in the average importance and satisfaction ratings for each Job. You will also have better representation across the potential target market, and you might get a broader distribution of answers. A larger sample also lets us statistically compare subgroups based on their “Job Differentiators.” That analysis reveals more specific target audiences where a need is most important or least satisfied. Definitely use 9-point scales instead of just 5-point — or even 11-point scales.
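As a sketch of that subgroup comparison, here is a two-sample t-test on one Job’s importance ratings across two hypothetical groups (the group names, sizes, and ratings are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical importance ratings for one Job from two profiling subgroups (1-9 scale).
daily_admins = rng.integers(6, 10, size=80)    # this group tends to rate higher
monthly_admins = rng.integers(3, 9, size=70)

# Welch's two-sample t-test: is the difference in mean importance meaningful?
t_stat, p_value = stats.ttest_ind(daily_admins, monthly_admins, equal_var=False)
print(f"means: {daily_admins.mean():.2f} vs {monthly_admins.mean():.2f}, p = {p_value:.4f}")
```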
Another solution is to use the Scaled MaxDiff technique, which forces respondents to make trade-offs in an easy “gamified” exercise. It spreads out the importance and satisfaction ratings, which makes prioritizing easier.
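Production MaxDiff studies typically estimate scores with choice models such as hierarchical Bayes, but a simple count-based score, (times chosen best minus times chosen worst) divided by times shown, conveys the idea. A sketch with invented tasks:

```python
import pandas as pd

# Hypothetical MaxDiff tasks: each row is one screen where a respondent
# picked the most and least important Job from the subset shown.
tasks = pd.DataFrame({
    "shown": [["A", "B", "C", "D"], ["A", "C", "E", "F"], ["B", "D", "E", "F"]],
    "best":  ["A", "C", "E"],
    "worst": ["D", "F", "B"],
})

# Count-based MaxDiff score: (times best - times worst) / times shown.
counts = {}
for _, t in tasks.iterrows():
    for job in t["shown"]:
        c = counts.setdefault(job, {"shown": 0, "best": 0, "worst": 0})
        c["shown"] += 1
    counts[t["best"]]["best"] += 1
    counts[t["worst"]]["worst"] += 1

scores = {j: (c["best"] - c["worst"]) / c["shown"] for j, c in counts.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```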
JIM: Alignment, focus, buy-in, confidence: those are pretty big wins for the team. But can you say more about resonating with customers? How did JTBD play a role there?
DOM: Certainly!
From the clusters of Jobs and Personas, we found that admins operate in two different modes: their daily tactical operations and their monthly strategic planning. This insight enabled us to restructure the user interface to be simpler and more efficient for each mode.
Also, we presented the Jobs and Personas from the study to our customers. We got a lot of feedback that the customer admin teams were using the results to refine their roles internally and to explain to their management what their responsibilities were and why they needed staffing. We also included insights from the study in our Customer Success materials and training.
JIM: What advice do you have for people looking to get into quantifying jobs to be done research? Where should they get started, particularly if they don’t have a solid quant background?
DOM: Creating, running, and analyzing JTBD surveys is easy as far as quant surveys and methods go. Think of your survey in three sections and watch out for a few key things.
Screening
- Ask a minimal set of well-crafted screening questions to find people who do (or want to do) the primary big Job. It’s better to screen on behaviors than on labels or titles.
- Get a broad audience so that you can compare preferences and needs. You can always filter for certain people or types during analysis.
- Don’t just survey your current customers. Often those who use your competitors’ products, or who have not bought yet, have important Jobs that are not being satisfied.
Opportunity Scoring
- Ask about Importance and Satisfaction early in the survey.
- For more than 15 Jobs, break the list into a few blocks of ~10–15 related Jobs.
- Give respondents encouragement and guidance to keep them engaged (e.g., “This is set 2 of 4,” “Now we will ask about Satisfaction”).
Profiling
- To wrap up, ask your other questions based on your hypotheses about who you are solving for, what would make them answer differently, and your Job Differentiators.
- Don’t try to cover all these in the Screening section — because you’ll lose people and make it harder to get enough completions.
- Use the answers to these Profiling questions to filter and compare audience groups (e.g., crosstabs, pivot tables), as in the sketch below.
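As a sketch of that last step (every column name and value here is hypothetical), a pandas pivot table can split the mean Importance and Satisfaction per Job by a Profiling answer:

```python
import pandas as pd

# Hypothetical cleaned responses: one row per respondent per Job,
# with a Profiling answer ("role") to split the groups.
df = pd.DataFrame({
    "role": ["developer", "developer", "service agent", "service agent"] * 2,
    "job": ["Trace an incident"] * 4 + ["Automate approvals"] * 4,
    "importance": [9, 8, 6, 7, 5, 6, 9, 8],
    "satisfaction": [4, 5, 7, 6, 6, 5, 3, 4],
})

# Mean Importance and Satisfaction per Job, split by Profiling group.
summary = df.pivot_table(index="job", columns="role",
                         values=["importance", "satisfaction"], aggfunc="mean")
print(summary)
```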
Data cleaning and validation are a necessity. Don’t make it obvious which answers to your screening questions will qualify someone. Look for people who straight-line by giving the same rating-scale point for many Jobs. Also, ask one required open-ended question so that you can review whether a response makes sense.
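Here is a minimal sketch of a straight-line check, flagging respondents whose single most frequent rating covers nearly all of their Job answers; the 90% threshold is an assumption you would tune:

```python
import pandas as pd

# Hypothetical importance ratings across 10 Jobs, one row per respondent.
ratings = pd.DataFrame(
    [[7, 7, 7, 7, 7, 7, 7, 7, 7, 7],   # straight-liner: same answer everywhere
     [9, 4, 8, 3, 7, 6, 9, 2, 5, 8]],  # varied, plausible answers
    columns=[f"job_{i}" for i in range(1, 11)],
)

# Flag respondents whose most frequent rating covers >= 90% of their answers.
most_common_share = ratings.apply(lambda row: row.value_counts().max() / len(row), axis=1)
ratings["flag_straightliner"] = most_common_share >= 0.9

print(ratings["flag_straightliner"])
```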
For survey and analysis tools, here are some recommendations.
- Use what you have: Some UX tools can run surveys. Ask other departments if they already license any tools. You can use Google or Microsoft Forms for surveys, and Sheets or Excel for analysis. If you are familiar with Excel Pivot Tables, they work well for the basic analysis of JTBD Opp Scoring surveys.
- For low-cost professional tools, SurveyMonkey is popular and easy to learn — with free trials and individual plans. It can also do crosstabs.
- For advanced analysis, I’m a fan of DisplayR (their cloud version, or Q-Research is their download/local-app version). They have great help, video tutorials and support.
JIM: Great advice. That’s a lot to dig into. Thanks Dom!
— — — — — — — — — —