Welcome to The Overlap, a newsletter somewhere between product and organization design. You normally receive this every other Wednesday, but because this Wednesday is Inauguration Day in the US, I thought I’d release this one earlier.
This Inauguration Day is different to say the least. If you feel like giving yourself the space tomorrow to pause, breathe, and take care of yourself and those around you, I’m right there with you.
Main takeaway: Resist the urge to solve the problem. See the problem first.
I recently discovered the XY problem through an episode of Back to Work. The name describes a pitfall that I think a lot of teams unintentionally find themselves in.
The XY problem is asking about your attempted solution rather than your actual problem. This leads to enormous amounts of wasted time and energy, both on the part of people asking for help, and on the part of those providing help.
User wants to do X.
User doesn’t know how to do X, but thinks they can fumble their way to a solution if they can just manage to do Y.
User doesn’t know how to do Y either.
User asks for help with Y.
Others try to help user with Y, but are confused because Y seems like a strange problem to want to solve.
After much interaction and wasted time, it finally becomes clear that the user really wants help with X, and that Y wasn’t even a suitable solution for X.
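The steps above map onto a classic example (my own illustration, not from the episode): a user wants to find a file's extension (X), but asks how to get the last three characters of a filename (Y). The helpers dutifully answer Y, and the answer quietly breaks on real filenames.

```python
import os

filename = "photo.jpeg"

# Y: what the user asked about: the last three characters.
# Works by coincidence for short extensions like ".gz", breaks here.
last_three = filename[-3:]

# X: what the user actually wanted: the file extension.
extension = os.path.splitext(filename)[1]

print(last_three)  # "peg"  (not an extension at all)
print(extension)   # ".jpeg"
```

Had the user led with X ("how do I get a file's extension?"), the first answer would have solved their problem.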
The problem occurs when people get stuck on what they believe is the solution and are unable to step back and explain the issue in full.
The XY problem is when people waste time on the wrong problem.
You blow the Nintendo game when your game freezes.
You’re about to play Legend of Zelda on your Nintendo Entertainment System. It froze. You want your game to not freeze in the future. You turn off the console, pull the game out, blow the game cartridge, and start the game again. The game works! The next day, you stick the game in the console, turn on your NES, and your game still freezes. You blow the game and load it. It’s still frozen.
You get replies on Stack Overflow that don’t solve your challenge.
You post a challenge you’re encountering on Stack Overflow (a forum for developers), and get an answer that solves an entirely different issue. The answer is logical, factual, and well-written. But it doesn’t solve your challenge.
Your team spends too much time on a solution that solves a low-priority problem.
You’re a product manager. Customer Success shares with you that customers are confused about what the app does. You decide to make it dead easy for users to onboard, so you ask a designer to redesign the onboarding flow and a few developers to implement it.
After six weeks, the new flow is deployed. You ask a data analyst to show you the % of folks who completed the flow after signing up: 74%. Phew!
Two months later, you find another statistic: 2% of those who completed the onboarding flow logged in a second time. You realize that onboarding wasn’t the problem. People aren’t using the product a second time. You do user interviews to understand why. “I’m not exactly sure why I should use this product.” You learn that it’s not clear to users what benefit they get from your product. Time to figure out why that is.
Even statistics has a name for the XY problem: the Type III error.
…Type III errors occur when researchers provide the right answer to the wrong question.
You’d think that giving this problem a name would make it go away. But we still catch ourselves doing it. Why do we solve the wrong problem so often?
One big reason: Humans are overconfident.
The overconfidence effect
Humans love knowing the answer. Knowing the answer reassures our egos. It makes us look good. It relieves our anxieties.
This is the overconfidence effect: our confidence in our judgments is higher than the accuracy of those judgments.
Research shows that we’re more confident in our beliefs than we are accurate. One study asked 37 college students in an honors program how long they thought it would take them to finish their theses.
Interviewer: yo ideally how long would it take for you to finish your thesis?
Student: 27 days, tops
I: what about if everything went poorly? How many days?
S: 49
I: ok hit me w ur best estimate
S: 34 days
I: imma hold u to that
S: do it!
~ 27 days later (ideal case) ~
I: u done wit ur thesis?
S: I got my idea down just need to sit my ass in a chair and write
I: ok carry on
~ 34 days in (best estimate) ~
I: u done?
S: im like 95% there but still waiting on my professor to gimme feedback. they mad busy
I: ok nw
~ 49 days in (worst estimate) ~
I: u done?
S: hold on im playing Valorant
~ 55 days in ~
S: YO I’m done!!! 🍻
I: about time!! let’s hit the pub
To make the results of the study clear:
27 days = the average ideal estimate
49 days = the average worst-case estimate
34 days = the average best estimate
55 days = the average actual completion time
Students’ actual completion times were higher than their worst-case estimates. On average, finishing the thesis took 55 days, six days longer than the worst-case scenario of 49.
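The gap is easy to quantify from the averages above (a quick sketch using the numbers reported in the study):

```python
# Average estimates from the study, in days.
estimates = {"ideal": 27, "best": 34, "worst-case": 49}
actual = 55  # average actual completion time

# How far the actual completion time overran each estimate.
for label, days in estimates.items():
    print(f"{label}: {actual - days} days over")
# ideal: 28 days over
# best: 21 days over
# worst-case: 6 days over
```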
That’s like telling a friend you’ll be at their place in 30-50 minutes and showing up two hours later. Sorry, lots of traffic!
Granted, there might be an anchoring effect in the study. If the interviewer asked students for their ideal estimate first, the following answers would be “anchored” toward that ideal, unconsciously lowering every subsequent estimate.
I can’t say for sure that anchoring happened; the study doesn’t report the order in which the interviewers asked the questions. But imagine how different the results might be if the interviewer opened by asking for the students’ worst-case scenario.
Interviewer: yo worst case, how long will your thesis take?
Student: worst case? 60 days
I: how many days would you ideally like to spend?
S: 40
I: ok, best prediction?
S: 54 days
I: ding ding ding you won
Though this study looked at college students, I think we can all relate as product people. Especially if your organization does annual planning (btw, how’d those 2020 plans go?). We’ve all given estimates that were drastically off.
VP of Product: How long will your team need to do a good job on this feature?
Junior PM: 30 days I gotchu
Senior PM: Actually, let us talk to our developers and get back to you!
~ after VP meeting ~
Senior PM: yo why’d u say 30 days
Junior PM: chill we got this
Senior PM: can you just write user stories
If you’re a PM, designer, or developer, you’re in a good position to get your team to avoid the XY problem. Here are three ways that I think might help.
Three practices that could help your team avoid the XY problem
1. Bring “solutioning” discussions back to the problem.
Almost every team loves talking through the solution before talking through the problem. Teams love convergence. Convergence feels productive! But it isn’t always productive, especially when you’re wrong about the problem. Your job is to help your team see the problem before they solve it.
Three cues you can use to bring it back to the problem:
a) You hear: “How do we implement [this solution]?”
Before saying: “Here are a few ideas”
Ask: “What problem is [this solution] solving?”
b) You hear: “We should do [B solution] instead of [A solution]”
Before saying: “How is B better than A?”
Ask: “How does B move us toward our outcomes more than A?”
c) You hear: “Oooh what about [this idea]?”
Before saying: “Good idea” / “I’m not sure” / “Here’s the problem I see with it”
Ask: “What impact do you want [this idea] to make?”
2. Frame features/solutions as bets.
Help your team understand that features and solutions are bets: we’re betting our money, time, and energy towards some awesome outcome. Not all bets succeed. But the ones that do really do.
Your team must minimize the downside of unlucky bets and maximize the upside of lucky bets. Ask your team these questions:
From 0-100%, what’s the chance this feature will achieve this outcome?
If you were given $100,000, how much of your money would you put towards this effort? (Gather the average of everyone’s answers and discuss it.)
How much of our team’s time are we willing to invest in this feature?
How much money spent is too much money?
How much time is too much time?
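One lightweight way to make these questions concrete (a hypothetical sketch, not a prescribed method, with made-up numbers): treat the team’s answers as inputs to a rough expected-value calculation, so the bet’s size and odds are visible in one place.

```python
def expected_value(chance_of_success: float, payoff: float, cost: float) -> float:
    """Rough expected value of a bet: probability-weighted payoff minus cost."""
    return chance_of_success * payoff - cost

# Hypothetical numbers for a feature bet: the team gives it a 40% chance
# of achieving the outcome, values that outcome at $250k, and expects
# to spend $60k of time and money building it.
ev = expected_value(0.40, 250_000, 60_000)
print(ev)  # 40000.0
```

The point isn’t precision; it’s that writing down a chance, a payoff, and a cost forces the team to say out loud what they’re betting on.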
Project kickoffs are a great time to introduce the idea of framing features as bets. If the work has already started, you can still clarify the bet and adjust it based on feedback.
3. Clarify your survival metrics.
Before starting sprint planning and standups with your team, discuss two things with them:
What are signs, behaviors, and patterns that indicate this effort is working?
What are signs, behaviors, and patterns that indicate this effort isn’t working and therefore should be paused or pivoted?
Ask folks to answer in three ways:
“We should continue this effort if ____.”
“We should stop this effort if ____.”
“We should change directions if ____.”
Documenting the indicators for both success and failure helps your team mitigate the overconfidence effect.
By answering “We should stop/change this effort if ____”, your team acknowledges that this effort can fail. Failing is okay—it’s not about avoiding failure, it’s about changing direction asap to mitigate outcomes we don’t want.
More on how to facilitate this conversation here.
Again, you can clarify your indicators after the work has started. But if you’re about to kick off a project, bring this conversation to your team in your kickoff.
Be kind to yourself and your team
You yourself will perpetuate the XY problem. Resist judging, criticizing, or blaming yourself. Be kind to yourself when it shows up. Notice how you feel. Think about what you want to do instead. Then act.
Your team will perpetuate the XY problem. Resist judging, criticizing, or blaming them. Be kind when it shows up. Notice how you feel. Generously ask questions that guide the conversation toward the problem.
Your leaders and clients will perpetuate the XY problem. Be kind. Generously ask them what they hope to achieve when they have a strong opinion on the solution/idea/concept/roadmap.
It’s impossible to know the answer all the time, because the search for truth is endless.
As long as you’re willing to be in the discomfort of not knowing, your estimates, judgments, and beliefs improve over time.
I’ve written before about how framing problems is an underrated skill. Perhaps because it’s less tangible than design, writing, and code.
I’m glad to see Coda write about why they value this as a skill.
Know the saying “there are no dumb questions”? I understand the intention behind it: don’t hesitate to ask for help when you’re stuck. But I think there’s a difference between asking for help and asking questions that could be answered by Googling or by reading the doc someone already wrote for exactly that situation. Is that callous?
The first thing to understand is that hackers actually like hard problems and good, thought-provoking questions about them. If we didn’t, we wouldn’t be here. If you give us an interesting question to chew on we’ll be grateful to you; good questions are a stimulus and a gift. Good questions help us develop our understanding, and often reveal problems we might not have noticed or thought about otherwise. Among hackers, “Good question!” is a strong and sincere compliment.
Despite this, hackers have a reputation for meeting simple questions with what looks like hostility or arrogance. It sometimes looks like we’re reflexively rude to newbies and the ignorant. But this isn’t really true.
What we are, unapologetically, is hostile to people who seem to be unwilling to think or to do their own homework before asking questions. People like that are time sinks — they take without giving back, and they waste time we could have spent on another question more interesting and another person more worthy of an answer.
3. Ryan Singer on using Pattern Languages for Basecamp
I love when product designers/strategists share their process. Ryan takes the concepts from Christopher Alexander’s A Pattern Language and applies them to his work at Basecamp.
See you in two weeks.