How User Interviews Go Wrong

User interviews are one of the most important parts of user research. If you want to build a product that succeeds, you need to know how people currently use the products that already succeed in the market.

If you don’t have consistent product engagement and retention data, you need user interviews to figure out how people use your product so you can nail its positioning and scale your growth.

Just because user interviews are important doesn’t mean administering a solid user interview process is easy or straightforward. Like most kinds of user research, the user interview process and its results easily get distorted by the influence of ordinary human incentives.

Human incentives, such as wanting to look good in social situations and to get validation from our peers, distort interview subjects’ answers, while employees’ career incentives can shape how the results of user interviews are interpreted and communicated to leadership. In many scenarios, user research is biased at both ends: in how it’s collected and in how it’s reported.

The problem, of course, is that correcting for these biases isn’t easy.

Your approach to collecting and interpreting the results of the user research process needs to be strategic. Since user research informs your business and product, any fundamental misunderstanding of it costs you what you invest in engineering.

Whatever investment you make in engineering is meaningless if the product you build isn’t something people want and are willing to pay for.

Common User Interviewing Problems

The following are just some of the problems we solve within the user interviewing process.

Not Qualifying Your User Interview Subjects

One of the first mistakes companies make in the user interview process happens before user interviews even begin: they fail to qualify and vet their user interview subjects.

In the interest of collecting as much data and feedback as possible, companies sabotage their research process by collecting data from unreliable sources. These companies understand there is a minimum amount of feedback they need to collect before they can generalize their research results to larger audiences, but they don’t understand how easy, and how dangerous, it is to collect feedback from wrong-fit users.

Unfortunately, most of the people you can grab for a user interview at a moment’s notice are not a good fit. You need to qualify interview subjects against a number of criteria before you can be sure their feedback is worth your time and the huge potential cost of implementation.

The cost of the time you spend in the user interviewing process is tiny compared to the cost and time you spend creating product specs, experimenting with design, and building your product.

Investing in research saves you money.

Qualifying user interview subjects based on prospective interest instead of past behavior

When it comes to measuring the reliability of user research, prospective customer interest doesn't count for much.

Sure, the person who reached out after you posted your new venture on LinkedIn might have responded excitedly, saying they’d love to use your product, and might even have seemed well qualified to buy it. They might be a business owner with a number of employees. But you might just as quickly find out they have no interest in paying for solutions, and that the existing product you’re competing with, the one they currently use, isn’t something they pay for.

Excitement doesn’t equal interest, and interest must be qualified.

In my experience performing hundreds of user interviews, many of the smartest and most accomplished people I knew didn’t have the past behaviors that would qualify their smart and interesting opinions to be generalized back to my larger customer audiences.

Some of the smartest and most accomplished people you could interview won’t offer useful feedback because their situation isn’t representative of your current or prospective customers.

Not using disqualifying interview questions

Just as many companies fail to vet their user interview subjects before the interview, they also fail to do the hard but important work of disqualifying subjects during the interview.

For example, imagine you have a prospective customer who has the problem you solve, is in the market looking to buy, has shown evidence of this behavior, has a budget, and is ready to buy.

Many people would say that person is a great user interview subject, but hold on a second. Anyone who works in sales knows that positive prospective interest is only part of the sales process, and that we don’t yet know enough about this person to determine whether they’re willing or able to buy our solution.

Are they in a position of authority to make that purchasing decision? Are there other stakeholders in the purchasing process? Is the amount of money they’re looking to spend so different from what you charge that there’s a dramatic mismatch of expectations? Do they require certain invoicing procedures?

There are lots of reasons why an otherwise great interview subject might be a false positive indicator of interest. That doesn’t mean you shouldn’t interview them, but you do need to find out. Solid interview processes cover the most likely problems to look out for.

Being too friendly in the user interview process

Being friendly is one of the easiest ways to get people to open up about their current habits, but it often comes at a cost.

The same people you’re looking to collect evidence from will often change the tone and moderation of their answers: they either agree and amplify, giving you false positive answers, or avoid mentioning the perceptibly negative emotions you need to orient your business and product.

Worse than accidentally shifting the tone of interview conversations is going on conversational tangents. Interviewers often do this to make their subjects feel more comfortable, but it distracts subjects from the important conversations. It may be easy to open your interviews with small talk, such as questions about their day or quick chit-chat, but conversational tangents often become traps.

Our main goal in the user research process is to collect evidence of negative emotion. Whether we’re using evidence of emotional pain to qualify prospective customer interest, to identify existing or unsolved problems, or to identify the most pressing issues users currently face with our product, we need a painstakingly accurate assessment of our customers’ and users’ emotional pain points.

Being too friendly in the user interviewing process ensures our users don’t accurately share the evidence we need to make business, product, and engineering decisions.
