Navigating Critical Pitfalls in the User Interview Process

User interviews are arguably the most important activity in user research. If you want to build a successful software business, you need to know how people use the products already in the market and understand why they’re succeeding.

If you don’t have consistent product engagement and retention, you need to invest in user interviews to figure out how people currently use your product so that you can nail your product’s positioning and scale your growth.

But just because most software teams understand the concepts behind conducting user interviews doesn’t mean that administering a solid user interview process is easy or straightforward. Like most kinds of user research, the user interview process and its results easily get distorted when we don’t account for ordinary human incentives.

Natural human incentives, such as wanting to look good in social situations or to get validation from our peers, distort the answers we get from user interview subjects. Meanwhile, the career incentives of our colleagues can shape how the results of user interviews are interpreted and communicated to leadership. In both scenarios, user research is hurt by bias at both the collection and interpretation stages.

Solving user research biases isn’t easy. Your approach to collecting and interpreting user research results needs to be strategic.

Since user research informs both the business and product, any fundamental misunderstanding of user research conclusions creates costs throughout product development and engineering.

Investments in engineering don’t pay off if the product you build with it isn’t something people want and are willing to pay for.

Common User Interviewing Problems

The following are just some of the problems that come up in the user interviewing process.

Not qualifying user interview subjects

One of the first mistakes companies make in the user interviewing process happens before user interviews even begin. Companies often fail to qualify and vet their user interview subjects.

In the interest of collecting as much data and feedback as possible, companies sabotage their user research process by collecting data from unreliable sources. These companies understand there is a minimum amount of feedback they need to collect to generalize their research results to larger audiences, but they don't understand the danger of collecting feedback from wrong-fit users and customers.

Unfortunately, most of the people you can grab to interview at a moment's notice are not a good fit as user interview subjects. That's because you first need to qualify interview subjects against a number of criteria before you can be sure their feedback is worth your time and the huge potential cost of acting on it.

The cost of time spent interviewing users is tiny compared to the cost of time spent creating product specs, experimenting with designs, and building your product, so investing in user research saves you money.

Qualifying user interview subjects based on prospective interest instead of past behavior

When it comes to measuring the reliability of user research, prospective customer interest doesn’t count for much.

Sure, the person who reached out after you posted your new venture on LinkedIn might have responded excitedly, saying they'd love to use your product, and they may have even seemed well qualified to buy it. They might be a business owner with a number of employees. But you might just as quickly find out they have no interest in paying for solutions, and that the existing product you're competing with, the one they currently use, isn't something they pay for.

Excitement doesn’t equal interest, and interest must be qualified.

In my experience performing hundreds of user interviews, many of the smartest and most accomplished people I knew didn’t have the past behaviors needed to qualify them, so even their smart and interesting opinions couldn’t be generalized back to my larger customer audiences.

Some of the smartest and most accomplished people you could interview won’t offer useful feedback because their situation isn’t reflective of your current or prospective customers.

Not actively disqualifying interview subjects during the user interview

Just as many companies fail to vet their user interview subjects, companies also fail to do the hard but important work of disqualifying user interview subjects during the interview.

For example, imagine you have a prospective customer who has the problem you solve, is in the market looking to buy, has shown evidence of this behavior, has a budget, and is ready to buy.

Now, many people would say that person is a great user interview subject, but hold on a sec. Anyone who works in sales knows that positive prospective interest is only part of the sales process, and that we don’t yet know enough about this person to determine whether they’d be willing or able to buy our solution.

Are they in a position of authority to make the purchasing decision? Are there other stakeholders in the purchasing process? Is there a dramatic mismatch between what they expect to spend and what you charge? Do they require certain invoicing procedures?

There are lots of reasons why an otherwise great interview subject might be a false positive indicator of interest. That doesn’t mean you shouldn’t interview them, but you do need to find out where they actually stand. Solid interview processes cover the most likely problems to look out for.

Being too friendly in the user interview process

Being friendly is one of the easiest ways to get people to open up about their current habits, but it often comes at a cost.

The same people you’re looking to collect evidence from will often change the tone of their answers: agreeing with and amplifying your framing, giving you false positives, or avoiding any mention of the negative emotions you need to orient your business and product.

Worse than accidentally shifting the tone of interview conversations is going off on conversational tangents. User interviewers often do this to make their interview subjects feel more comfortable, but talking about other topics distracts subjects from the important things we want them thinking about. It may be easy to open your user interviews with small talk, such as questions about their day or quick chitchat, but conversational tangents often become traps.

Our main goal in the user research process is to collect evidence of negative emotion. Whether we’re using evidence of emotional pain to qualify prospective customer interest, to identify existing or unsolved problems, or to identify the most pressing issues users currently face with our product, our customers’ and users’ emotional pain points are something we need to assess with painstaking accuracy.

Being too friendly in the user interviewing process all but guarantees our users won’t accurately share the evidence we need to make business, product, and engineering decisions.

Perfecting Product
© 2024 Perfecting Product, LLC