UX Research Methods I Use Daily
A practical guide to user research methods that help me make informed design decisions and validate assumptions.
Research Is Not Optional
Early in my career, I treated UX research as something that happened before the "real" design work. A phase to get through. I have since learned that research is not a phase — it is a continuous practice woven into every stage of product development.
Here are the methods I use most frequently, when I reach for each one, and the practical details that make them effective.
User Interviews
User interviews are my go-to for understanding the why behind behavior. When I need to explore a problem space, validate assumptions about user needs, or understand the context in which people use a product, interviews are where I start.
How I Run Them
- Recruit 5-8 participants per round. Jakob Nielsen's well-known research suggests that 5 users uncover roughly 85% of usability issues. I aim for 6-8 to account for no-shows and outliers.
- Semi-structured format. I prepare 8-10 open-ended questions but let the conversation flow naturally. The best insights come from follow-up questions I did not plan.
- Record everything (with permission). I use the recordings to pull exact quotes for stakeholder presentations. Nothing persuades a product manager like hearing a real user say, "I have no idea what this button does."
- Debrief within 24 hours. I synthesize notes while the conversation is fresh, tagging key themes and surprising moments.
Common Pitfalls
Avoid leading questions. "Don't you think this feature is useful?" will get you a yes. "Tell me about the last time you tried to accomplish this task" will get you the truth.
Do not interview only power users. They have adapted to your product's quirks. New or infrequent users reveal the friction you have gone blind to.
Usability Testing
If interviews tell me why, usability testing tells me where. I run usability tests whenever we have a prototype or existing flow that needs validation.
My Setup
- Task-based scenarios. I write 4-6 realistic tasks: "You want to invite a teammate to your project. Show me how you would do that."
- Think-aloud protocol. I ask participants to narrate their thought process as they work through tasks. This surfaces confusion in real time.
- Moderated for complex flows, unmoderated for simple validation. If the feature is nuanced, I moderate live so I can probe deeper. For straightforward tasks, I use unmoderated tools to scale up participant numbers.
- Measure completion rate, time on task, and error rate. These quantitative signals complement the qualitative observations.
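To make those three metrics concrete, here is a minimal sketch in Python of how session results could be tabulated. The data structure and field names are my own illustration, not the output of any particular testing tool.

```python
from dataclasses import dataclass

@dataclass
class TaskAttempt:
    participant: str
    task: str
    completed: bool   # did the participant reach the goal state?
    seconds: float    # time from task start to completion or give-up
    errors: int       # wrong clicks, dead ends, backtracks

def summarize(attempts: list[TaskAttempt], task: str) -> dict:
    """Completion rate, mean time on task, and mean error count for one task."""
    rows = [a for a in attempts if a.task == task]
    done = [a for a in rows if a.completed]
    return {
        "task": task,
        "participants": len(rows),
        "completion_rate": len(done) / len(rows),
        # time on task is conventionally reported for successful attempts only
        "mean_time_s": sum(a.seconds for a in done) / max(len(done), 1),
        "mean_errors": sum(a.errors for a in rows) / len(rows),
    }

attempts = [
    TaskAttempt("P1", "invite teammate", True, 42.0, 0),
    TaskAttempt("P2", "invite teammate", True, 95.5, 2),
    TaskAttempt("P3", "invite teammate", False, 180.0, 4),
]
print(summarize(attempts, "invite teammate"))
```

Reporting time on task only for successful attempts is a common convention; failed attempts would otherwise skew the average.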
What I Look For
I pay close attention to moments of hesitation. When a user pauses, hovers over multiple options, or says "I think it's this one..." — that is a design problem, even if they eventually succeed. Success that requires guessing is not real success.
I also watch for workarounds. If users find an alternative path to accomplish a task, the intended path has a problem.
A/B Testing
A/B testing is how I settle design debates with data instead of opinions. I use it for targeted questions where both options are reasonable and the right answer depends on user behavior at scale.
When I Use It
- Copy variations. Does "Start free trial" outperform "Get started"? A/B test it.
- Layout changes. Does moving the CTA above the fold increase conversion? Test it.
- Feature presentation. Does showing pricing upfront reduce sign-ups or improve lead quality? Test it.
Principles I Follow
Test one variable at a time. If you change the button color, the copy, and the position simultaneously, you cannot attribute the result to any single change.
Run the test long enough. I wait for statistical significance; for typical conversion rates and effect sizes, that means at least 1,000 visitors per variation, and often far more. Cutting a test short because early results look promising is a recipe for false positives.
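For a back-of-the-envelope significance check, a standard two-proportion z-test does the job. The sketch below (with made-up numbers) is no substitute for a proper experimentation platform, but it shows the arithmetic.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates.

    conv_*: conversions per variation; n_*: visitors per variation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled rate under the null hypothesis that A and B convert equally
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Made-up numbers: 58/1000 conversions for A vs. 74/1000 for B
p = two_proportion_z_test(58, 1000, 74, 1000)
print(f"p = {p:.3f}")  # ~0.15 here: not significant, keep the test running
```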
Define the success metric before launching. If you decide what "winning" means after seeing the data, you are just confirming your bias.
Card Sorting
Card sorting is my tool for information architecture. When I need to organize navigation, structure a settings page, or categorize content, I run a card sort.
Open vs. Closed
- Open card sort: Participants group items and create their own category labels. I use this when I have no existing structure and want to understand how users naturally think about the content.
- Closed card sort: Participants sort items into predefined categories. I use this to validate a proposed structure.
Practical Tips
- Use 30-50 cards. More than that causes participant fatigue and noisy data.
- Run with 15-20 participants to get reliable patterns.
- Analyze with a similarity matrix. This shows which items participants consistently grouped together, revealing natural clusters.
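To illustrate the similarity-matrix idea, here is a minimal sketch that counts how often each pair of cards lands in the same group. The card labels and sort data are hypothetical.

```python
from itertools import combinations
from collections import Counter

# each participant's sort: a list of groups, each group a set of card labels
sorts = [
    [{"API keys", "Webhooks"}, {"Profile", "Notifications"}],
    [{"API keys", "Webhooks", "Notifications"}, {"Profile"}],
    [{"Webhooks", "Notifications"}, {"API keys", "Profile"}],
]

together: Counter = Counter()
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            together[(a, b)] += 1

# similarity = fraction of participants who grouped the pair together
for (a, b), count in together.most_common():
    print(f"{a} + {b}: {count / len(sorts):.0%}")
```

Pairs grouped together by most participants suggest categories that belong together in the final structure.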
I recently used card sorting to restructure a product's settings page. The existing layout was organized by technical function (API, webhooks, authentication). The card sort revealed that users thought in terms of workflow stages (setup, daily use, administration). Restructuring around that mental model reduced support tickets for settings-related questions by 35%.
Heuristic Evaluation
Heuristic evaluation is the method I use when time or budget constraints prevent full user testing. It is a structured expert review based on established usability principles.
The Heuristics I Evaluate Against
I use Nielsen's 10 heuristics as a starting framework, but the ones I find most actionable in product design are:
- Visibility of system status. Does the interface tell users what is happening? Are loading states, success confirmations, and error messages clear?
- Match between system and the real world. Does the product use language and concepts familiar to the target user, or internal jargon?
- User control and freedom. Can users undo actions? Is there a clear way to go back or exit a flow?
- Consistency and standards. Does the interface follow platform conventions? Are similar actions presented consistently?
- Error prevention. Does the design prevent errors before they occur, or just report them after the fact?
How I Structure the Review
I walk through each key user flow and score it against each heuristic on a severity scale (0 = not a problem, 4 = usability catastrophe). This produces a prioritized list of issues that I can present to the team with clear severity rankings.
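It helps to record each finding as structured data rather than loose notes, so the prioritized list falls out automatically. A minimal sketch, with hypothetical flows and issues, using Nielsen's conventional labels for the 0-4 scale:

```python
from dataclasses import dataclass

# Nielsen's conventional severity labels for the 0-4 scale
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor",
            3: "major", 4: "usability catastrophe"}

@dataclass
class Finding:
    flow: str        # e.g. "invite teammate"
    heuristic: str   # e.g. "visibility of system status"
    issue: str
    severity: int    # 0-4

findings = [
    Finding("invite teammate", "visibility of system status",
            "no confirmation after an invite is sent", 3),
    Finding("invite teammate", "error prevention",
            "email field accepts obviously malformed addresses", 2),
]

# prioritized list: worst issues first
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity} {SEVERITY[f.severity]}] {f.flow}: {f.issue}")
```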
Heuristic evaluation is fast — I can review a feature in 2-3 hours — but it is no substitute for real user testing. I use it as a complement, not a replacement. It catches the obvious problems so that user testing can surface the subtle ones.
Bringing It All Together
No single method gives you the full picture. I typically combine methods within a project:
- Interviews to understand the problem space
- Card sorting if information architecture is involved
- Usability testing on prototypes to validate solutions
- A/B testing post-launch to optimize
- Heuristic evaluation as a quick quality check throughout
The key is matching the method to the question. "What do users need?" is an interview question. "Can users find this feature?" is a usability test question. "Which version performs better?" is an A/B test question. Asking the right question with the right method is half the work.
Salman Alfariesh
Product Designer specializing in web & mobile experiences