Measuring Product Experience at Scale
Product
Enterprise Cloud Product (Scorecard Benchmark)
Timeline
6 weeks
Methods
Unmoderated usability testing
Survey (UMUX-Lite, SUS, PMF)
Context
To support ongoing product improvement, I contributed to a recurring benchmarking program that combined behavioral and attitudinal data. I designed and executed scalable usability studies while partnering with a quantitative researcher to connect task performance with broader product metrics, enabling more informed, long-term product decisions.
The Problem
The product team needed a reliable way to:
- Measure user experience over time
- Identify areas for improvement
- Track the impact of product changes
A recurring UX scorecard was established to combine behavioral and attitudinal data.
Research Goals
- Measure usability, usefulness, and product-market fit
- Evaluate performance on key user tasks
- Identify barriers to task completion
- Track improvements across releases
My Role
I led the usability component of the scorecard, including:
- Designing task scenarios
- Recruiting participants
- Analyzing usability results
- Collaborating with a quantitative researcher to connect usability and survey data
Methodology
Participants completed unmoderated usability sessions focused on key workflows. Session results were paired with survey data (UMUX-Lite, SUS, and PMF) to measure task success, efficiency, satisfaction, and overall product perception.
Scorecard studies typically recruited 50+ survey respondents and 20+ usability participants per cycle.
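For readers less familiar with these instruments, the sketch below shows how the underlying scores are typically computed. It is a minimal illustration, not the study's actual analysis pipeline; the sample data, function names, and item scales (5-point SUS items, 7-point UMUX-Lite items) are assumptions.

```python
# Illustrative scoring sketch (not the actual study pipeline).
# Assumes standard 5-point SUS items and 7-point UMUX-Lite items;
# all names and sample data below are hypothetical.

def sus_score(responses):
    """Score one SUS questionnaire: ten 1-5 items, odd-numbered positive, even-numbered negative."""
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-indexed: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # rescale to 0-100

def umux_lite_score(usefulness, ease_of_use):
    """Score UMUX-Lite: two 1-7 items (usefulness, ease of use), rescaled to 0-100."""
    return (usefulness - 1 + ease_of_use - 1) / 12 * 100

def task_success_rate(outcomes):
    """Proportion of unmoderated sessions in which the task was completed."""
    return sum(outcomes) / len(outcomes)

# Hypothetical cycle data
sus_responses = [[4, 2, 5, 1, 4, 2, 5, 2, 4, 1]] * 3   # one list of ten ratings per respondent
umux_responses = [(6, 5), (7, 6), (5, 5)]                # (usefulness, ease of use) per respondent
task_outcomes = [1, 1, 0, 1, 1]                          # 1 = success, 0 = failure

print(f"Mean SUS: {sum(map(sus_score, sus_responses)) / len(sus_responses):.1f}")
print(f"Mean UMUX-Lite: {sum(umux_lite_score(u, e) for u, e in umux_responses) / len(umux_responses):.1f}")
print(f"Task success: {task_success_rate(task_outcomes):.0%}")
```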
Impact
Over time, this program helped drive measurable improvements, including increases in SUS from 14 to 36 and in PMF from 19 to 23.
The scorecard became a key input for product prioritization, performance improvements, and UX optimization efforts.
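The PMF figures above come from the survey component of the scorecard. A common way such a score is computed is the Sean Ellis product-market-fit question; the sketch below assumes that format, and the response labels and data are illustrative only.

```python
# Illustrative PMF scoring sketch, assuming the common Sean Ellis survey format:
# "How would you feel if you could no longer use the product?"
# The PMF score is the percentage of respondents answering "very disappointed".

def pmf_score(responses):
    return 100 * responses.count("very disappointed") / len(responses)

# Hypothetical cycle data
cycle_responses = ["very disappointed", "somewhat disappointed", "not disappointed",
                   "somewhat disappointed", "very disappointed"]
print(f"PMF score: {pmf_score(cycle_responses):.0f}%")  # 40%+ is the commonly cited benchmark
```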
Reflection
This work marked an early effort to integrate qualitative usability testing with quantitative survey data into a unified benchmarking program. While it provided a more complete picture, it also surfaced challenges in aligning methods, timing, and outputs.
We worked through coordinating study timelines across global time zones, aligning survey and task questions, synthesizing findings across data types, and translating everything into a clear narrative for stakeholders.
Although the quantitative researcher and I navigated the challenges of integrating quantitative and qualitative research, differing working styles, and competing schedules, these dual studies highlighted the need for strong upfront coordination when combining methods so the insights truly work together.
Takeaway: Mixed-methods research is most effective when it’s intentionally designed to connect from the start.