
A cross-feature competitive evaluation series for UX patterns
TIMELINE
4 weeks (average)
ROLE
UX Researcher
TEAM
1-2 Researchers
Various stakeholders
SKILLS
Competitive Evaluation
Research Socialization
Information Design
Storytelling
⚠️
This page contains only a high-level view into these research studies. Please get in touch for a deeper look!
01 – OVERVIEW
A research ripple effect for driving strategic decision making
The IBM watsonx.ai product team includes many workstreams tackling different core features. As the product UX research team, we support all of them and accept research requests from product managers, developers, and designers.
TASK
Stakeholders across the watsonx.ai product team were looking for low-effort, high-impact research to support upcoming design workstreams for new features and initiatives.
They wanted to validate their current product strategies, set designers up for success by providing industry-standard practices, and understand opportunities for differentiation.
OUTCOME
I led our team's first UX design-oriented competitive evaluation. Socializing its results across our wider product team led to 4+ requests for similar work from other stakeholders and interest from 2+ teams outside our own.
Each study led to long-term research engagements as we conducted follow-up evaluative research.
My insights accelerated work in key areas across the product, including:
A new feature set to debut at IBM Think Conference 2025 →
Updated AI product documentation for watsonx →

02 – PROBLEM SPACE
How might we deliver impactful insights to fast-moving teams – without slowing them down?
Product teams are often in a rush to make design and feature decisions without a clear understanding of how competitors are solving similar problems or what UX patterns users have come to expect.
This lack of visibility can lead to missed opportunities for innovation, inconsistent user experiences, or solutions that don't align with industry standards. Without a structured evaluation of competitor UX patterns, teams risk reinventing the wheel or, worse, falling behind.
💡 Why competitive research?
Competitive evaluations can be scoped and executed quickly since they don't require participant recruitment, scheduling, or moderated sessions.
It is a flexible method that scales to any timeframe, making it a common option when exploring new features.
03 – THE APPROACH
Selecting the competitor products
While some stakeholders had ideas about the competitors they were interested in, I used a guided framework to ensure that the selections were appropriate.
This required desk research to rank candidate products, both those short-listed by our stakeholders and those newly discovered along the way.
The competitor list was finalized in stakeholder discussions, where this preliminary research served as evidence of our decision process. In parallel, these discussions refined our study goals and clarified the key questions.
🧩 Comparable
Are the product features relevant?
Is the target customer segment similar?
Is the value proposition aligned?
🎯 Relevant
Is the product a market leader?
Is the product a market disruptor?
Is the product considered to be of high quality?
Accessible
Is the product publicly available?
Is information readily available?
Can it be accessed at low/no cost?
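To make the framework concrete, here is a minimal sketch in Python of how such a rubric could be applied as a simple checklist tally. The criteria groups mirror the questions above; the question phrasings, candidate data, and short-list threshold are hypothetical illustrations, not the actual process.

# Hypothetical sketch: tally how many rubric questions a candidate satisfies.
CRITERIA = {
    "Comparable": ["relevant features", "similar customer segment", "aligned value proposition"],
    "Relevant": ["market leader", "market disruptor", "high quality"],
    "Accessible": ["publicly available", "information available", "low/no cost"],
}

def score(candidate):
    """Count the rubric questions this candidate product satisfies."""
    return sum(
        1
        for group, questions in CRITERIA.items()
        for question in questions
        if question in candidate.get(group, set())
    )

# A fictional candidate surfaced during desk research.
competitor_a = {
    "Comparable": {"relevant features", "aligned value proposition"},
    "Relevant": {"market leader"},
    "Accessible": {"publicly available", "low/no cost"},
}

THRESHOLD = 6  # illustrative cut-off agreed with stakeholders
print(score(competitor_a), score(competitor_a) >= THRESHOLD)  # -> 5 False

In practice the ranking can stay qualitative; even a rough tally like this simply makes the short-listing discussion with stakeholders more transparent.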

04 – EXECUTION
Executing the studies
Gathering our resources
Resources leveraged during the execution of these studies included:
existing secondary research at IBM
marketing websites
product documentation
YouTube video content
presentations and events
Synthesizing in Mural
With an emphasis on design patterns, I collected all data and product images into a single Mural board, tagging each piece of information as I visited different resources.
As I noticed patterns and themes, I began to form findings that eventually led to insights.
💡 Why collect data in this way?
Our design stakeholders voiced an interest in seeing the raw data with all visuals. The board became a tool in their design process.

Arriving at our key findings and recommendations
Each study differed based on its high-level goals, but key findings were often persona-centric conclusions I could draw from our competitors' approaches.
These high-level insights were paired with more tactical recommendations the design team could act on.
05 – DELIVERY
Presenting the key findings
Visualizing data and frameworks
In addition to product screenshots, I leveraged visuals wherever possible to communicate large groups of comparison data and high-level concepts. They served as a storytelling device to guide the audience through the playback.
💡 Why produce visuals?
Stakeholders are often overwhelmed by the amount of data brought to them during a playback. Swapping paragraphs of text for diagrams makes the same content digestible and engaging.

The additional artifacts
To complement the deck, I provided deliverables including one-pager executive summaries, insight/recommendation documents, and Mural boards of the raw data collection.
💡 Why have a variety of deliverables?
Multiple artifacts create opportunities for visibility, allowing teams to engage with the research in the way that fits their role.

Socializing our research
A key part of wrapping up each research study with the immediate stakeholders was socializing the content through relevant channels. I always encourage our stakeholders to share the main presentation link with those who may be interested.
💡 Why share our artifacts?
Getting research seen is key to maximizing its impact. In a large organization, widely shared research is likely to support an adjacent effort, and we always want to avoid duplicating research.
06 – IMPACT
The impact of the research
Quick wins for the team
Each competitive evaluation made an immediate impact by clarifying stakeholders' assumptions and current approaches. Many were able to produce user stories, GitHub issues, and tactical next steps.
10 user stories designated Critical priority were added to the Product Requirements Document.
"I just opened up the Mural and was also blown away. There's so much in there, some really, really fantastic resources for us to take a look at as we get started. So very excited."
– Product Design Lead
An MVP design was produced within a few weeks of the research playback.


Infusing research into all parts of our product… and beyond!
The visibility of the insights prompted new research requests from other teams, who saw the value of using competitive analysis as a starting point for exploration. I proactively shared each batch of insights in relevant channels and conversations to sustain the research enthusiasm building across our product team.
Multiple product managers reached out to learn more about bringing competitive evaluation into their own workstreams.
"After seeing that playback I was wondering who I could talk to about next steps to get a similar view on [topic]. If we could get some research around [topic], that would significantly accelerate and also help validate the strategy I've been working on."
– Product Manager
Our research team has since taken on 4+ competitive evaluation requests centered on UX patterns.
Each originating evaluation request has led to at least one follow-up evaluative study as new deliverables were produced.

07 – RETROSPECTIVE
What did I learn?
Don't skip steps
Even though repeating the same type of study suggested we could skip upfront scoping, having foundational conversations with stakeholders was essential to confirm that a competitive evaluation aligned with their goals and research needs.
Change is the only constant
In the fast-evolving AI space, public resources change frequently. I often saw competitor UIs update mid-study. I learned that capturing and archiving visuals in real-time was essential to avoid losing key data.
Visualize and simplify
I noticed that visual frameworks and diagrams were the most well-received since they made dense insights easier to digest. I started creating one-pager executive summaries for easy sharing, which also gained traction. I plan to keep pairing detailed findings with standalone artifacts that are simple to circulate.