UX Research

Research that
changes
direction.

UX researcher with a background in ad tech and healthcare. I work across qualitative and quantitative methods, and I've been embedding AI into my research practice since 2023.

Kudzai Mushonga
Currently UX Researcher at The Trade Desk
View selected work ↓
01 — Approach

How I work

Good research starts before anyone asks for it. I spend a lot of time thinking about what a team doesn't know yet, and what it would cost them not to find out.

Proactive

I tend to propose studies before they're requested. If I can see a gap, I'd rather fill it than wait for someone else to notice it too.

Resourceful

I've never had everything I wanted for a study. The constraints change; the approach adapts. I make do and keep moving.

Beyond the feature

The most useful research I've done didn't answer the question I was given. It reframed the question entirely. That's what I aim for.

AI in practice

I started using AI in my research workflow in 2023, which also happened to be when I was studying an AI product at TTD. The two things informed each other.

Product-literate

I use the products I research. Not just to test them, but to form real opinions about what they get right and what they don't.

Method-agnostic

Qual, quant, or both. I pick based on what the question actually needs, not what's most comfortable or most available.

Tools and platforms

Dovetail  ·  Qualtrics  ·  UserZoom  ·  Pendo  ·  Miro  ·  Tableau  ·  Databricks  ·  SPSS  ·  R  ·  Pencil  ·  Figma  ·  Claude

02 — Selected work

Case studies

Study 01

Penn Medicine: Improving Sleep Quality for Neurology Patients

Healthcare  ·  Design research  ·  Mixed methods

A semester-long collaborative research project with Penn Medicine's Center for Health Care Innovation, Drexel University, and the Thomas R. Kline School of Law.

Research question

How do we
make hospitals
restful?

A multi-stakeholder design research initiative addressing sleep deprivation in neurology inpatient care

Context

Hospitals have many beds, but they are far from restful. Neurology patients in particular face chronic sleep disruption during inpatient stays, with direct consequences for recovery outcomes. This project explored how to increase rest while maintaining patient safety and care efficacy, through changes to when and where care is delivered, environmental nudges, sensory interventions, and other approaches.

I served as Graduate Researcher and UX Designer on a cross-disciplinary team spanning graduate research, medicine, and pre-law, supervised by Professor Diana Nicholas, Director of MS Design Research at Drexel's Department of Architecture, Design and Urbanism.

Stakeholder landscape

Before any solutions could be explored, I mapped the full ecosystem of people whose needs and constraints shaped what was possible.

Patients  ·  Nurses  ·  Doctors  ·  Patient Advocates  ·  MAs  ·  Orderlies  ·  Immediate Family  ·  Insurance Companies  ·  Hospital Design Team  ·  Hospital Legal Team  ·  Hospital Admin

Research phases

Sep — Oct Problem deep dive and secondary research
Oct Stakeholder mapping and interviews
Oct — Nov Primary interviews and additional secondary research
Nov Affinity mapping and solution concept development
Nov — Dec Expert interviews
Dec Data synthesis and Phase 1 report

The design direction

Research synthesis pointed to a multi-sensory intervention approach. Rather than a single fix, the solution aimed to help patients reconnect with all five senses before sleep, each one addressed with a targeted, low-cost intervention.

Sight

Calming images, natural scenes, family photos. Ambient lighting adjustments.

Smell

Essential oils (lavender, rosemary, pine) or scent pouches to calm the senses and aid sleep.

Touch

Hand lotion, comfortable sleep clothing, natural fibre and soft linens.

Hearing

Ambient sounds, calming music or audio books with soothing voices. Synced lighting and audio.

Taste

Relaxing teas (chamomile, mint) or a healthy snack before bed.

The concept

A sensory care basket, personalised per patient. Each item maps to one sense. Paired with a patient sleep diary.

What we tested and why it might fail

The prototype was a sensory care basket paired with a patient sleep diary, used as a cultural probe. Patients would evaluate their sleep quality and indicate what they'd keep, remove, or want more of. This gave us behavioral signal, not just self-reported preference.

We named the risks directly: distribution and monitoring overhead for clinical staff, hospital budget constraints, the need for patient diligence to complete the diary, and potential allergic reactions to sensory products requiring alternative options.

"Naming failure modes upfront is part of designing a credible study. If you can't say how it might not work, you haven't thought hard enough about how it will."

Measuring success

We designed a multi-instrument evaluation framework: a sleep survey before and after the basket intervention, the sleep diary as a continuous data point, a nurse diary to capture clinical observations, a patient satisfaction survey, and direct doctor observations. No single measure would tell the full story. The combination would.

Study 02

Spotify Recommendations: Why Doesn't It Know You Yet?

Passion project  ·  Interviews  ·  AI recommendation UX

A personal research initiative examining why Spotify's recommendation system feels disconnected from actual taste — even after years of use.

Status

In progress

Interviews are underway, and patterns are already emerging across casual and dedicated listener segments.

The problem

Spotify's recommendation system reacts to listening behavior, not actual taste. Users have no meaningful way to communicate what they love, so recommendations end up generic and disconnected from who they actually are as listeners.

The algorithm treats everything equally — background music, party playlists, a song played once by accident. The memes write themselves.

Early findings

Interviews are ongoing with both casual and dedicated listeners. Patterns are starting to emerge across very different user types.

  • Recommendations feel generic, not personal
  • No effective way to give meaningful feedback
  • Algorithm tracks play behavior, not emotional connection
  • Users build workarounds because they've already given up on the system

"I've been using Spotify for years and it still doesn't know me."

Project phases

01 Research planning — defined scope, recruited across casual and hardcore listener segments
02 → User interviews — active now. Semi-structured, exploring how users describe taste and recover when Spotify gets it wrong
03 Synthesis — thematic analysis, user personas, core insight statements
04 Design concepts — exploring what a more expressive feedback and discovery experience looks like

Why this project

This started as a personal project. As a music listener, collector, and DJ, I spend a lot of time thinking about taste and discovery. I live off-algorithm by design. But I wanted to understand how this problem shows up across different kinds of listeners, not just obsessives like me.

🔒

The following case studies contain confidential product information.

Enter the password to access. If you're a recruiter or hiring manager and need the password, reach out directly.


Study 03

Trading Modes: Researching a Proprietary Trading AI

AI product  ·  Usability  ·  Strategy

Usability research on The Trade Desk's proprietary trading AI, examining adoption blockers and trust gaps among experienced traders. My first AI product study, run in 2023.

Primary finding

Pricing
transparency

Identified as the top adoption blocker, redirecting roadmap priorities for the AI

Context

The Trade Desk's Kokai platform introduced Trading Modes, a proprietary AI that automates campaign optimization for traders. The product team needed to understand why experienced traders weren't adopting it despite its intended performance advantages. This was live AI decision-making, not a prototype.

What made this different

Researching an AI product is not the same as researching a feature. The question isn't "is this easy to use" but "do users trust it, and why not." I designed the study to surface trust gaps and mental model mismatches, not just friction in the flow. This work began in 2023 and shaped how I approach AI product research.

My approach

  • Led eight moderated sessions with experienced traders running live campaigns
  • Designed a protocol separating task-based observation from attitudinal probing
  • Mapped where traders' mental models broke down relative to how the AI made decisions
  • Produced individual participant reports before cross-participant synthesis to avoid anchoring on early patterns
  • Delivered a stakeholder presentation tailored to both product and design audiences

"Traders weren't skeptical of Performance Mode's results. They were skeptical they'd understand why it made the decisions it did."

What the research showed

Pricing transparency was the primary adoption blocker. Traders needed to understand the logic behind automated decisions before trusting them with live budgets. The AI wasn't failing at automation. It was failing at explainability.

Impact

The finding redirected the roadmap from onboarding flow improvements to transparency features for automated decision-making. The research shaped product strategy for the AI at the platform level, not just the feature level.

Study 04

ACO User Interviews: Contextual Data Willingness-to-Pay

Interviews Behavioral Pricing

Behavioral research on how traders actually value contextual targeting data, built to inform an investment case, not validate one.

Outcome

Investment
case framed

Gave PMs a behavioral framework for pricing conversations tied to actual trader decisions

Context

The Trade Desk was evaluating how to price Automated Contextual Optimization (ACO) features for traders. The key unknown: what drives traders to assign value to contextual targeting data, and where does willingness-to-pay break down? This had real revenue implications and needed behavioral research, not a survey.

My approach

I built a discussion guide centered on real campaign decisions rather than abstract pricing questions. "How much would you pay for X" tells you very little. "Walk me through the last time you decided whether to add a data source" tells you everything.

  • Designed a workflow-grounded protocol built around real campaign decision points
  • Interviewed traders across segments with varying contextual targeting experience
  • Mapped willingness-to-pay to specific use cases, not product capabilities in the abstract
  • Delivered plain-language findings accessible to PM and design partners without a research background

"The question isn't how much traders will pay for contextual data. It's which decisions they need it to make, and whether they trust it to make them."

What the research showed

Willingness-to-pay was task-dependent. Traders assigned significantly more value to data that reduced decision time on high-stakes placements than to data that improved general targeting efficiency. Trust was a prerequisite for any pricing conversation to happen at all.

Impact

The findings gave PMs a framework grounded in actual trader behavior rather than assumed value propositions. The research established which use cases warranted premium pricing conversations and which didn't.

Study 05

Audience Insights Stakeholder Interviews

Stakeholder research  ·  Discovery  ·  Investment case

Discovery research building the evidence base for integrating Audience Insights into Kokai. I proposed this study myself before anyone asked for it.

Outcome

Integration
thesis built

Moved the team from "should we do this" to a focused thesis with explicit success criteria

Context

Audience Insights was a standalone tool that product leadership believed had potential within Kokai. Before committing to integration work, the team needed a user-grounded investment case. No one asked me to lead this. I scoped it and proposed it to the PM.

My approach

I treated this as a discovery study, not a validation exercise. The goal was to understand how stakeholders and users actually used Audience Insights, where it fell short, and what integration into Kokai would genuinely unlock versus what was wishful thinking.

  • Scoped and led stakeholder interviews across product, sales, and customer success
  • Identified gaps between assumed use cases and actual usage patterns
  • Synthesized findings into an investment case with explicit assumptions and named risks
  • Structured the output for executive and PM audiences simultaneously, without two separate documents

"Good investment cases don't just argue for a product. They name what has to be true for it to work."

What the research showed

The strongest case for integration rested on a specific set of workflows that Kokai users couldn't currently do efficiently. Broader integration arguments lacked the behavioral evidence to support them. The research gave the team a focused thesis, not a sprawling one.

Impact

Product leadership had a user-grounded rationale for prioritizing integration, with success criteria tied to actual workflows. The research moved the conversation from "should we do this" to "here's what integration needs to solve and how we'll know if it worked."

03 — Background

About

UX Researcher currently at The Trade Desk, with a background spanning healthcare human factors, design research at Penn Medicine, and ad tech. M.S. in Human Computer Interaction, Drexel University. I work best when the research agenda isn't fully formed yet — some of my most useful studies were ones I proposed before anyone thought to ask. Off the clock, I'm a professional fangirl, music lover, and DJ with a physical media collection of over 650 vinyl records, cassettes, and CDs.

UX Researcher

The Trade Desk  ·  Current

M.S. Human Computer Interaction

Drexel University

AI research experience

Since 2023, including proprietary trading AI

Background

Ad tech  ·  Healthcare tech  ·  Human factors

Location

New York City

Get in touch.

Always interested in talking to people doing research in new territory.

kudzai.musho@gmail.com