UX Research
UX researcher with a background in ad tech and healthcare. I work across qualitative and quantitative methods, and I've been embedding AI into my research practice since 2023.
Good research starts before anyone asks for it. I spend a lot of time thinking about what a team doesn't know yet, and what it would cost them not to find out.
Proactive
I tend to propose studies before they're requested. If I can see a gap, I'd rather fill it than wait for someone else to notice it too.
Resourceful
I've never had everything I wanted for a study. The constraints change; the approach adapts. I make do and keep moving.
Beyond the feature
The most useful research I've done didn't answer the question I was given. It reframed the question entirely. That's what I aim for.
AI in practice
I started using AI in my research workflow in 2023, which also happened to be when I was studying an AI product at The Trade Desk. The two things informed each other.
Product-literate
I use the products I research. Not just to test them, but to form real opinions about what they get right and what they don't.
Method-agnostic
Qual, quant, or both. I pick based on what the question actually needs, not what's most comfortable or most available.
Tools and platforms
Dovetail · Qualtrics · UserZoom · Pendo · Miro · Tableau · Databricks · SPSS · R · Pencil · Figma · Claude
Hospitals have many beds, but they are far from restful. Neurology patients in particular face chronic sleep disruption during inpatient stays, with direct consequences for recovery outcomes. This project explored how to increase rest without compromising patient safety or care efficacy, through changes to when and where care is delivered, environmental nudges, sensory interventions, and other approaches.
I served as Graduate Researcher and UX Designer on a cross-disciplinary team spanning graduate research, medicine, and pre-law, supervised by Professor Diana Nicholas, Director of MS Design Research at Drexel's Department of Architecture, Design and Urbanism.
Before any solutions could be explored, I mapped the full ecosystem of people whose needs and constraints shaped what was possible.
Research synthesis pointed to a multi-sensory intervention approach. Rather than a single fix, the solution aimed to help patients reconnect with all five senses before sleep, each one addressed with a targeted, low-cost intervention.
Sight
Calming images, natural scenes, family photos. Ambient lighting adjustments.
Smell
Essential oils (lavender, rosemary, pine) or scent pouches to aid sleep and refresh the sense of smell.
Touch
Hand lotion, comfortable sleep clothing, natural fibre and soft linens.
Hearing
Ambient sounds, calming music or audio books with soothing voices. Synced lighting and audio.
Taste
Relaxing teas (chamomile, mint) or a healthy snack before bed.
The concept
A sensory care basket, personalised per patient. Each item maps to one sense. Paired with a patient sleep diary.
The prototype was a sensory care basket paired with a patient sleep diary, used as a cultural probe. Patients would evaluate their sleep quality and indicate what they'd keep, remove, or want more of. This gave us behavioral signal, not just self-reported preference.
We named the risks directly: distribution and monitoring overhead for clinical staff, hospital budget constraints, the need for patient diligence to complete the diary, and potential allergic reactions to sensory products requiring alternative options.
"Naming failure modes upfront is part of designing a credible study. If you can't say how it might not work, you haven't thought hard enough about how it will."
We designed a multi-instrument evaluation framework: a sleep survey before and after the basket intervention, the sleep diary as a continuous data point, a nurse diary to capture clinical observations, a patient satisfaction survey, and direct doctor observations. No single measure would tell the full story. The combination would.
Spotify's recommendation system reacts to listening behavior, not actual taste. Users have no meaningful way to communicate what they love, so recommendations end up generic and disconnected from who they actually are as listeners.
The algorithm treats everything equally — background music, party playlists, a song played once by accident. The memes write themselves.
Interviews are ongoing with both casual and dedicated listeners. Patterns are starting to emerge across very different user types.
"I've been using Spotify for years and it still doesn't know me."
This started as a personal project. As a music listener, collector, and DJ, I spend a lot of time thinking about taste and discovery. I live off-algorithm by design. But I wanted to understand how this problem shows up across different kinds of listeners, not just obsessives like me.
Industry work — The Trade Desk (confidential)
The following case studies contain confidential product information.
Enter the password to view them. If you're a recruiter or hiring manager and need the password, reach out directly.
Incorrect password. Reach out if you need access.
The Trade Desk's Kokai platform introduced Trading Modes, a proprietary AI that automates campaign optimization for traders. The product team needed to understand why experienced traders weren't adopting it despite its intended performance advantages. This was live AI decision-making, not a prototype.
Researching an AI product is not the same as researching a feature. The question isn't "Is this easy to use?" but "Do users trust it, and if not, why?" I designed the study to surface trust gaps and mental model mismatches, not just friction in the flow. This work began in 2023 and shaped how I approach AI product research.
"Traders weren't skeptical of Performance Mode's results. They were skeptical they'd understand why it made the decisions it did."
Pricing transparency was the primary adoption blocker. Traders needed to understand the logic behind automated decisions before trusting them with live budgets. The AI wasn't failing at automation. It was failing at explainability.
The finding redirected the roadmap from onboarding flow improvements to transparency features for automated decision-making. The research shaped product strategy for the AI at the platform level, not just the feature level.
The Trade Desk was evaluating how to price Automated Contextual Optimization (ACO) features for traders. The key unknown: what drives traders to assign value to contextual targeting data, and where does willingness-to-pay break down? This had real revenue implications and needed behavioral research, not a survey.
I built a discussion guide centered on real campaign decisions rather than abstract pricing questions. "How much would you pay for X" tells you very little. "Walk me through the last time you decided whether to add a data source" tells you everything.
"The question isn't how much traders will pay for contextual data. It's which decisions they need it to make, and whether they trust it to make them."
Willingness-to-pay was task-dependent. Traders assigned significantly more value to data that reduced decision time on high-stakes placements than to data that improved general targeting efficiency. Trust was a prerequisite for any pricing conversation to happen at all.
The findings gave PMs a framework grounded in actual trader behavior rather than assumed value propositions. The research established which use cases warranted premium pricing conversations and which didn't.
Audience Insights was a standalone tool that product leadership believed had potential within Kokai. Before committing to integration work, the team needed a user-grounded investment case. No one asked me to lead this. I scoped it and proposed it to the PM.
I treated this as a discovery study, not a validation exercise. The goal was to understand how stakeholders and users actually used Audience Insights, where it fell short, and what integration into Kokai would genuinely unlock versus what was wishful thinking.
"Good investment cases don't just argue for a product. They name what has to be true for it to work."
The strongest case for integration rested on a specific set of workflows that Kokai users couldn't currently do efficiently. Broader integration arguments lacked the behavioral evidence to support them. The research gave the team a focused thesis, not a sprawling one.
Product leadership had a user-grounded rationale for prioritizing integration, with success criteria tied to actual workflows. The research moved the conversation from "should we do this" to "here's what integration needs to solve and how we'll know if it worked."
UX Researcher currently at The Trade Desk, with a background spanning healthcare human factors, design research at Penn Medicine, and ad tech. M.S. in Human-Computer Interaction, Drexel University. I work best when the research agenda isn't fully formed yet — some of my most useful studies were ones I proposed before anyone thought to ask. Off the clock, I'm a professional fangirl, music lover, and DJ with a physical media collection of over 650 vinyl records, cassettes, and CDs.
UX Researcher
The Trade Desk · Current
M.S. Human-Computer Interaction
Drexel University
AI research experience
Since 2023, including proprietary trading AI
Background
Ad tech · Healthcare tech · Human factors
Location
New York City
Always interested in talking to people doing research in new territory.
kudzai.musho@gmail.com