Fintech PMs Must Master User Research — Here’s How to Do It Right

TL;DR

Most fintech PMs treat user research as a box-checking exercise, not a strategic lever. At Chime and Monzo, the best product leaders don’t run research to validate features — they use it to redefine customer problems. If your research output is a slide deck, you’ve already failed. Success isn’t measured in interviews conducted, but in product decisions redirected.

Who This Is For

This is for product managers with 2–5 years of experience working on consumer fintech apps, especially at digital neobanks or challenger banks like Chime, Monzo, N26, or Revolut. You’ve run usability tests and read through NPS comments. You think you understand user research. You don’t. You’re mistaking data collection for insight generation. This guide is for PMs who want to stop being order-takers and start shaping strategy through evidence.

What Does Good User Research Look Like in Fintech?

Good user research in fintech doesn’t end with insights — it begins there. At Monzo, a research sprint in Q2 2022 with 47 low-income users revealed they weren’t overdrafting because they forgot bills; they were choosing which bill to skip. That reframed the entire overdraft product from a risk-managed lending feature to a cash-flow triage tool. The research didn’t suggest a new feature — it killed three existing ones and redirected £1.2M in engineering effort.

The problem isn’t access to users. Chime PMs can pull 500 survey responses in 48 hours. The failure is in synthesis. Most PMs stop at “users want simpler fee disclosures” — weak, generic, unactionable. The ones who win reframe: “users don’t trust fee disclosures because they’ve been burned before, so transparency alone won’t rebuild trust — behavioral proof will.”

Not insight, but judgment. Not volume, but precision. Not “what they said,” but “what they didn’t say.” In a Chime debrief, the hiring manager pushed back when a PM presented verbatim quotes as findings. “That’s not research — that’s transcription,” he said. “Where’s your model of why they’re behaving this way?”

At Monzo, structured synthesis sessions are mandatory post-research. PMs must submit a one-pager titled “Three Lies Users Told Themselves This Week” — forcing them to identify cognitive dissonance, not just pain points. That’s where real leverage lives.

How Do You Frame the Right Research Questions?

The research question determines the ROI, not the methodology. Most PMs at Chime start with “How can we improve savings adoption?” — a solution-biased, closed frame. The top performers start with “What emotional trade-offs do users make when choosing between paying rent and saving $20?” — a human-centered, open frame.

In a Q3 2021 Chime hiring committee meeting, two PM candidates presented research on overdraft usage. Candidate A asked, “How can we increase opt-ins?” and ran a survey. Candidate B asked, “When do users feel shame about spending?” and conducted in-context diary studies. Candidate B got the offer — not because the methods were better, but because the question revealed a model of behavior.

The shift isn’t from bad to good questions. It’s from product-led to identity-led framing. Not “what do users need?” but “who do they want to become?” At Monzo, research around “money guilt” didn’t yield a feature — it reshaped the tone of automated notifications. They replaced “You’ve gone over budget” with “This week was tough. Let’s reset together.”

Not validation, but discovery. Not usability, but meaning. Not “can they use it?” but “will they defend it?” — that’s the threshold for behavioral loyalty.

How Do You Recruit the Right Users Without Biasing Results?

Recruiting the right users isn’t about demographics — it’s about behavioral thresholds. Chime once ran a savings feature test with users earning over $40K. Engagement was high. Launched to all, it flopped. Post-mortem: the test cohort had stable income timing. The real issue wasn’t income level — it was paycheck volatility.

Top PMs at Monzo don’t recruit by income bracket. They recruit by financial rhythm. One study on emergency spending screened for users who had “delayed a medical visit due to cash flow, not cost.” That specificity surfaced behaviors invisible in broad surveys. They found users would skip insulin refills not because they couldn’t afford them, but because they prioritized keeping the car running to get to work.
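Screening by financial rhythm rather than income bracket can be operationalized as a simple volatility check on pay timing. The sketch below is illustrative, not Monzo's or Chime's actual screener: the function names, the deposit-day schema, and the 0.25 threshold are all assumptions; real transaction data would first need payroll classification.

```python
from statistics import mean, pstdev

def paycheck_volatility(deposit_days):
    """Coefficient of variation of the gaps between pay deposits.

    deposit_days: sorted day-offsets of payroll deposits (hypothetical
    schema for illustration only).
    """
    gaps = [b - a for a, b in zip(deposit_days, deposit_days[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0
    return pstdev(gaps) / mean(gaps)

def screen_for_study(users, threshold=0.25):
    """Recruit by financial rhythm: keep users whose pay timing varies."""
    return [uid for uid, days in users.items()
            if paycheck_volatility(days) >= threshold]

users = {
    "salaried": [0, 14, 28, 42, 56],  # steady biweekly pay: volatility 0
    "gig":      [0, 9, 25, 31, 52],   # irregular gig income
}
print(screen_for_study(users))  # only the volatile-income user qualifies
```

The point of the design choice: two users with identical annual income produce opposite screening results, which is exactly the distinction the broad income-bracket recruit missed.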

Incentives matter more than screening. Chime learned the hard way: $10 gift cards attract professional survey-takers. A $25 prepaid card sent after completion — with no tracking — attracted real users. Response rates dropped 40%, but data quality doubled.

Not representativeness, but richness. Not N=500, but N=12 with depth. Not “statistical significance,” but “strategic significance.”

One Monzo lead PM banned the term “target user” after a research session where a participant said, “I only downloaded Monzo because my landlord uses it to collect rent — I don’t even trust it.” That user wasn’t in the persona deck — but their behavior explained 18% of dormant accounts.

How Do You Synthesize Findings Into Actionable Strategy?

Synthesis is where 90% of research dies. Most PMs dump session notes into Airtable, pull quotes, and call it done. At Chime, a post-research review in 2023 rejected a PM’s findings because the insights were reversible — they could explain opposite behaviors. “Users want control” can justify both more customization and fewer options. That’s not insight — it’s noise.

The best PMs build causal models. After 23 interviews on late fee avoidance, a Monzo PM mapped a decision tree: users didn’t ignore due dates — they saw them, calculated trade-offs, and chose to pay the phone bill over the credit card. The insight wasn’t “send earlier reminders.” It was “help them make trade-off decisions before the due date.” That led to a new feature: “This Week’s Money Battle,” which surfaced upcoming conflicts (rent vs. car payment) and let users pre-commit.
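The trade-off logic described above can be sketched in a few lines. This is a hypothetical reconstruction of the idea behind “This Week’s Money Battle,” not Monzo’s implementation: the bill schema, horizon, and function names are assumptions for illustration.

```python
from datetime import date, timedelta

def upcoming_conflicts(bills, balance, horizon_days=7):
    """Surface weeks where due bills exceed available balance.

    bills: list of (name, amount, due_date) tuples (hypothetical schema).
    Returns None when no trade-off is needed, otherwise the conflicting
    bills sorted largest-first so the user can pre-commit to a choice.
    """
    today = date.today()
    window = [b for b in bills
              if today <= b[2] <= today + timedelta(days=horizon_days)]
    total = sum(amount for _, amount, _ in window)
    if total <= balance:
        return None  # no money battle this week
    return sorted(window, key=lambda b: b[1], reverse=True)

bills = [
    ("rent", 800, date.today() + timedelta(days=2)),
    ("car",  300, date.today() + timedelta(days=5)),
    ("gym",   30, date.today() + timedelta(days=20)),  # outside the week
]
print(upcoming_conflicts(bills, balance=900))
```

Note the framing: the function does not nag about due dates; it detects the conflict early enough for the user to decide, which is the insight the interviews surfaced.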

Chime’s product council now requires every research summary to include:

  • One behavior that contradicts the product’s assumptions
  • One user goal that isn’t served by any current feature
  • One emotional state that predicts churn

These aren’t nice-to-haves. They’re the minimum bar.

Not themes, but tensions. Not patterns, but paradoxes. Not “users said,” but “the data forces us to admit.”

How Do You Integrate Research Into the Fintech Product Lifecycle?

Research isn’t a phase — it’s a rhythm. At Monzo, PMs don’t “do research” before a build. They run weekly 30-minute user calibrations — not usability tests, but sense-making sessions. “Last week, 4 users called fees ‘unfair’ but still paid them. Why?” That’s the question on the agenda.

Chime’s high-performing teams treat research like a pipeline:

  • Weekly: 2 unmoderated usability clips reviewed in sprint retro
  • Biweekly: 1 deep-dive synthesis session with design and engineering
  • Monthly: 1 “voice of the user” presentation to execs — no metrics, only stories

One Chime team killed a six-month savings goal feature after a single diary study showed users set goals they didn’t believe in — “I put ‘$500’ because I thought I should, not because I could.” The project was scrapped in 48 hours. Engineering grumbled. The PM was promoted.

Not research sprints, but research infrastructure. Not “we talked to users,” but “here’s how user input changed our backlog priority.”

In a 2022 Monzo exec meeting, a PM presented a new overdraft pricing model. When asked for evidence, she didn’t show survey data — she played a 47-second audio clip of a user whispering, “I feel like a failure every time I dip below zero.” The room went quiet. The pricing team rewrote the model in two days.

That’s integration. Not input, but intrusion.

Interview Process / Timeline
At Chime and Monzo, PM candidates don’t just interview — they simulate research. The process:

  • Round 1 (30 min): Recruiter screens for domain familiarity — can you name three financial behaviors unique to gig workers?
  • Round 2 (60 min): Case study — “Design a research plan to understand why 70% of users who enable auto-save disable it within two weeks.” Top candidates define behavioral cohorts, not demographics.
  • Round 3 (90 min): Role-play — you’re given a messy transcript of five user calls. You have 20 minutes to identify the core insight and present it to a mock product council. The bar isn’t polish — it’s precision.
  • Round 4 (60 min): Values alignment — “Tell me about a time your research contradicted the roadmap.” If you say you “adjusted messaging,” you fail. If you say you “escalated a strategic risk,” you’re in.

In a hiring committee at Monzo, two candidates analyzed the same data. One concluded, “Users don’t understand the feature.” The other said, “The feature assumes stability — these users live in volatility.” The second got the offer.

The timeline: two weeks from application to decision. No delays. Research rigor is non-negotiable.

Mistakes to Avoid

  1. Mistake: Treating research as validation.
    Bad: “We built a feature — let’s test it.”
    Good: “We have a hypothesis about user identity — let’s stress-test it.”
    In 2021, a Chime PM tested a round-up savings tool with 30 users. All said it was “great.” Launched to 100K, 82% turned it off in 14 days. The research wasn’t wrong — it was misused. It validated usability, not motivation.

  2. Mistake: Separating research from shipping.
    Bad: Handing off findings to design and walking away.
    Good: Co-writing PRDs with behavioral assumptions from research.
    At Monzo, one PM included a “user lie detector” section in every spec: “This feature assumes users will check their balance daily. They don’t. Here’s the evidence.”

  3. Mistake: Over-relying on quantitative data.
    Bad: “80% of users want budgeting tools.”
    Good: “Among users who tried budgeting tools, 80% abandoned them because they felt judged, not helped.”
    Chime once surveyed 1,200 users. 74% said they “wanted help with spending.” When observed, only 11% engaged with spending insights. The gap wasn’t in the data — it was in the behavior.
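The say/do gap in that example is easy to quantify once you join stated intent with observed behavior. A minimal sketch, assuming hypothetical survey and event-log fields (nothing here reflects Chime's actual instrumentation):

```python
def say_do_gap(survey, events):
    """Compare stated intent with observed behavior.

    survey: {user_id: True if they said they want spending help}
    events: {user_id: count of spending-insight interactions}
    (Both schemas are illustrative assumptions.)
    """
    said = {u for u, wants in survey.items() if wants}
    did = {u for u in said if events.get(u, 0) > 0}
    return {
        "said_pct": len(said) / len(survey),
        "did_pct": len(did) / len(said) if said else 0.0,
    }

survey = {"u1": True, "u2": True, "u3": True, "u4": False}
events = {"u1": 5}  # only one of the three "interested" users engaged
print(say_do_gap(survey, events))
```

The output makes the argument for you in a product council: stated demand and realized engagement are two different metrics, and only the second one ships.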

Not activity, but impact. Not delivery, but disruption.

Checklist: User Research Readiness for Fintech PMs

  • Defined the behavioral threshold for recruitment (e.g., “has overdrafted 3x in 6 months”)
  • Framed the research question around identity, not function (e.g., “who do they want to be?”)
  • Built a synthesis plan before collecting data (one-pager due post-study)
  • Scheduled integration points into roadmap reviews (biweekly)
  • Secured access to real user voices (recordings, not summaries)
  • Identified one existing assumption to disprove
  • Allocated time for team sense-making, not just individual analysis

If you can’t check all seven, delay the study.

FAQ

Why isn’t NPS enough for user research?

NPS measures satisfaction, not behavior. At Chime, users with NPS 9–10 still churned at the same rate as 0–6 if they’d overdrafted twice. The score didn’t predict retention — specific interactions did. Research must explain the gap between what users say and what they do.

How much time should a fintech PM spend on research weekly?

Top PMs at Monzo spend 30% of their time in direct user contact — not running studies, but listening. That includes reviewing session recordings, reading diaries, and joining support calls. It’s not “extra.” It’s core work. If you’re below 15%, you’re out of touch.

Can you do good research with a small sample size?

Yes, if you’re studying behavior, not opinions. Five in-depth interviews can uncover a mental model that shifts strategy. Chime’s “fee shame” insight came from 7 users. The sample was small — the impact wasn’t. Depth beats breadth when you’re hunting for causality.

Related Reading


Need the companion prep toolkit? The PM Interview Prep System includes frameworks, mock interview trackers, and a 30-day preparation plan.


About the Author

Johnny Mai is a Product Leader at a Fortune 500 tech company with experience shipping AI and robotics products. He has conducted 200+ PM interviews and helped hundreds of candidates land offers at top tech companies.