Public PM Interview: Analytical and Metrics Questions
TL;DR
Public PM interview questions on analytics and metrics test judgment, not calculation speed. Candidates fail by quoting frameworks instead of making prioritization calls. The real test is how you define success when data is ambiguous — which is always.
Who This Is For
This is for product managers with 3–7 years of experience applying to Public, a fintech platform focused on fractional investing and wealth democratization. You’ve passed the resume screen and are preparing for the onsite loop, particularly the analytical deep-dive and metric design rounds. You’re not an entry-level candidate; Public expects you to operate with minimal direction and influence cross-functional teams under uncertainty.
How does Public evaluate analytical thinking in PM interviews?
Public measures analytical thinking by how you constrain problems, not how fast you solve them. In a Q3 hiring committee (HC) meeting, an engineer pushed back on a candidate who proposed measuring engagement via DAU/MAU — not because the metric was wrong, but because it was lazy. The product lead said: “We don’t need another person who recites AARRR. We need someone who can decide which part of AARRR matters this quarter.”
Analytical thinking at Public isn’t about running regressions. It’s about scoping. The candidate who won the role narrowed the question from “improve user retention” to “reduce drop-off after first trade execution,” then picked one friction point: confirmation delay. The goal isn’t comprehensiveness; it’s surgical focus.
Public’s PM interviews simulate real ambiguity. You won’t get clean dashboards. You’ll get partial data, conflicting stakeholder inputs, and 12 minutes to decide what to build. In a recent debrief, the hiring manager praised a candidate who said, “We’re measuring the wrong thing — Net Revenue Retention doesn’t reflect user satisfaction in our model.” That’s the signal: not analysis, but challenge.
What kind of metrics questions will I get in a Public PM interview?
You’ll face three types: metric definition, metric change diagnosis, and metric trade-off prioritization. Public uses these in both the take-home and live case rounds. Expect 1–2 hours total dedicated to metrics across the 4-round onsite.
Metric definition questions sound like: “How would you measure the success of a new auto-invest feature?” The trap is listing every possible KPI. The win is picking one North Star and justifying it. In a recent interview, a candidate listed 8 metrics — session duration, CTR, completion rate, NPS, LTV, churn, support tickets, and referral rate. The debrief note: “Over-indexed on breadth. No decision signal.”
The correct approach is not to list, but to ladder up to business outcomes. Public’s model depends on AUM (assets under management) growth and cost-efficient retention. A strong answer anchors there: “I’d track % of users setting recurring investments, because it increases AUM velocity and reduces churn. Secondary: time-to-first-auto-deposit, because delays here indicate UX friction that compounds.” That’s not metrics — it’s strategy.
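For illustration, a minimal sketch of how that primary metric could be pulled. The table and column names here are hypothetical, not Public’s actual schema:

```sql
-- Hypothetical schema: users(user_id, is_test_account),
-- auto_invest_plans(user_id, status)
-- Primary metric: % of real users with an active recurring-investment plan
SELECT
  COUNT(DISTINCT p.user_id) * 100.0 / COUNT(DISTINCT u.user_id) AS pct_recurring
FROM users u
LEFT JOIN auto_invest_plans p
  ON p.user_id = u.user_id
 AND p.status = 'active'
WHERE u.is_test_account = FALSE;
```

One number, tied directly to AUM velocity, is a stronger interview artifact than eight disconnected KPIs.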
Metric change diagnosis questions sound like: “DAU dropped 15% last week. Debug.” Public doesn’t care if you guess the right cause. They care if you isolate variables. The winning candidates stratify: by user cohort (new vs. existing), by feature (trading vs. learning), by platform (iOS vs. web). The HC rejected a candidate who jumped to “server outage” without checking user-level patterns. “Assumed a technical root, but 90% of the drop was in first-time investors. That’s a product onboarding issue, not a backend one.”
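A hedged sketch of that stratification, assuming a hypothetical activity table with platform and cohort flags:

```sql
-- Hypothetical table: activity(user_id, activity_date, platform, is_new_user)
-- Localize the drop before guessing a root cause: compare unique actives
-- this week vs. the prior week, split by platform and cohort
SELECT
  platform,
  CASE WHEN is_new_user THEN 'new' ELSE 'existing' END AS cohort,
  COUNT(DISTINCT CASE WHEN activity_date >= CURRENT_DATE - 7
                      THEN user_id END) AS actives_this_week,
  COUNT(DISTINCT CASE WHEN activity_date <  CURRENT_DATE - 7
                      THEN user_id END) AS actives_prior_week
FROM activity
WHERE activity_date >= CURRENT_DATE - 14
GROUP BY 1, 2
ORDER BY 1, 2;
```

If the gap is concentrated in one cell of that grid, you have a hypothesis worth stating; if it is uniform, a systemic cause becomes plausible.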
Trade-off questions are the hardest: “You can improve load time by 200ms or add a new educational tooltip. Which do you pick and how do you measure impact?” The bad answer: “It depends.” The good answer: “I’d A/B test both, but prioritize load time because our data shows >1s latency correlates with 30% drop-off in checkout. Tooltips have <2% CTR historically, so marginal impact.” The math isn’t the point. The prioritization logic is.
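To ground that prioritization, you could sanity-check the latency claim yourself. A sketch against a hypothetical sessions table:

```sql
-- Hypothetical table: sessions(session_id, load_time_ms, completed_checkout)
-- Does >1s load time actually track with checkout drop-off?
SELECT
  CASE WHEN load_time_ms > 1000 THEN 'over_1s' ELSE 'under_1s' END AS latency_bucket,
  COUNT(*) AS sessions,
  AVG(CASE WHEN completed_checkout THEN 1.0 ELSE 0.0 END) AS checkout_rate
FROM sessions
GROUP BY 1;
```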
How is Public’s approach to metrics different from other fintechs?
Public treats metrics as policy levers, not dashboard ornaments. At other fintechs, PMs optimize for funnel conversion or CAC/LTV. At Public, the constraint is user trust, because their model relies on community-led growth and low-touch support. One hiring manager put it this way: “If we push aggressive monetization, we kill the referral engine. That’s why NPS isn’t a vanity metric here — it’s a leading indicator of scalability.”
This shifts metric design. At Chime or SoFi, a PM might celebrate a 10% increase in credit card applications. At Public, that same push could degrade the experience for non-trading users — who are critical for social sharing. So the real KPI isn’t application volume, but balance of engagement across user segments. One candidate proposed tracking “% of users who both trade and share” — that got a strong hire vote.
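A minimal sketch of that cross-segment metric, again with hypothetical table names:

```sql
-- Hypothetical tables: trades(user_id, executed_at), shares(user_id, shared_at)
-- % of users active in either behavior who did BOTH in the last 30 days
WITH traders AS (
  SELECT DISTINCT user_id FROM trades WHERE executed_at >= CURRENT_DATE - 30
),
sharers AS (
  SELECT DISTINCT user_id FROM shares WHERE shared_at >= CURRENT_DATE - 30
),
actives AS (
  SELECT user_id FROM traders UNION SELECT user_id FROM sharers
)
SELECT
  (SELECT COUNT(*) FROM traders JOIN sharers USING (user_id)) * 100.0
    / (SELECT COUNT(*) FROM actives) AS pct_trade_and_share;
```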
Public also rejects last-touch attribution. Their product mix includes investing, learning, and community features. A user might watch 12 videos before trading. PMs who attribute conversion only to the trade flow miss the point. In a debrief, the data lead said: “We need PMs who understand multi-touch value, not just last-click credit.” The underlying insight: Public measures product synergy, not isolated feature performance.
This shows up in interview feedback. Candidates who say “I’d track conversion from learn-to-trade” get neutral reviews. Candidates who say “I’d build a cohort model to see if video watchers have higher LTV, even if they don’t trade immediately” move forward. It’s not the accuracy of the analysis that matters, but the depth of the systems thinking.
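That cohort comparison could start as simply as this sketch; every table and column name is hypothetical:

```sql
-- Hypothetical tables: users(user_id, is_test_account),
-- video_views(user_id), revenue(user_id, amount)
-- Do video watchers show higher revenue per user, whether or not they trade?
SELECT
  CASE WHEN v.user_id IS NOT NULL THEN 'watched_videos' ELSE 'no_videos' END AS cohort,
  COUNT(DISTINCT u.user_id) AS users,
  COALESCE(SUM(r.amount), 0) / COUNT(DISTINCT u.user_id) AS revenue_per_user
FROM users u
LEFT JOIN (SELECT DISTINCT user_id FROM video_views) v ON v.user_id = u.user_id
LEFT JOIN revenue r ON r.user_id = u.user_id
WHERE u.is_test_account = FALSE
GROUP BY 1;
```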
How should I structure my answers to metrics questions?
Lead with the business objective, not the metric. In a recent loop, a candidate started with: “For the auto-invest feature, I’d track completion rate.” Immediate red flag. The interviewer interrupted: “Why completion rate? What business outcome does that drive?” The candidate faltered. The debrief: “Started tactical. No line of sight to AUM or retention.”
The correct structure (sketched in a query after this list) is:
- State the business goal (e.g., increase AUM)
- Define the user behavior that drives it (e.g., recurring deposits)
- Pick the metric that best reflects behavior change (e.g., % of users setting auto-invest)
- Acknowledge second-order effects (e.g., could reduce manual trading, so track both)
- Define guardrail metrics (e.g., support tickets, error rates)
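Compiled end-to-end, the chain might look like a single annotated pull. Every name below is hypothetical, including the rolled-up 30-day columns:

```sql
-- 1. Business goal: grow AUM
-- 2. Driving behavior: recurring deposits
-- 3. Metric: % of real users with an active auto-invest plan
-- 4. Second-order effect: watch manual trading alongside it
-- 5. Guardrail: support tickets per user
SELECT
  AVG(CASE WHEN EXISTS (SELECT 1 FROM auto_invest_plans p
                        WHERE p.user_id = u.user_id
                          AND p.status = 'active')
           THEN 1.0 ELSE 0.0 END) AS pct_auto_invest,
  AVG(u.manual_trades_30d)        AS avg_manual_trades_30d,
  AVG(u.support_tickets_30d)      AS avg_support_tickets_30d
FROM users u
WHERE u.is_test_account = FALSE;
```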
This isn’t a framework — it’s a decision chain. Public PMs use it in real meetings. In a Q2 planning session, the head of product shut down a proposal by saying: “You haven’t linked your metric to the P&L. Until you do, it’s just activity.” That’s the standard.
Candidates often spend too long on step 3. They debate whether to use median or mean deposit size. That’s wasted time. Public cares about steps 1 and 2: is your logic traceable? In a rejected candidate’s notes: “Spent 4 minutes discussing confidence intervals. Never clarified whether the feature was meant to drive growth or reduce support load.” Rigor in the wrong place is worse than none at all.
One candidate stood out by saying: “I don’t know the perfect metric yet — but I know we need to move the AUM curve. So I’d start by measuring adoption of recurring plans, then refine based on user interviews.” That’s acceptable at Public. They’d rather hear “I’ll learn” than “I’ve decided” when the data isn’t in.
How important are SQL and data skills in the Public PM interview?
SQL is a threshold skill, not a differentiator. Public expects PMs to write basic queries — filtering, grouping, joins — but won’t make you whiteboard window functions. The bar is: can you pull your own data to answer a product question in <20 minutes?
In the take-home exercise, candidates get a schema and must answer 3–4 questions using SQL. One recent prompt: “Calculate the 7-day retention rate for users who completed the educational onboarding.” The rubric had three weighted criteria: correctness (50%), clarity of code (30%), and interpretation (20%). A candidate lost points for correct SQL but no context: “You showed the number, but not what it means for the business.”
Interviewers don’t care about syntax perfection. They care if you think in data. One candidate used “COUNT(DISTINCT user_id)” correctly but didn’t filter out test accounts — a fatal error. The debrief: “Missed data hygiene. Can’t trust their analysis.” At Public, clean data is non-negotiable because their compliance team audits product metrics.
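A Postgres-flavored sketch of the kind of answer that retention prompt expects, including the test-account filter the rejected candidate missed; the schema is hypothetical:

```sql
-- Hypothetical tables: onboarding(user_id, completed_at),
-- activity(user_id, activity_date), users(user_id, is_test_account)
-- 7-day retention: of users who completed educational onboarding,
-- what share came back within days 1-7 afterward?
SELECT
  COUNT(DISTINCT a.user_id) * 100.0
    / COUNT(DISTINCT o.user_id) AS retention_7d_pct
FROM onboarding o
JOIN users u
  ON u.user_id = o.user_id
 AND u.is_test_account = FALSE       -- data hygiene: exclude test accounts
LEFT JOIN activity a
  ON a.user_id = o.user_id
 AND a.activity_date BETWEEN o.completed_at::date + 1
                         AND o.completed_at::date + 7;
-- Then interpret: say what the rate implies for onboarding quality,
-- not just the number.
```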
The deeper issue is not technical skill, but data skepticism. In a live case, an interviewer said: “Our data shows 80% of users complete onboarding.” A strong candidate replied: “Is that tracked client-side or server-side? If it’s client-side, we might be missing crashes.” That got a hire vote. The SQL matters less than your ability to question the data’s validity.
Public doesn’t require Python or stats modeling. But if you mention regression in an interview, be ready to explain it. One candidate said, “I’d run a logistic regression on churn.” When asked what covariates they’d include, they froze. The feedback: “Used advanced term without depth. Damaged credibility.”
Preparation Checklist
- Run through at least 3 timed metric design exercises: define, debug, trade-off
- Practice SQL on real fintech schemas — focus on time-series and retention queries
- Study Public’s product: understand how AUM, engagement, and trust interact
- Prepare 2–3 examples where you changed a metric and drove a business outcome
- Work through a structured preparation system (the PM Interview Playbook covers Public’s metrics philosophy with real debrief examples from 2023–2024 cycles)
- Do a mock interview with a PM who’s gone through Public’s loop
- Write a one-pager on how you’d measure success for Public’s latest feature (e.g., Smart Deposits)
Mistakes to Avoid
BAD: “I’d track DAU, MAU, NPS, CAC, LTV, and churn.”
GOOD: “I’d track % of users making recurring investments, because it drives AUM growth and predicts long-term retention. Guardrail: support tickets to catch UX issues.”
Why it matters: Public wants focus, not fluency. Listing metrics is not strategy.
BAD: “The drop in DAU is probably a tech issue.”
GOOD: “Let me segment the drop by cohort and platform. If it’s concentrated in new users on Android, it’s likely an onboarding bug — not a system outage.”
Why it matters: Premature root cause assumptions show weak diagnostic discipline.
BAD: “I’d A/B test everything.”
GOOD: “I’d test the higher-impact change first — load time — because latency >1s has historically driven 30% drop-off. Tooltips have low CTR, so lower ceiling.”
Why it matters: “Test everything” is abdication. Prioritization is the PM’s job.
FAQ
What’s the most common reason candidates fail the metrics round at Public?
They treat metrics as measurement tools, not decision levers. In a Q4 HC, a candidate built a perfect retention model but couldn’t say which lever to pull. The hiring manager said: “We don’t need an analyst. We need a PM who ships.” The failure wasn’t technical — it was judgment.
Do Public PMs work with data scientists, or are they expected to do their own analysis?
PMs are expected to do their own exploratory analysis. Public’s data science team supports complex modeling, but PMs must pull basic data. In one case, a new hire waited 3 days for a DS to confirm a 5-line query. The skip-level feedback: “You should’ve written it yourself. Speed matters.”
Is there a specific framework Public wants you to use in metrics interviews?
No. Public rejects rote frameworks like HEART or AARRR. In a debrief, a panelist said: “If I hear ‘North Star metric’ without justification, I stop listening.” The expectation is not to name a framework, but to build a logical chain from user behavior to business outcome.
About the Author
Johnny Mai is a Product Leader at a Fortune 500 tech company with experience shipping AI and robotics products. He has conducted 200+ PM interviews and helped hundreds of candidates land offers at top tech companies.
Want to systematically prepare for PM interviews?
Read the full playbook on Amazon →
Need the companion prep toolkit? The PM Interview Prep System includes frameworks, mock interview trackers, and a 30-day preparation plan.