Supabase PM Analytical Interview: Metrics, SQL, and Case Questions
TL;DR
The Supabase PM analytical interview tests three non-negotiable skills: SQL fluency under time pressure, metric design that aligns with developer behavior, and case structuring that reflects real product constraints. Most candidates fail not because of technical gaps, but because they treat this as a generic tech PM screen instead of a product-led growth interview for a developer platform. Success requires demonstrating precision in instrumentation thinking and trade-off logic, not broad conceptual answers.
Who This Is For
This guide is for product managers with 2–7 years of experience applying to early-to-mid level PM roles at Supabase, particularly those transitioning from B2C or enterprise SaaS into developer-first infrastructure. If your background lacks exposure to usage-based pricing, API analytics, or self-serve funnels, this interview will expose those gaps — especially in how you define and defend metrics.
What does the Supabase PM analytical interview actually test?
Supabase evaluates whether you can think like a product-led growth (PLG) founder for a technical audience, not just a PM executing roadmaps. In a recent Q3 debrief, the hiring committee rejected a candidate from a top cloud vendor because they treated retention as a single metric rather than a cascade of developer behaviors — login frequency, project persistence, API call growth.
The interview is structured around three dimensions:
- SQL under constraint — You’ll write queries in 15–20 minutes against a realistic schema (e.g., projects, auth_sessions, api_logs).
- Metric design — You’ll be asked to define success for a new feature like Row Level Security (RLS) adoption.
- Case structuring — You’ll diagnose a drop in free-to-paid conversion across regions.
The test is not product intuition, but instrumentation judgment. Most candidates miss that Supabase cares less about your final answer than about how you isolate signal from noise in usage data.
In one HC session, a candidate was praised not for perfect SQL syntax, but for explicitly stating assumptions about null values in the auth schema — a detail that matters when tracking developer onboarding drop-offs. Supabase runs on observability, and so must your thinking.
How is the Supabase analytical round different from other PM interviews?
The Supabase PM analytical interview is not a Facebook-style execution screen or a Google product sense round. It is a stress test on your ability to translate developer behavior into measurable outcomes — in real time, with sparse data.
In a debrief last April, the hiring manager pushed back on advancing a candidate who had aced a pricing case, because they used “active users” without defining it operationally. At Supabase, “active” means: a project made ≥1 API call in the last 7 days. Anything vaguer fails.
Most PM interviews accept proxy metrics. Supabase does not.
Not abstraction, but specificity.
Not frameworks, but fidelity.
Not storytelling, but traceability.
For example, when asked to evaluate a new Auth UI flow, a strong candidate will immediately ask:
- How are we logging auth completion events?
- Are we tracking email verification latency?
- Is the event schema consistent across web and mobile SDKs?
Weak candidates jump to funnel conversion without verifying the underlying data. That’s not acceptable here.
Supabase’s product motion is data-informed by necessity — they can’t rely on sales teams to gather feedback. You must prove you can do the same.
What SQL skills do I actually need for the Supabase PM interview?
You must write executable SQL, not explain concepts. The bar is junior data analyst level: joins, filtering, aggregation, date arithmetic, and handling nulls — all under 20 minutes.
In a recent interview, candidates were given a schema with projects, users, and api_requests tables and asked:
“Find the 7-day retention rate of projects created in the last 30 days.”
A top-scoring response used a CTE to first identify projects created in the period, then checked for any API call activity on day 7. The candidate explicitly handled edge cases: projects with no API calls, or where the project was deleted before day 7.
Not “I’d use window functions,” but actual LAG() implementation.
Not “aggregate by week,” but DATE_TRUNC('day', created_at) alignment.
Not “join tables,” but specifying LEFT JOIN to preserve projects with zero activity.
The difference between pass and fail often comes down to one line: handling nulls in retention calculations. If projects that never made a second call silently vanish from your denominator (an INNER JOIN does exactly this), your retention number is inflated — and your judgment is questioned.
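A minimal sketch of that pattern, assuming illustrative tables projects(id, created_at) and api_requests(project_id, created_at) rather than Supabase’s actual schema:

```sql
-- Sketch only: 7-day retention for projects created in the last 30 days.
-- Table and column names are assumed for illustration.
WITH cohort AS (
  SELECT id, DATE_TRUNC('day', created_at) AS created_day
  FROM projects
  WHERE created_at >= CURRENT_DATE - INTERVAL '30 days'
    -- A fuller answer would also exclude projects created under 7 days ago
    -- and projects deleted before day 7, as the candidate above did.
),
day7_active AS (
  SELECT DISTINCT c.id
  FROM cohort c
  JOIN api_requests r
    ON r.project_id = c.id
   AND DATE_TRUNC('day', r.created_at) = c.created_day + INTERVAL '7 days'
)
SELECT
  -- LEFT JOIN keeps zero-activity projects in the denominator; COUNT(d.id)
  -- skips their NULLs, so they count as not retained instead of inflating the rate.
  COUNT(d.id)::numeric / NULLIF(COUNT(*), 0) AS day7_retention
FROM cohort c
LEFT JOIN day7_active d ON d.id = c.id;
```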
Supabase expects PMs to validate their own hypotheses. That starts with correct SQL.
How should I approach metric design questions for developer products?
You must design metrics that reflect developer progress, not just activity. Time-to-first-API-call matters more than DAU. Project resurrection rate matters more than churn.
In a case about RLS adoption, one candidate proposed “% of projects using RLS” — a surface-level metric. Another proposed:
- Adoption depth: Number of RLS policies per project
- Retention linkage: % of projects that keep RLS enabled after 14 days
- Error correlation: Rate of 403s post-RLS enablement (indicating misconfiguration)
The second candidate advanced because they treated adoption as a process, not a binary.
Not usage, but utility.
Not volume, but velocity.
Not adoption, but stickiness.
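The first two of those metrics might translate to SQL like this (a hypothetical sketch assuming an rls_policies(project_id, created_at, deleted_at) table, illustrative rather than Supabase’s actual schema):

```sql
-- Hypothetical sketch: RLS adoption depth and a 14-day stickiness proxy.
-- rls_policies(project_id, created_at, deleted_at) is an assumed table.
WITH adoption AS (
  SELECT project_id,
         COUNT(*) AS total_policies,
         COUNT(*) FILTER (WHERE deleted_at IS NULL) AS active_policies,
         MIN(created_at) AS first_enabled
  FROM rls_policies
  GROUP BY project_id
)
SELECT
  AVG(total_policies) AS avg_policy_depth,                -- adoption depth
  AVG((active_policies > 0)::int) AS share_still_enabled  -- stickiness proxy:
  -- "still enabled today" among projects that first enabled RLS 14+ days ago
FROM adoption
WHERE first_enabled <= CURRENT_DATE - INTERVAL '14 days';
```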
Supabase has an internal framework called the “Dev Progression Ladder” — from signup → first project → first API call → persistence → paid upgrade. Your metrics must ladder up to this.
When asked to measure success for a new Realtime feature, a strong answer would include:
- % of projects with active subscriptions
- Median latency of change propagation
- Drop-off after initial subscription
Because at Supabase, if a developer doesn’t see real-time updates within 5 seconds, they assume it’s broken — and disable it.
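The latency metric, for instance, might be sketched like this, assuming a hypothetical realtime_events log that records both commit and delivery timestamps:

```sql
-- Hypothetical sketch: change-propagation latency for Realtime.
-- realtime_events(project_id, committed_at, delivered_at) is an assumed table.
SELECT
  PERCENTILE_CONT(0.5) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM delivered_at - committed_at)
  ) AS median_latency_s,
  PERCENTILE_CONT(0.95) WITHIN GROUP (
    ORDER BY EXTRACT(EPOCH FROM delivered_at - committed_at)
  ) AS p95_latency_s
FROM realtime_events
WHERE delivered_at IS NOT NULL;  -- undelivered events deserve their own metric
```

The p95 matters as much as the median here: the 5-second perception threshold is about worst cases, not typical ones.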
How do I structure case questions in the Supabase PM interview?
Start with data, not hypotheses. In a case about declining free-to-paid conversion in Europe, a candidate began with:
“I’d check if the drop is in new signups, activation, or billing completion.”
Wrong.
The hiring manager interrupted: “What tables would you query first?”
The correct starting point:

```sql
SELECT DATE_TRUNC('week', created_at), COUNT(*)
FROM projects
WHERE region = 'EU'
  AND created_at >= '2024-01-01'
GROUP BY 1;
```
Then, layer in:
- Activation rate (projects with ≥1 API call within 24h; see the sketch after this list)
- Project deletion rate
- Payment method failure logs
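That activation-rate layer might look like the following sketch, reusing the assumed projects and api_requests tables from earlier:

```sql
-- Sketch: weekly activation rate (≥1 API call within 24h) for EU projects.
-- Table and column names are assumed, as above.
WITH eu_projects AS (
  SELECT p.id,
         DATE_TRUNC('week', p.created_at) AS week,
         EXISTS (
           SELECT 1
           FROM api_requests a
           WHERE a.project_id = p.id
             AND a.created_at BETWEEN p.created_at
                                  AND p.created_at + INTERVAL '24 hours'
         ) AS activated_24h
  FROM projects p
  WHERE p.region = 'EU'
    AND p.created_at >= '2024-01-01'
)
SELECT week, AVG(activated_24h::int) AS activation_rate
FROM eu_projects
GROUP BY week
ORDER BY week;
```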
Not root cause analysis, but data triage.
Not brainstorming, but querying.
Not segmentation, but isolation.
Supabase cases are reverse engineering problems. You’re not creating a new product — you’re diagnosing a live issue with limited context.
In a real debrief, a candidate lost points for proposing user interviews before checking log data. The feedback: “We have terabytes of behavioral data. Talk to users only after you’ve ruled out instrumentation or billing failures.”
Preparation Checklist
- Practice writing timed SQL on real datasets (use Supabase’s public schema or HackerRank’s PostgreSQL tracks). Aim for 15-minute fluency on joins, grouping, and date math.
- Memorize the Dev Progression Ladder: signup → auth → project → API call → persistence → paid. Align every metric to a stage.
- Build a personal cheat sheet of 5 common anti-patterns: e.g., mistaking project creation for activation, ignoring nulls in retention, using daily instead of rolling windows.
- Run through 3 full case simulations with a timer: diagnose a metric drop, propose a metric, write the SQL. Record yourself to spot vague language.
- Work through a structured preparation system (the PM Interview Playbook covers Supabase-style analytical cases with real debrief examples from infrastructure PM screens at Vercel, Stripe, and MongoDB).
- Study usage-based pricing models — especially how Supabase defines “compute units” and “bandwidth tiers.” Know where friction lives.
- Review API analytics concepts: rate limiting, error codes, latency distribution, and session tracking.
Mistakes to Avoid
BAD: “I’d look at user engagement.”
GOOD: “I’d query api_requests to calculate the 7-day active project rate, filtering for projects with at least 5 API calls to exclude exploratory use.”
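In PostgreSQL, that GOOD answer might be sketched as follows (table names assumed, as throughout):

```sql
-- Sketch of the GOOD answer: 7-day active project rate, excluding exploratory use.
WITH recent_calls AS (
  SELECT project_id, COUNT(*) AS calls_7d
  FROM api_requests
  WHERE created_at >= CURRENT_DATE - INTERVAL '7 days'
  GROUP BY project_id
)
SELECT
  (COUNT(*) FILTER (WHERE calls_7d >= 5))::numeric
    / NULLIF((SELECT COUNT(*) FROM projects), 0) AS active_project_rate_7d
FROM recent_calls;
```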
BAD: Proposing NPS as a success metric for a new Auth flow.
GOOD: Defining success as “% of users who complete email verification within 5 minutes of signup, with <2% failure rate from resend requests.”
BAD: Starting a case by listing possible causes: “Maybe pricing changed, or competitors launched.”
GOOD: “First, I’ll pull project creation and activation trends over the last 6 weeks to confirm the drop and isolate the stage where leakage occurs.”
FAQ
Do I need to know Supabase’s internal schema?
No, but you must ask clarifying questions about table structure and event definitions. In a 2023 interview, a candidate was told to assume a projects table with created_at, last_active, and tier columns — but failed to ask whether last_active was updated on API calls or UI visits. That unexamined assumption invalidated their analysis.
Is the SQL part live or take-home?
It is live, typically on CoderPad or a Google Doc, lasting 15–20 minutes. No autocomplete. You must write syntactically correct PostgreSQL — Supabase does not use MySQL. Expect 1–2 queries per session, often with follow-ups like “Now modify this for rolling 7-day windows.”
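That rolling-window follow-up might look like this sketch (table names assumed):

```sql
-- Hypothetical follow-up: projects active in a rolling 7-day window, per day.
SELECT
  d::date AS day,
  COUNT(DISTINCT r.project_id) AS active_projects_rolling_7d
FROM generate_series(
       CURRENT_DATE - INTERVAL '29 days', CURRENT_DATE, INTERVAL '1 day'
     ) AS d
LEFT JOIN api_requests r
  ON r.created_at >= d - INTERVAL '6 days'
 AND r.created_at <  d + INTERVAL '1 day'
GROUP BY 1
ORDER BY 1;
```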
How deep should I go on pricing in analytical cases?
Go deep enough to identify friction points. If diagnosing free-to-paid drop-off, you must consider: failed payments due to region-specific methods (e.g., SEPA in EU), credit card decline rates, and whether free tier limits are too restrictive. One candidate scored highly by suggesting a check on “projects hitting the free tier API limit but not upgrading — indicating pricing mismatch.”
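That last check might be sketched as follows; the monthly_usage table and the 500k call limit are assumptions for illustration, not Supabase’s actual limits:

```sql
-- Hypothetical check: free-tier projects repeatedly hitting the API limit
-- without upgrading. Table names and the limit value are assumed.
SELECT u.project_id, COUNT(*) AS months_at_limit
FROM monthly_usage u
JOIN projects p ON p.id = u.project_id
WHERE p.tier = 'free'
  AND u.api_calls >= 500000
GROUP BY u.project_id
HAVING COUNT(*) >= 2;  -- capped in 2+ months: a pricing-mismatch signal
```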
About the Author
Johnny Mai is a Product Leader at a Fortune 500 tech company with experience shipping AI and robotics products. He has conducted 200+ PM interviews and helped hundreds of candidates land offers at top tech companies.
Want to systematically prepare for PM interviews?
Read the full playbook on Amazon →
Need the companion prep toolkit? The PM Interview Prep System includes frameworks, mock interview trackers, and a 30-day preparation plan.