Early-Stage PMs: How to Define Metrics When You Have No Data
TL;DR
You define metrics by anchoring them to the riskiest assumption, not to existing data. Use proxy signals, qualitative triangulation, and time‑boxed experiments to create a measurable hypothesis before any users exist. Communicate uncertainty openly; stakeholders respect a clear learning agenda more than false precision.
Who This Is For
This is for product managers at pre‑seed or seed startups who are tasked with setting success criteria for a product that has zero live users, no analytics instrumentation, and often only a mock‑up or a prototype. If you are interviewing for an early‑stage PM role or have just joined a founding team and need to convince investors that you can measure progress without a dashboard, this article applies to you.
How do I choose a north star metric when there is no user data?
The north star must reflect the core value hypothesis, not a vanity number you can’t yet measure. In a Q2 debrief at a health‑tech startup, the hiring manager rejected a candidate who proposed “daily active users” as the north star for a product that hadn’t built its MVP; the manager said, “You’re optimizing for a metric you can’t influence until after launch, which tells me you don’t understand risk prioritization.”
Instead, the team adopted “problem‑solution fit score” derived from a structured interview rubric: each interviewee rated pain severity (1‑5) and solution relevance (1‑5); the product of the two averages became the proxy north star. This gave the team a single number that moved as they refined the problem statement and prototype, even before any code was written.
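To make the arithmetic concrete, here is a minimal sketch of how such a rubric could be scored, assuming each interview is captured as a pair of 1‑5 integer ratings; the sample data below is invented for illustration.

```python
from statistics import mean

# Each structured interview yields a pair of 1-5 ratings:
# (pain severity, solution relevance). Values here are invented.
interviews = [
    (5, 4), (4, 4), (3, 2), (5, 5), (4, 3),
    (2, 3), (5, 4), (4, 5), (3, 3), (5, 3),
]

pain = [severity for severity, _ in interviews]
relevance = [rel for _, rel in interviews]

# Proxy north star per the rubric above: the product of the
# two averages (theoretical range 1-25).
fit_score = mean(pain) * mean(relevance)
print(f"Problem-solution fit score: {fit_score:.1f} (max 25)")
```

Rerunning this after each round of interviews gives a single trend line the team can watch as the problem statement and prototype are refined.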
The insight here is that a north star in a data‑void environment is a leading indicator of future behavior, not a lagging outcome. Treat it as a hypothesis‑tracking device: define the assumption it validates, set a target threshold, and plan to replace it with a behavioral metric once you have sufficient traffic.
What proxy signals can I use to validate assumptions before launch?
Proxy signals are observable behaviors that correlate with the outcome you care about, even if they are not the outcome itself. During a hiring committee discussion for a fintech founder, a senior PM argued that “time to complete a mock‑up task in a usability test” was a stronger proxy for future conversion than stated interest scores. The committee agreed after seeing data from a prior prototype where users who finished the task in under 90 seconds were three times more likely to sign up in the eventual beta.
You can generate three classes of proxies: (1) effort‑based (time, clicks, steps to complete a core action), (2) expression‑based (willingness to pre‑sign up, give email, or commit a small resource), and (3) social‑based (referrals, shares, or advocacy in a closed community). Pick the proxy that has the strongest logical link to your ultimate metric and that you can measure with the least friction.
A counter‑intuitive observation is that the most reliable proxy often involves a small cost to the user; if they are willing to incur that cost, they are signaling genuine interest rather than polite feedback.
How do I set up experimentation frameworks with zero traffic?
Experimentation does not require live traffic; it requires a controlled environment where you can manipulate the independent variable and observe the proxy. In a seed‑stage SaaS company, the head of product ran a “concierge test” where two groups of five prospects each received either a manual onboarding walkthrough (variant A) or a self‑serve demo video (variant B).
The team measured the proportion that agreed to a paid pilot after the interaction. The variant with the manual walkthrough achieved 60% conversion versus 20% for the video, giving a clear direction before any code was written.
Structure your experiment as follows: (1) define the hypothesis, (2) select a proxy metric, (3) design variants that differ in exactly one factor, (4) recruit a homogeneous sample (aim for 20‑30 participants per variant to detect large effects), (5) run the test for a fixed period (e.g., one week), and (6) apply a simple decision rule agreed in advance (e.g., if variant A exceeds variant B by 30 percentage points, commit to variant A).
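Applied to the concierge numbers above, the decision rule in step (6) can be written down explicitly. The sketch below assumes hand-tallied counts; the Fisher exact check at the end is an addition to show how noisy five-person arms are, not part of the framework itself.

```python
from scipy.stats import fisher_exact

# Concierge-test tallies, mirroring the example above
# (5 prospects per variant; counts are illustrative).
a_converted, a_total = 3, 5   # variant A: manual walkthrough (60%)
b_converted, b_total = 1, 5   # variant B: self-serve video (20%)

rate_a = a_converted / a_total
rate_b = b_converted / b_total

# Pre-registered decision rule from step (6): A must beat B
# by at least 30 percentage points.
if rate_a - rate_b >= 0.30:
    print("Commit to the manual walkthrough (variant A).")
else:
    print("No clear winner; redesign or rerun the test.")

# Sanity check: with arms this small, Fisher's exact test shows
# how weak the statistical evidence still is.
_, p_value = fisher_exact([
    [a_converted, a_total - a_converted],
    [b_converted, b_total - b_converted],
])
print(f"Fisher exact p-value: {p_value:.2f}")  # ~0.52 here
```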
The organizational‑psychology principle at play is the “illusion of validity”: teams tend to be overconfident in their intuition when data is scarce. A time‑boxed experiment forces a disciplined check against that bias.
When should I rely on qualitative insights versus quantitative proxies?
Qualitative insights are essential for uncovering unknown unknowns; quantitative proxies are essential for tracking known knowns. In a hiring manager’s debrief for a consumer‑app PM, the manager recalled a candidate who presented only interview quotes claiming users loved the concept. The manager said, “Quotes are useful for inspiration, but they don’t tell me whether willingness to pay exists.” The candidate was passed over because they lacked any proxy to test price sensitivity.
Use qualitative work to generate the list of assumptions and to craft the proxy; then switch to quantitative tracking as soon as you can collect the proxy at scale. A useful rule of thumb: if your sample size for a qualitative finding is below 10, treat it as exploratory; if you can collect the proxy from 30+ homogeneous users, treat it as confirmatory.
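One way to sanity-check that 30+ rule of thumb is to look at the confidence interval around an observed proxy rate. The sketch below uses a plain normal approximation, and the counts are invented.

```python
from math import sqrt

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Approximate 95% CI for a proportion (normal approximation;
    rough at small n, but good enough for a gut check)."""
    p = successes / n
    margin = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical proxy: 18 of 30 prototype viewers clicked "pre-sign up".
low, high = proportion_ci(18, 30)
print(f"n=30: 60% observed, 95% CI roughly [{low:.0%}, {high:.0%}]")

# The same 60% rate from only 10 users is far less actionable.
low, high = proportion_ci(6, 10)
print(f"n=10: 60% observed, 95% CI roughly [{low:.0%}, {high:.0%}]")
```

At n=10 the interval spans most of the possible range, which is why a handful of enthusiastic responses belongs in the exploratory bucket.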
Counter‑intuitively, over‑reliance on qualitative data in early stages often leads to “false consensus”: the team believes it has validated demand because a few enthusiastic users said so, while the broader market remains indifferent.
How do I communicate metric uncertainty to stakeholders and investors?
Stakeholders respect a transparent learning agenda more than a false sense of certainty.
In a Series A pitch rehearsal, a founder presented a slide that said, “Our north star is projected to reach 10k DAU by month six, based on current user growth.” The VC partner interrupted, “You have zero users today; that’s a guess, not a forecast.” The founder then revised the slide to show three columns: assumption, proxy metric, and target threshold (e.g., “Problem‑solution fit score ≥ 3.5 from 30 interviews”). The VC noted the clarity and said, “Now I can see what you’re learning and when you’ll pivot.”
Communicate by: (1) stating the assumption explicitly, (2) showing the proxy you are using to measure it, (3) declaring the success threshold and the time box, and (4) outlining the next metric you will adopt once you have sufficient data. Keep the language factual, avoid hedging words like “maybe” or “might,” and replace them with “if–then” statements (e.g., “If the proxy reaches X, we will invest Y”).
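If you track these in a tool rather than a slide, the three columns map naturally onto a small record type. The sketch below is one possible shape; the field names and the clinician example are invented illustrations, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class LearningItem:
    assumption: str   # what must be true for the product to work
    proxy: str        # how it is measured before launch
    threshold: str    # success criterion plus time box
    next_step: str    # the if-then commitment

item = LearningItem(
    assumption="Clinicians feel acute pain scheduling follow-ups",
    proxy="Problem-solution fit score from structured interviews",
    threshold=">= 3.5 across 30 interviews, within four weeks",
    next_step="If met, build the MVP and switch to pre-sign-up rate; "
              "if missed, rerun discovery on an adjacent segment",
)

for field, value in vars(item).items():
    print(f"{field.upper():<11} {value}")
```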
Preparation Checklist
- Identify the riskiest assumption behind your product vision and write it as a testable hypothesis.
- Select a proxy metric that has a logical, observable link to that assumption and can be measured with minimal friction.
- Design a time‑boxed experiment (e.g., one‑week concierge test) with a clear decision rule based on the proxy.
- Run at least two variants with a homogeneous sample of 20‑30 participants each to reduce noise.
- Document the assumption, proxy, threshold, and outcome in a one‑page learning memo for stakeholders.
- Work through a structured preparation system (the PM Interview Playbook covers defining proxy metrics with real debrief examples).
- Review your metric communication with a trusted peer; ask whether they can articulate the assumption and the next step without jargon.
Mistakes to Avoid
- BAD: Choosing a metric that requires data you cannot collect (e.g., “monthly recurring revenue” before you have a billing system).
- GOOD: Start with a leading indicator like “percentage of interviewees who express willingness to pay $X” that you can gather through a simple survey.
- BAD: Treating qualitative interview quotes as proof of demand and ignoring any quantitative validation.
- GOOD: Use quotes to shape the proxy, then run a small experiment to measure the proxy at scale (e.g., track pre‑sign‑up clicks after presenting the prototype).
- BAD: Presenting a metric forecast as a certainty to investors without exposing the underlying assumptions.
- GOOD: Show the assumption, the proxy, the target, and the timeline; explicitly state what will trigger a pivot or a double‑down.
FAQ
How long should I wait before replacing a proxy metric with a real behavioral metric?
Replace the proxy as soon as you can collect the behavioral data with sufficient statistical power—typically after you achieve 100+ active users or after a stable conversion funnel emerges. Do not wait for perfection; wait for enough signal to make the proxy redundant.
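For a back-of-the-envelope sense of “sufficient statistical power,” the standard two-proportion sample-size approximation helps; the baseline and target rates below are assumptions for illustration.

```python
from math import ceil, sqrt

def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96,  # 95% confidence
                z_beta: float = 0.84):  # 80% power
    """Approximate sample size per arm for a two-proportion z-test."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# E.g., detecting a lift from 20% to 40% activation:
print(n_per_group(0.20, 0.40), "users per group")  # ~82
```

Smaller lifts inflate the requirement quickly, which is why the 100+ figure is a floor rather than a guarantee.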
Can I use industry benchmarks as my north star when I have no data?
Benchmarks are useful for sanity‑checking, but they are not a substitute for a hypothesis‑driven metric. If you adopt a benchmark north star without tying it to your specific assumption, you risk optimizing for something that may not matter for your product’s unique value chain.
What if my proxy metric moves in the opposite direction of what I expect?
Treat an unexpected direction as a signal that your assumption is flawed. Immediately revisit the assumption, run a follow‑up qualitative session to understand why, and generate a new proxy. The goal is learning, not validation of a pre‑set target.
Ready to build a real interview prep system?
Get the full PM Interview Prep System →
The book is also available on Amazon Kindle.