Rappi New Grad PM Interview Prep and What to Expect 2026

TL;DR

Rappi’s new grad PM interviews test execution under ambiguity, not theoretical frameworks. Candidates fail not from lack of answers, but from misreading the company’s bias toward scrappy, metrics-driven decision-making. The process takes 21–28 days, includes 4 rounds, and hinges on proving you can ship fast with minimal data.

Who This Is For

This is for new grads from Latin American or global universities targeting entry-level product roles at Rappi in 2026. You have internship experience in tech, startups, or operations, and you’re fluent in Spanish and English. You’re not applying to FAANG by default—you want hypergrowth, real P&L impact, and to build products for 50M+ users in emerging markets.

How many rounds are in the Rappi new grad PM interview?

Rappi’s new grad PM process has four rounds: recruiter screen (30 minutes), take-home product challenge (48-hour window), technical + metrics interview (60 minutes), and a behavioral loop with two PMs (90 minutes total). The entire cycle lasts 21 to 28 days from first contact to decision.

In a Q3 2025 debrief, the hiring manager rejected a candidate who aced the take-home but stalled in the behavioral round when asked to describe a failed launch. The candidate said, “We didn’t measure impact,” and left it there. The hiring committee (HC) noted: “That’s not ownership. That’s excuse-making.” Ownership at Rappi means diagnosing why you failed, not just admitting it.

Not all PM interviews are the same here. Unlike Google, where frameworks dominate, Rappi evaluates how you operate when the playbook doesn’t exist. The take-home isn’t about polish—it’s about logic under time pressure. One candidate submitted a 5-slide deck with handwritten charts. It passed because the metric hypothesis was tight: “If we reduce onboarding steps from 7 to 4, conversion jumps 18%—based on internal funnel data from Medellín pilots.”

The technical round is not coding. It’s SQL + metrics. You’ll write one query live (e.g., “Find users who churned after first purchase”) and explain how you’d measure success for a feature like Rappi’s subscription tier. You don’t need 95% accuracy. You need to show you understand cohort decay and confounding variables.
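You can rehearse the live SQL prompt locally. The sketch below runs the “churned after first purchase” query against a made-up one-table schema (Rappi’s real tables and column names are not public), using Python’s `sqlite3` so it executes as-is; the 90-day churn window and the reference date are assumptions you would state out loud in the interview.

```python
import sqlite3

# Hypothetical schema for practice only -- not Rappi's actual data model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (user_id INTEGER, ordered_at TEXT);
-- user 1: one purchase long ago, never returned -> churned after first purchase
INSERT INTO orders VALUES (1, '2025-01-05');
-- user 2: repeat purchaser -> not churned
INSERT INTO orders VALUES (2, '2025-01-06'), (2, '2025-02-01');
-- user 3: single purchase, but recent -> too early to call churned
INSERT INTO orders VALUES (3, '2025-06-01');
""")

# "Churned after first purchase" = exactly one order, and that order is more
# than 90 days before the reference date. The window is a judgment call;
# saying why you chose it is part of the answer.
query = """
SELECT user_id
FROM orders
GROUP BY user_id
HAVING COUNT(*) = 1
   AND MAX(ordered_at) < DATE('2025-07-01', '-90 days')
"""
churned = [row[0] for row in conn.execute(query)]
print(churned)  # -> [1]
```

Note that the recovery matters as much as the query: flagging the arbitrary 90-day cutoff unprompted is exactly the kind of hygiene the round rewards.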

What type of product cases does Rappi ask new grads?

Rappi assigns real product problems from its own past—not abstract “design a feature for X” cases. You’ll get prompts like: “Improve first-time user retention for RappiFood in Guadalajara” or “Design a re-engagement flow for dormant RappiPay users.”

In a 2024 hiring committee debate, two PMs argued over a candidate who proposed a referral program for the retention case. One said, “It’s lazy. Anyone can say ‘referrals.’” The other countered: “But he tied it to a 15-day experiment, offered a $2 credit cap, and excluded users who’d ordered in the past 30 days. That’s specificity.” The candidate was approved.

The issue isn’t idea volume—it’s constraint navigation. The best answers start with: “What’s the bottleneck?” not “Here are five solutions.” One top-scoring candidate mapped the onboarding funnel first, then identified step 4 (address validation) as the drop-off point. Their solution wasn’t a new feature—it was simplifying the UI and pre-filling data from phone GPS. The debrief comment: “No code, high impact. That’s Rappi speed.”

Not creativity, but calibration. Rappi doesn’t want moonshots. It wants 5% improvements at scale. When asked to design a feature for RappiGuard (fraud prevention), the winning answer wasn’t AI detection—it was adding a one-tap “I’m the buyer” button post-login, reducing false positives by 12% in testing.

You won’t get market-sizing cases. You will get behavioral variants: “Tell me about a time you influenced without authority.” But the expectation isn’t storytelling—it’s evidence of speed. One candidate said, “I got engineering to prioritize my bug fix by showing it cost $8K/day in lost deliveries.” The interviewer responded: “Show me the LTV model you used.” The candidate couldn’t. Red flag.

How technical is the Rappi new grad PM interview?

The technical bar is light on engineering, heavy on data. You must write basic SQL (SELECT, WHERE, JOIN, GROUP BY) and interpret A/B test results. You won’t touch Python or ML pipelines. You will be asked to define statistical significance and explain p-hacking.

In a 2025 interview, a candidate claimed their feature increased retention by 22%. When asked, “How many users were in the test?” they said, “Around 10K.” Follow-up: “What was the confidence interval?” Blank stare. The interviewer ended the call early. The debrief summary: “Can’t defend results. Not PM-ready.”
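The confidence-interval question the candidate fumbled takes five lines to answer. This is a minimal sketch using the standard normal approximation for the difference of two proportions; the arm sizes and retention rates are illustrative, not from any real Rappi test.

```python
import math

def two_proportion_ci(x_a, n_a, x_b, n_b, z=1.96):
    """95% CI for the lift (p_b - p_a), normal approximation."""
    p_a, p_b = x_a / n_a, x_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Illustrative: ~10K users split evenly; control retains 30%, treatment 33%.
lo, hi = two_proportion_ci(1500, 5000, 1650, 5000)
print(f"lift: 3.0pp, 95% CI: [{lo:.4f}, {hi:.4f}]")
# If the interval excludes 0, the lift is significant at the 5% level.
```

Being able to produce this back-of-envelope check—arm sizes, rates, interval—is what “can defend results” means in the debrief.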

The SQL question is straightforward. Example: “Write a query to find users who ordered from RappiStore but not RappiFood in the last 30 days.” You’re expected to self-correct if you miss an edge case. One candidate forgot to filter for “last 30 days” initially. When prompted, they fixed it and said, “I’d add a comment in production to log timezone handling.” That recovery scored points.
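That example query, including the “last 30 days” filter the candidate initially missed, can be sketched like this. The schema is invented for practice, the reference date is hard-coded as an assumption, and `sqlite3` is used only so the SQL runs end to end.

```python
import sqlite3

# Toy schema -- not Rappi's real tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (user_id INTEGER, vertical TEXT, ordered_at TEXT);
INSERT INTO orders VALUES
  (1, 'RappiStore', '2025-06-20'),   -- Store only, recent: should match
  (2, 'RappiStore', '2025-06-22'),
  (2, 'RappiFood',  '2025-06-25'),   -- both verticals: should not match
  (3, 'RappiStore', '2025-03-01');   -- Store, but outside 30 days: no match
""")

# Reference date fixed at '2025-07-01' for reproducibility; in production
# you'd use the DB clock and be explicit about timezone handling.
query = """
SELECT DISTINCT o.user_id
FROM orders o
WHERE o.vertical = 'RappiStore'
  AND o.ordered_at >= DATE('2025-07-01', '-30 days')
  AND NOT EXISTS (
    SELECT 1 FROM orders f
    WHERE f.user_id = o.user_id
      AND f.vertical = 'RappiFood'
      AND f.ordered_at >= DATE('2025-07-01', '-30 days')
  )
"""
users = [row[0] for row in conn.execute(query)]
print(users)  # -> [1]
```

The `NOT EXISTS` anti-join is the edge case to get right: a user who ordered RappiFood 31 days ago still counts, so the date filter must appear in the subquery too.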

Not technical depth, but technical hygiene. Rappi PMs work with data scientists daily. You don’t need to build models, but you must know what overfitting looks like. In the metrics round, you might get: “Our A/B test shows a 5% increase in checkout completion, but overall revenue dropped. Why?” The expected answer includes: “Check for substitution effects—maybe users are buying cheaper items.”
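The “checkout up, revenue down” puzzle is just a decomposition: revenue = sessions × conversion × average order value, and a lift in one factor can be swamped by a drop in another. The numbers below are invented to make the substitution effect concrete.

```python
# Toy decomposition, illustrative numbers only:
# revenue = sessions x checkout conversion x average order value (AOV).
sessions = 100_000
control = {"conv": 0.200, "aov": 12.00}
variant = {"conv": 0.210, "aov": 11.20}  # +5% conversion, but cheaper baskets

rev_control = sessions * control["conv"] * control["aov"]
rev_variant = sessions * variant["conv"] * variant["aov"]
print(round(rev_control), round(rev_variant))  # -> 240000 235200
```

Conversion rose 5%, AOV fell ~6.7%, so total revenue dropped—exactly the substitution story the interviewer is fishing for.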

You won’t be asked system design. You will be asked to sketch a dashboard. Example: “What metrics would you track for a new RappiBike courier onboarding flow?” Top answer: “Time to first delivery, acceptance rate of first 5 orders, support ticket volume.” Not NPS, not “ease of use.”

One candidate listed “user satisfaction” as a metric. Interviewer: “How do you measure that?” Candidate: “Survey.” Interviewer: “When?” Candidate: “After onboarding.” Feedback: “Too late. By then, the damage is done.” The bar is immediacy. Actionable metrics now, not opinions later.

What behavioral questions do Rappi PM interviewers ask?

Rappi’s behavioral questions target ownership, speed, and learning from failure. The big three are: “Tell me about a time you shipped something fast with limited data,” “How do you handle conflict with engineering?” and “Describe a project that failed. What did you do?”

In a 2024 debrief, a candidate said they “collaborated with engineering” to launch a feature. The HC pushed back: “That’s not conflict. That’s harmony. We want the messy part.” The candidate then described how they escalated a timeline dispute to the director after engineers deprioritized their task. They shared the user impact model and won. That’s the level of detail Rappi wants.

Not maturity, but urgency. “Shipped fast” doesn’t mean “we took six weeks.” It means “we launched in 72 hours with a manual workaround.” One candidate described using WhatsApp to route early users to a dummy app while the real one was in dev. They tracked conversions in a spreadsheet. The interviewer said: “That’s Rappi energy.”

Learning from failure isn’t reflection—it’s iteration. A strong answer doesn’t say “We learned users didn’t like the color.” It says: “We assumed users wanted one-click reordering. We were wrong. We then added a ‘favorite items’ carousel, which lifted reorders by 9%.” Data links failure to fix.

One candidate said, “My project failed because stakeholders weren’t aligned.” Red flag. The PM owns alignment. Better: “I didn’t loop in legal early enough. We paused for 10 days. Now I send pre-kickoff checklists to all functions.”

Rappi also asks: “Tell me about a time you used data to influence a decision.” Top answer: “I noticed 40% of cart abandonments happened at tip selection. I ran a test hiding the tip prompt until checkout confirmation. Conversion increased 6%. We scaled it to three cities.” Specific, causal, scaled.

How should I prepare for the Rappi PM take-home challenge?

The take-home is a 48-hour product task: improve retention, grow activation, or reduce churn for a real Rappi vertical. You submit a short deck (max 6 slides) with problem framing, solution, metrics, and rollout plan. No design specs. No wireframes.

In 2025, a candidate submitted a 12-slide deck with animations. It was not reviewed. The instruction was “max 6 slides.” Rule-breaking is a rejection. Another candidate submitted 4 slides—clean, one metric hypothesis, a phased test plan. Hired.

The mistake most make: starting with solutions. Rappi wants problem validation first. One top submission began: “We’re assuming low retention is due to discovery. But data shows 68% of users who try 3+ stores in week one stay active. Hypothesis: the real issue is narrow initial usage.” That reframing earned top marks.

Not completeness, but clarity. You don’t need every edge case. You need one strong insight. Example: a candidate noticed that users who added a payment method before first order had 2.3x higher LTV. Their solution: a “Pay Once, Skip Lines” nudge during signup. The rollout plan: test in Santiago with 5% of users, track 7-day retention.

You’ll be asked to walk through your thinking in the next round. If your deck says “improve onboarding,” but you can’t explain which step has the highest drop-off, you fail. One candidate wrote, “We’ll simplify onboarding.” In the interview, they couldn’t say how many steps it had. Immediate no-hire.

Work through a structured preparation system (the PM Interview Playbook covers Rappi-style take-homes with real debrief examples from 2024 cycles). The playbook’s “Problem Laddering” framework—moving from symptom to root cause—is what separates flashy 6-slide decks from real insight.

Preparation Checklist

  • Study Rappi’s app: use RappiFood, RappiPay, RappiStore in Mexico, Colombia, Brazil. Note friction points.
  • Practice 3 real SQL queries: cohort retention, churn identification, conversion rate by segment.
  • Internalize 3 key metrics: weekly active users (WAU), cost per acquisition (CPA), and lifetime value (LTV).
  • Prepare 4 stories: one each for speed, conflict, failure, and data influence. Use the STARL format (Situation, Task, Action, Result, Learning).
  • Review A/B testing fundamentals: significance, duration, guardrail metrics.
  • Work through a structured preparation system (the PM Interview Playbook covers Rappi-style take-homes with real debrief examples from 2024 cycles).
  • Time yourself on a mock take-home: 48 hours max, 6 slides, no fluff.
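For the first checklist item—cohort retention SQL—here is one practice sketch: week-1 retention by signup-week cohort, on an invented two-column table, run through `sqlite3` so it executes. The 7–13 day window for “came back in week 1” is an assumption worth defending.

```python
import sqlite3

# Invented practice schema, not Rappi's.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (user_id INTEGER, ordered_at TEXT);
INSERT INTO orders VALUES
  (1, '2025-01-01'), (1, '2025-01-09'),  -- back 8 days later: retained
  (2, '2025-01-02'),                     -- never returned
  (3, '2025-01-08'), (3, '2025-01-15');  -- back 7 days later: retained
""")

# For each user's first order, did they order again 7-13 days later?
# Grouped by the ISO-ish week of the first order.
query = """
WITH firsts AS (
  SELECT user_id, MIN(ordered_at) AS first_order
  FROM orders GROUP BY user_id
)
SELECT STRFTIME('%Y-%W', f.first_order) AS cohort_week,
       COUNT(*) AS cohort_size,
       SUM(EXISTS (
         SELECT 1 FROM orders o
         WHERE o.user_id = f.user_id
           AND JULIANDAY(o.ordered_at) - JULIANDAY(f.first_order)
               BETWEEN 7 AND 13
       )) AS retained_week1
FROM firsts f
GROUP BY cohort_week
ORDER BY cohort_week
"""
rows = list(conn.execute(query))
for row in rows:
    print(row)
```

Swapping the `BETWEEN 7 AND 13` bounds gives you the churn-identification variant from the same checklist item.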

Mistakes to Avoid

BAD: Submitting a take-home with five solutions and no prioritization.

GOOD: Focusing on one high-impact lever with a testable hypothesis and clear success metric.

BAD: Saying “I collaborated with the team” in behavioral questions.

GOOD: Naming the engineer who blocked your launch and explaining how you realigned incentives using data.

BAD: Writing a perfect, polished deck with no rough edges.

GOOD: Submitting a lean, logical deck that shows your thinking—even if slides look basic.

FAQ

What’s the salary for a new grad PM at Rappi in 2026?

Base salary ranges from $48K–$62K USD annually, depending on location. Mexico City and Bogotá are at the lower end. São Paulo and Buenos Aires are higher. Equity is rare for new grads. Total comp rarely exceeds $70K. The trade-off is velocity, not pay.

Do Rappi PM interviews include case studies on new markets or verticals?

No. Cases are always tied to existing Rappi products. You won’t get “Enter Nigeria with Rappi.” You will get “Improve RappiBank’s credit uptake in Monterrey.” The focus is operational excellence, not strategy theater.

Is fluency in Spanish required for new grad PM roles at Rappi?

Yes. Business meetings, user research, and cross-functional syncs are in Spanish. English is used only in global executive updates. If you can’t conduct a product review in Spanish, you won’t pass the behavioral loop. Bilingualism isn’t a plus—it’s table stakes.


Ready to build a real interview prep system?

Get the full PM Interview Prep System →

The book is also available on Amazon Kindle.