Anduril PM Interview: Behavioral Questions and STAR Examples

TL;DR

Anduril’s PM interviews test judgment under ambiguity, not just execution. The behavioral round is a proxy for decision-making in high-stakes environments — your stories must show prioritization amid uncertainty, not just outcomes. If your examples lack tension between trade-offs, you will fail the debrief.

Who This Is For

This is for product managers with 3–8 years of experience transitioning from consumer tech, defense-adjacent roles, or hardware/software hybrids who have passed Anduril’s recruiter screen and are preparing for the onsite behavioral loop. You’ve been told “tell me about a time” — but no one has explained that Anduril isn’t listening for leadership, they’re listening for command presence.

How Does Anduril Evaluate Behavioral Questions in PM Interviews?

Anduril uses behavioral questions to simulate command decisions — not leadership stories. In a Q3 HC meeting, a candidate was dinged because their “conflict resolution” story showed compromise, not escalation clarity. The panel ruled: “This person would delay calling in artillery.”

The problem isn’t your story — it’s your framing. Anduril PMs operate in environments where delayed decisions cost lives. Your example about launching a feature late isn’t about stakeholder management; it’s about whether you defaulted to consensus when you should have forced a call.

One interviewer told me: “We’re not hiring PMs. We’re hiring battlefield lieutenants who happen to work in software.”

Not “Did you lead?” but “Did you own the outcome?” That’s the filter.

At Google, you’re rewarded for aligning teams. At Anduril, you’re assessed on whether you’d make the hard call alone. A former HC lead once said: “If I can’t imagine this person on a midnight op with a malfunctioning sensor, they don’t pass.”

Your story must contain:

  • A moment of technical ambiguity
  • A decision made without full data
  • A consequence that was accepted, not reversed

Omit any of these, and the debrief will label you “low agency.”

What Are the Most Common Anduril PM Behavioral Questions?

Three questions dominate 80% of behavioral rounds:

  1. Tell me about a time you led a cross-functional team through a technical failure
  2. Describe a product decision you made with incomplete data
  3. When did you push back on engineering or leadership, and why?

These are not open-ended. Each maps to a core Anduril competency: crisis response, judgment under fire, and command autonomy.

In a February debrief, a candidate nailed the “incomplete data” question by describing a drone deployment where telemetry failed mid-flight. Instead of waiting for diagnostics, they used radar cross-section estimates to redirect the mission. The outcome was partial success — but the decision logic passed.

Contrast that with a failed candidate who discussed a mobile app A/B test with “inconclusive metrics.” That’s not incomplete data — that’s standard product work. Anduril doesn’t care about iterative optimization. They care about irreversible decisions.

Another trap: answering “push back on leadership” with a story about budget reallocation. That’s politics. They want stories where lives or missions were at stake — even if metaphorically. One successful candidate reframed a cloud billing dispute as a risk to customer continuity during a national emergency drill.

Not “What did you do?” but “What did you risk?” That’s the unspoken prompt.

How Should You Structure STAR Answers for Anduril?

STAR is table stakes. What Anduril actually uses is STAR-L: Situation, Task, Action, Result, Learning — but only if the learning reveals a permanent shift in decision-making.

In a post-interview review, a STAR answer was criticized not for being unclear, but because the “Learning” was: “I now involve more stakeholders.” That’s regression. At Anduril, growth means faster unilateral decisions, not more committees.

Your STAR-L must end with: “I now act earlier” or “I no longer wait for consensus.” Anything else signals risk aversion.

Scene: A candidate described rebooting a failed edge AI update in a live border surveillance system.

  • Situation: Cameras went dark during active monitoring
  • Task: Restore functionality in <15 minutes or lose tracking
  • Action: Overrode automated rollback protocol, manually flashed patch based on memory logs
  • Result: 80% sensor recovery; one unit damaged
  • Learning: “I now maintain offline firmware copies — and will bypass approval chains during critical failures”

That passed. The damaged unit wasn’t a failure — it was proof of acceptable risk.

Compare to a BAD example:

  • Learning: “I created a new escalation path with engineering leads”
    This implies the system should prevent such decisions in the future. That’s the opposite of Anduril’s culture.

Not “What went well?” but “What will you do differently, forever?” That’s your anchor.

What Behavioral Traits Do Anduril PM Interviewers Actually Look For?

They look for command velocity — how fast you move from ambiguity to action. Not charisma, not collaboration, not empathy.

In a hiring committee, a candidate with a flawless resume from SpaceX was rejected because their stories “lacked urgency.” One interviewer noted: “They explained trade-offs beautifully — but never said, ‘I decided.’” The final vote was 2–1 no-hire.

Anduril PMs are expected to operate like special forces: minimal briefing, maximum autonomy. Your interviewers are often ex-military or ex-intel. They listen for linguistic cues:

  • “I assessed and directed” — good
  • “We aligned and proceeded” — dead

One debrief sheet listed: “Candidate used ‘team’ 14 times, ‘I’ 3 times. Low ownership signal.”

They also assess risk articulation. Not whether you took risk — everyone claims that — but whether you can quantify it. A strong answer includes:

  • Probability estimates (“I judged failure likelihood at 40%”)
  • Downside bounds (“Worst case: 2-hour outage”)
  • Trade-off justification (“Lost data was less critical than mission continuity”)

A weak answer says: “It was risky, but necessary.” That’s hand-waving.

Anduril doesn’t want leaders who inspire. They want operators who decide.
Not “Did you collaborate?” but “Did you close the loop?”
Not “Were you effective?” but “Did you break symmetry?”
Not “Did you succeed?” but “Would you do it again?”

How Is the Behavioral Round Scored in Anduril’s Hiring Committee?

Each interviewer submits a written assessment using a 5-point scale:

  1. Strong no
  2. Leaning no
  3. Neutral
  4. Leaning yes
  5. Strong yes

Scores of 3 or below require written justification. A single 2 typically kills an offer unless overridden by a senior sponsor.

In a recent case, a candidate got two 4s, one 3, and one 2. The 2 came from a former Navy SEAL interviewer who wrote: “Candidate optimized for reusability in their API design story. That’s a peacetime mindset. We need wartime builders.”

The HC debated for 20 minutes. Final decision: no hire. The argument was: “We can’t have PMs designing for elegance. We need ones designing for survival.”

Bar raisers look for decision density — how many irreversible calls you made per story. One story with three high-stakes decisions beats three stories with one each.

They also flag outcome dependency. If your story’s success hinges on the result — e.g., “we increased conversion by 20%” — and you wouldn’t have done it if the metric failed, that’s not judgment. That’s gambling.

Judgment is: “I would make the same call even if it failed.” Say that, and you pass.

Not “What was the impact?” but “What would you own if it blew up?”
Not “Were you right?” but “Did you act?”
Not “Did the team follow?” but “Did you lead alone?”

Preparation Checklist

  • Conduct 3 mock interviews with PMs who’ve worked in defense, robotics, or real-time systems — not consumer apps
  • Rewrite each STAR story to include a moment of irreversible action
  • Remove all passive language: “we decided,” “it was agreed,” “leadership approved”
  • For each story, define the risk: probability, impact, alternative
  • Work through a structured preparation system (the PM Interview Playbook covers Anduril-specific command decision frameworks with real HC debrief examples)
  • Practice delivering stories in under 2.5 minutes — silence after is part of the test
  • Identify one story where you accepted permanent accountability, even if the outcome was mixed

Mistakes to Avoid

BAD: “I gathered input from engineering, design, and marketing before launching.”
This signals consensus dependence. At Anduril, gathering input is assumed. What they need to know is: when did you stop gathering and start deciding?

GOOD: “I heard concerns from security, but launched anyway because the border incursion required immediate deployment. I accepted the risk of a breach.”

BAD: “We failed to meet the deadline, but learned how to plan better.”
This shows failure aversion. Anduril wants: “I would prioritize speed again — lives depend on it.”

GOOD: “I cut two features to ship in 72 hours. One unit failed — I’d make the same call.”

BAD: “I collaborated with the team to resolve the outage.”
Vague. Who directed? Who owned the call?

GOOD: “I overrode the rollback timer and initiated manual recovery. The team executed — I led.”

FAQ

Do Anduril PM interviews focus more on hardware or software in behavioral questions?
They focus on consequence, not domain. A software outage that halts drone operations is equivalent to a hardware failure. If your story lacks real-world impact — e.g., downtime during active surveillance — it won’t pass, regardless of layer.

Should I use military terminology in my answers?
No. Do not say “mission,” “combat,” or “tactical” unless the context demands it. Anduril employees think this way, but they don’t perform it. Use plain language: “critical system,” “live environment,” “irreversible outcome.”

How long after the onsite will I get a decision?
Most candidates hear within 5 business days. Delays beyond 7 days usually indicate a split HC vote or a bar-raiser escalation. Silence isn’t rejection — but prolonged silence often is.


About the Author

Johnny Mai is a Product Leader at a Fortune 500 tech company with experience shipping AI and robotics products. He has conducted 200+ PM interviews and helped hundreds of candidates land offers at top tech companies.


Want to systematically prepare for PM interviews?

Read the full playbook on Amazon →

Need the companion prep toolkit? The PM Interview Prep System includes frameworks, mock interview trackers, and a 30-day preparation plan.