Title: Anthropic SDE Onboarding and First 90 Days Tips 2026
TL;DR
The first 90 days at Anthropic as a software engineer are not about coding output — they’re about calibration. Success hinges on absorbing the company’s operational tempo, navigating unstructured ownership, and aligning with long-term safety goals, not immediate feature delivery. Your performance is evaluated not on lines of code, but on judgment, prioritization, and systems thinking within high-stakes AI development.
Who This Is For
This is for newly hired SDEs at Anthropic, or those with signed offers, who are preparing for onboarding in 2026. It’s also useful for engineers transitioning from FAANG or startup environments who underestimate how differently engineering velocity, ownership, and impact are defined at a safety-first AI lab. You’re likely earning between $305,000 and $468,000 total compensation, based on Levels.fyi data from recent hires, and need to shift from output-driven execution to context-driven contribution.
What does Anthropic onboarding actually look like for SDEs?
Onboarding at Anthropic lasts four weeks but functions less like structured training and more like a compressed immersion into operational ambiguity. There’s no curated track — just access to internal docs, sandboxed environments, and a rotating set of 1:1s with senior engineers.
In Q2 2025, a new hire spent three days just understanding why their access to the inference serving stack was gated behind a security review no one had mentioned. That wasn’t a failure — it was the point. The system doesn’t onboard you; you reverse-engineer your way in.
This isn’t chaotic — it’s intentional. Anthropic assumes you can parse incomplete information, ask precise questions, and tolerate silence. Not every team uses the same onboarding checklist. Some provide a week-long sprint on model evaluation tooling; others expect you to shadow an L5 for two weeks and infer your role.
The real test isn’t technical fluency — it’s discerning what matters. The engineers who succeed are not those who complete every tutorial, but those who identify high-leverage gaps in tooling or process within the first 10 days.
Not a lack of structure, but the demand for self-directed structure — that’s the hidden filter.
> 📖 Related: Anthropic PM case study interview examples and framework 2026
How should I prioritize in my first 30 days as an SDE at Anthropic?
Your first 30 days should prioritize context acquisition over contribution. Shipping code in week two is a red flag, not a win. In a Q4 2025 hiring committee debrief, a new hire was flagged not for slow output, but for building a dashboard no one requested — a classic case of FAANG muscle memory misfiring.
Anthropic measures early impact through relevance, not velocity. A useful metric: by day 15, you should be able to explain, in one paragraph, how your team’s work ladders up to model safety or alignment infrastructure. If you can’t, you’re optimizing for the wrong inputs.
Focus on three things:
- Map the data flow of one core system (e.g., how prompts enter the model, how outputs are evaluated).
- Identify the top three pain points senior engineers complain about in meetings.
- Attend at least two cross-functional reviews — even if you don’t speak.
The problem isn’t inactivity — it’s premature optimization. Not effort, but alignment — that’s the currency.
What technical systems will I need to learn immediately?
You must master four systems within the first 21 days: the internal model evaluation framework, the experiment tracking platform, the safety guardrail deployment pipeline, and the distributed training telemetry suite. These are not optional — they’re the scaffolding of daily work.
In a 2025 postmortem, an SDE spent two weeks debugging a latency issue, only to learn the root cause was logged in the evaluation dashboard — a tool they hadn’t accessed. That delay wasn’t penalized, but it revealed a pattern: engineers who skip tool literacy create invisible debt.
The evaluation framework, codenamed “Critic,” is where most early contributions happen. It’s not glamorous — you’re writing test cases for model behavior, not building APIs. But it’s where safety bugs are caught. Ignoring it means operating blind.
The experiment tracker, “Trialhead,” is version-controlled and tied to model checkpoints. If you run a test without logging it there, it effectively didn’t happen. One engineer’s optimization was dismissed in a review because the data lived only on their local machine.
Not coding, but instrumentation — that’s where trust is built.
> 📖 Related: Anthropic Sde System Design Interview What To Expect
How is performance evaluated in the first 90 days?
Performance is evaluated through three lenses: judgment in ambiguity, precision in communication, and consistency in systems thinking. Output volume is irrelevant.
During a Q1 2026 HC meeting, a manager pushed to extend a new hire’s ramp period. The reason? The engineer had submitted five small PRs but had not once questioned the assumptions behind their task. “They’re executing, not thinking,” the EM said. The extension was approved.
Your first 90-day review isn’t a formality — it’s a structured assessment by your manager, your mentor, and one cross-functional peer. You’ll be graded on:
- One written design doc that surfaces edge cases in a safety-critical system
- One incident response or postmortem participation
- Weekly sync notes that show evolving understanding
The bar isn’t perfection — it’s trajectory. A steep learning curve beats early productivity. Not code shipped, but questions asked — that’s the real KPI.
How do I build credibility quickly on my team?
Credibility at Anthropic isn’t earned through heroics — it’s earned through reliability in high-signal moments. Volunteering for the 2 a.m. model rollback during a false positive surge matters more than leading a sprint retrospective.
In a 2025 incident, a junior SDE noticed a drift in evaluation accuracy that others dismissed as noise. They correlated it with a recent data pipeline change and surfaced it in a 10-line report. That single action prevented a flawed model version from advancing. The engineer wasn’t praised for speed — they were praised for pattern recognition.
To build trust:
- Own one monitoring alert end-to-end. Don’t just triage — improve the detection logic.
- Write a “lessons learned” note after every task, even small ones.
- When you disagree, lead with data, not opinion.
The culture rewards quiet competence, not visibility. Not speaking often, but speaking precisely — that’s the signal.
Preparation Checklist
- Set up your development environment using the internal onboarding guide — expect setup to take 3–5 days due to security approvals
- Schedule 1:1s with your EM, mentor, and one senior engineer outside your team in week one
- Complete the security and AI safety fundamentals training modules — these are mandatory and non-negotiable
- Read the last three postmortems from your team and identify one recurring theme
- Work through a structured preparation system (the PM Interview Playbook covers safety-critical system design with real debrief examples from AI labs like Anthropic)
- Draft a 30-60-90 day plan focused on learning goals, not deliverables
- Bookmark the internal “Incident War Room” archive and review one case weekly
Mistakes to Avoid
- BAD: Starting to code on day three without understanding the system’s failure modes
GOOD: Spending the first week reading design docs and asking, “What breaks first?”
- BAD: Sending a 50-line Slack message explaining a technical issue
GOOD: Writing a three-sentence summary with a link to a doc, ending with a clear ask
- BAD: Waiting for someone to assign you a “big project” to prove yourself
GOOD: Identifying a recurring manual task in incident response and automating it quietly
FAQ
What salary should I expect as an SDE at Anthropic in 2026?
Based on Levels.fyi data from 2024–2025, SDEs at Anthropic earn between $305,000 and $468,000 in total compensation. Base salary is typically $200,000–$250,000, with the remainder in equity and signing bonuses. Compensation scales with experience level, but even L3 hires are at the higher end of the range due to market competition for AI talent.
Is Anthropic’s onboarding more intense than FAANG companies?
Yes — not in hours worked, but in cognitive load. Unlike Google or Meta, Anthropic provides minimal hand-holding. You’re expected to navigate ambiguity, infer priorities, and contribute to safety-critical systems within weeks. The intensity comes from responsibility, not workload. FAANG onboarding optimizes for comfort; Anthropic’s optimizes for signal.
Should I focus on coding practice before my first day?
No — not in the way you think. Coding fluency matters, but Anthropic’s early work is dominated by debugging, tooling, and systems analysis. Focus on understanding distributed systems, logging, and monitoring patterns. The ability to read code and trace data flow is more valuable than writing new code. Not syntax, but semantics — that’s the real test.
Ready to build a real interview prep system?
Get the full PM Interview Prep System →
The book is also available on Amazon Kindle.