GM SDE Interview Questions: Coding and System Design (2026)
TL;DR
GM SDE interviews are not a LeetCode contest. They are a judgment test wrapped around medium DSA, practical system design, and STAR behaviorals, and GM’s own hiring page says the process is competency-based and behavior-focused (How We Hire).
Recent candidate reports show either a 3-round path with OA, behavioral, and technical rounds, or an HR screen followed by one live programming interview and two system design interviews spaced about a week apart (Glassdoor, Glassdoor). General Motors Software Engineer compensation in the US currently ranges from $103K to $308K, with a median of $145K, so the bar is real, but the interview is still more industrial systems than big-tech theater (Levels.fyi).
Who This Is For
This is for candidates targeting GM Software Engineer, SDE, embedded, autonomy, infotainment, or systems-adjacent roles who can already pass medium coding but lose control when the discussion shifts to architecture, failure modes, or cross-functional tradeoffs. It is also for people who keep over-preparing generic cloud design and under-preparing automotive constraints, because GM does not reward that mismatch. If you have backend, mobile, embedded, or data experience, the loop is winnable only when you can turn that experience into safe, testable, low-latency design under ambiguity.
What coding questions does GM ask in SDE interviews?
GM coding questions are usually practical, not ornate. The pattern is medium DSA with state management, graph traversal, hash maps, and straightforward implementation discipline, not puzzle-box algorithm theater.
In recent GM candidate reports, the prompts were the kind of thing a serious engineer should expect: nested key-value storage, a transaction system, a graph components problem, LRU cache, and a generic easy LeetCode-style question (Glassdoor, Glassdoor, Glassdoor). That is the real signal. GM is not asking whether you memorized obscure patterns. GM is asking whether you can preserve invariants while writing correct code under interview pressure.
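The "nested key-value storage" prompt is concrete enough to sketch. A minimal version, assuming dot-separated key paths backed by nested dicts (the class name, path format, and method signatures are illustrative, not GM's actual prompt):

```python
class NestedKV:
    """Toy nested key-value store: set/get values at dot-separated paths.

    Invariant: internal nodes are always dicts; a get through a
    non-dict or missing key returns the default instead of raising.
    """

    def __init__(self):
        self.root = {}

    def set(self, path, value):
        node = self.root
        keys = path.split(".")
        for k in keys[:-1]:
            # Create intermediate dicts as needed.
            node = node.setdefault(k, {})
        node[keys[-1]] = value

    def get(self, path, default=None):
        node = self.root
        for k in path.split("."):
            if not isinstance(node, dict) or k not in node:
                return default
            node = node[k]
        return node
```

The interview value is less the dict-walking and more the edge cases you name unprompted: what happens when a path prefix already holds a leaf value, and whether partial writes can leave the tree inconsistent.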
In a GM-style debrief, the candidate who rushes to the “optimal” answer often loses ground. The panel does not care that you knew the textbook data structure if your code is brittle, your edge cases are sloppy, or your explanation collapses under follow-up. The problem is rarely your final answer; it is the judgment signal you send along the way.
Not raw algorithmic novelty, but clean execution is what matters here. Not “can you solve it once,” but “can you explain why it stays correct when the input gets ugly.” That is why LRU cache is such a useful GM question. It looks simple, then immediately tests whether you understand state, eviction policy, and complexity without hand-waving.
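A minimal LRU cache built on `collections.OrderedDict` shows what "state, eviction policy, and complexity without hand-waving" looks like in practice, with the invariants stated up front:

```python
from collections import OrderedDict


class LRUCache:
    """O(1) get/put LRU cache.

    Invariants: entries are kept in least-recently-used ->
    most-recently-used order, and size never exceeds capacity.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry
```

In an interview you would typically also be expected to explain the hand-rolled alternative, a hash map plus doubly linked list, since that is what `OrderedDict` implements under the hood.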
The strongest coding answers at GM are usually the ones that expose their own assumptions. State the invariants early. Name the failure cases. Explain what breaks when the service retries a write, when a queue fills, or when the graph contains disconnected components. That is the difference between a candidate who looks clever and a candidate who looks shippable.
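The disconnected-components case mentioned above is a good example of a failure case worth naming early. A minimal BFS sketch that handles it naturally, since every unvisited node simply starts a new component (the function name and input shape are assumptions, not the interview's exact spec):

```python
from collections import deque


def connected_components(n, edges):
    """Count components in an undirected graph with nodes 0..n-1.

    Isolated nodes are handled without special-casing: any node
    not yet visited seeds a fresh BFS and counts as a component.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    seen = [False] * n
    components = 0
    for start in range(n):
        if seen[start]:
            continue
        components += 1
        queue = deque([start])
        seen[start] = True
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
    return components
```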
What system design questions actually matter at GM?
GM system design is about constraints, not spectacle. The right answer usually reflects vehicle software, low-latency behavior, reliability, and safety boundaries, not a generic “design Twitter” performance.
GM’s current job postings tell you what the company is actually building. One Software Engineer role in Sunnyvale describes “complex, highly scalable, low-latency software in C++ on Linux-based systems” for Autonomy Interface SW, including interface contracts and sensing integration (GM job posting).
Another Senior Software Engineer role for AV Frameworks calls out IPC, middleware, shared memory, discovery, QoS, and deterministic runtime behavior (GM job posting). A separate Software Engineer posting points to OTA orchestration and security, while Systems Engineer roles emphasize interfaces, performance targets, and cross-functional requirements (GM job posting, GM job posting).
That means the interview is likely to reward designs with explicit failure modes. Think service degradation, retry behavior, message ordering, cache invalidation, observability, and safe fallback states. Not “how do I scale this forever,” but “how does this survive partial failure without corrupting the vehicle experience.” That is a different interview.
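One way to make "retry behavior" and "safe fallback states" concrete in a design conversation is a bounded-retry wrapper that degrades instead of storming. This is a hedged sketch under assumed names and parameters, not a GM pattern:

```python
import time


def call_with_fallback(operation, fallback, retries=3, base_delay=0.1):
    """Run an operation with bounded exponential-backoff retries.

    After the retry budget is exhausted, return a safe degraded
    result (e.g. cached data) instead of failing open or retrying
    forever and amplifying load on the struggling dependency.
    """
    for attempt in range(retries):
        try:
            return operation()
        except Exception:
            if attempt == retries - 1:
                break
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    return fallback()
```

The interview-relevant point is the shape, not the code: retries are capped, backoff is explicit, and the degraded path is designed rather than accidental. Stronger answers also mention jitter on the backoff and making the operation idempotent so a retried write cannot corrupt state.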
In a GM debrief-style conversation, the hiring manager is not impressed by the biggest diagram on the whiteboard. The pushback is usually on whether the design survives bad connectivity, slow storage, or a component restart. The interview is not about producing architecture wallpaper. It is about showing you can keep the system honest when the assumptions fail.
A candidate-reported question like “LRU cache in system design” tells the same story. That is not distributed systems theater. It is a test of state ownership, lifecycle management, and clarity about where correctness lives. GM uses system design to see whether you can reason about a product system, not just sketch a service stack.
Not broad cloud architecture, but domain-aware boundaries: that is the winning posture. Not “I would split everything into microservices,” but “I would keep the design simple until a real fault line forces a split.” That judgment matters more at GM than in companies that fetishize abstraction for its own sake.
How many interview rounds and how long is GM's process?
GM’s process is variable, but it is rarely a one-and-done loop. Expect 3 to 4 stages for many software roles, with HR screening, coding, and system design or behavioral coverage mixed in.
Recent candidate reports give you the shape. One May 5, 2026 report for a Software Developer new grad described 3 rounds total with a possible phone screen, including an OA with 1 easy, 1 medium, and 1 medium-hard LeetCode question, then a behavioral round, then a technical round (Glassdoor). Another March 15, 2026 software engineer report described an HR screener, one live programming interview with a manager, and two system design interviews, with about a week between each step (Glassdoor).
That spacing matters. GM is not always moving at startup speed. A slower cadence is not automatically a bad sign. It often reflects role-specific interviewers, scheduling friction, and a hiring process that wants separate reads on code, systems, and behavior. If you read silence as rejection, you will misread the room.
In a hiring debrief, this separation creates a specific psychology. Interviewers are not grading one heroic performance. They are aggregating weak and strong signals across stages. One great coding round does not rescue a sloppy design conversation. One smooth behavioral round does not erase an inability to implement correctly.
Not “faster is better,” but “consistent is better” is the right mental model here. GM is not searching for the loudest performer. It is looking for someone whose signal holds up when three different interviewers inspect three different parts of the job.
This also means you should treat every round as distinct. The manager coding round is not the same as the system design round. The behavioral loop is not a formality. If you prepare as if one strong skill can carry the entire process, GM will expose that immediately.
What does GM look for in behavioral answers?
GM behaviorals are not filler. They are a second engineering interview, because GM’s hiring page explicitly says the process uses competency-based questions focused on behaviors and requirements, with STAR expected (How We Hire).
Recent GM interview reports repeatedly mention teamwork, conflict resolution, problem solving, and “why” behind decisions, not just polished career summaries (Glassdoor, Glassdoor). That is a judgment call from the company. GM wants engineers who can work inside a cross-functional machine. It does not want someone who can only perform in isolation.
The debrief usually turns on whether your stories show ownership or just participation. A strong answer is not a biography. It is a conflict, a decision, and a concrete result. The panel is listening for maturity under pressure, not polished phrasing. Not “I helped the team,” but “I changed the outcome when the team was stuck.”
This is where many technical candidates lose the loop. They talk like the STAR framework is a recital. It is not. It is a way to prove judgment under constraints. In GM-style rooms, a thin behavioral answer makes the interviewer assume the same thinness will appear in design review, code review, and production debugging.
One useful signal is whether you can describe a disagreement without turning it into a morality play. In a GM debrief, the strongest candidates do not make themselves the hero. They show how they handled ambiguity, escalated appropriately, and left the system or team in a better state. That reads as leadership. Everything else reads as noise.
Not “tell me everything you ever did,” but “show me one hard situation and how you moved it forward” is the right level. GM is not paying for your autobiography. It is buying evidence that you can operate without drama.
What makes a GM debrief turn into a pass or a no?
GM passes candidates who show stable judgment under imperfect conditions. It rejects candidates who optimize for cleverness, over-answer, or hide uncertainty.
The real debrief question is usually simple: would this person make the next engineer’s life easier or harder? That is why code quality, explanation quality, and behavioral clarity all matter. Not because GM wants perfection, but because GM wants low-friction execution in a domain where mistakes cost real time and real money.
If you want the blunt version, the no usually comes from one of three places. The candidate cannot write correct code without help. The candidate cannot explain tradeoffs in a way that survives follow-up. The candidate looks technically fine but gives weak evidence of collaboration, ownership, or calm under stress. Any one of those is enough to stop a pass.
The interesting part is that GM’s process often tolerates some unevenness. A candidate may be slightly rough in one round and still advance if the overall read is coherent. That is a very debrief-specific behavior. Interviewers are not scoring a beauty contest. They are reconciling signal. The problem is not one imperfect answer. The problem is a pattern that suggests the candidate will be hard to trust in production.
Not “best algorithm wins,” but “best engineering judgment wins” is the core principle. Not “sound impressive,” but “sound dependable.” GM’s environment is built around vehicles, embedded systems, and cross-team execution. That elevates patience, clarity, and invariants over theatrics.
This is why the candidate who explains a cache, a retry flow, or a degraded mode cleanly often beats the candidate with the flashier résumé. The interview is not asking who is the smartest person in the room. It is asking who will still be useful when the system is dirty.
Preparation Checklist
GM preparation works when you practice the right failure modes, not when you stack more hours.
- Drill medium DSA with a bias toward hash maps, graphs, BFS/DFS, heaps, linked lists, and cache-style state problems.
- Practice one or two LRU-style problems until you can explain both complexity and invariants without looking down.
- Prepare one system design story around a low-latency stateful service, then add failure handling, observability, and graceful degradation.
- Study GM’s current technical language on autonomy, embedded Linux, IPC, OTA, and system-level requirements so your examples sound relevant, not generic (GM careers, GM careers, GM careers).
- Write 6 STAR stories for conflict, ambiguity, debugging, ownership, cross-functional alignment, and a time you changed your mind.
- Rehearse 60-second answers that state the decision, the tradeoff, and the result. GM does not reward wandering.
- Work through a structured preparation system (the PM Interview Playbook covers system-constraint framing and real debrief examples that map well to GM’s automotive interviews).
Mistakes to Avoid
GM filters for judgment failures faster than it filters for missing trivia.
- BAD: Treating GM like a generic FAANG clone and only grinding exotic algorithms. GOOD: Prepare medium DSA, but frame every solution around correctness, state, and product constraints.
- BAD: Giving system design answers that sound like a cloud architecture template. GOOD: Design for low latency, partial failure, safe fallback, and testability in a vehicle context.
- BAD: Reciting STAR stories like a script. GOOD: Use one concrete episode, name the tension, and show the decision you made.
FAQ
Is GM more technical or behavioral for SDE roles?
It is both, and that is the point. GM’s hiring page explicitly says it uses competency-based interviews focused on behaviors and requirements (How We Hire). If you treat behaviorals as optional, you are underestimating the loop.
Does GM ask pure LeetCode?
No. Recent reports show a mix of medium coding, system design, and behaviorals, including prompts like nested key-value storage, transaction systems, and LRU cache (Glassdoor, Glassdoor). The company is testing whether you can build correct systems, not just solve puzzles.
What should I emphasize if I want GM to say yes?
Emphasize correctness, calm explanation, and automotive-relevant judgment. GM responds better to engineers who can handle constraints, failure modes, and cross-functional communication than to candidates who only optimize for algorithm flash.
Ready to build a real interview prep system?
Get the full PM Interview Prep System →
The book is also available on Amazon Kindle.
Related Reading
- Meta PGM vs TPM Role Differences
- [The Only ATS-Proof Resume Template You Need for the Tech Industry [Download]](https://sirjohnnymai.com/blog/ats-proof-resume-template-tech-industry)
- Retool PM Behavioral Interview
- Gojek PM Interview Questions 2026