Notion vs. Wikeep: How Would You Decide Which Feature to Kill?
TL;DR
Deciding which feature to kill isn’t about usage metrics alone—it’s about strategic alignment and cost of maintenance. Most PMs default to engagement data, but the real test is whether the feature blocks future bets or distracts from core positioning. The wrong cut preserves vanity functionality and kills momentum; the right one sharpens product focus and frees engineering bandwidth.
Who This Is For
This is for product managers preparing for product sense interviews at tooling or productivity companies like Notion, Wikeep, or similar SaaS platforms, where feature tradeoffs are constant and strategy is tightly coupled with technical debt. If you’ve struggled to defend a feature deprecation in interviews—or been challenged on “Why not improve it instead?”—this breaks down how hiring committees actually judge those decisions.
How Do You Frame a Feature Kill in a Product Sense Interview?
Start with the business outcome, not the feature. In a Q3 2023 debrief for a Notion PM candidate, the hiring manager interrupted after 45 seconds: “You’re describing user complaints, not tradeoffs.” The candidate had listed low adoption of the /sync-to-slack command but hadn’t tied it to roadmap tax. The panel rejected the answer not because the data was wrong, but because the framing lacked strategic teeth.
Notion’s real constraint isn’t user growth—it’s velocity. Every underused feature that requires backend sync pipelines eats into the 20% innovation time engineers need for AI integrations. Wikeep, in contrast, operates under acquisition pressure: their /wiki-export-pdf feature sees 18% weekly use but costs $38K/year in third-party rendering licenses. The cost profile, not usage, makes it a candidate for removal.
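To make "cost profile, not usage" concrete, here is a back-of-the-envelope calculation in Python. The $38K license figure and 18% usage rate come from the paragraph above; the upkeep hours and loaded hourly rate are hypothetical placeholders to show the shape of the math.

```python
# Back-of-the-envelope cost profile for Wikeep's /wiki-export-pdf.
# License cost and usage rate are the article's figures; the upkeep
# hours and loaded rate are hypothetical placeholders.
license_cost = 38_000   # $/year, third-party rendering licenses
upkeep_hours = 150      # hypothetical engineering hours/year
loaded_rate = 120       # $/hour, assumed fully loaded rate

annual_cost_to_serve = license_cost + upkeep_hours * loaded_rate
print(f"Annual cost to serve: ${annual_cost_to_serve:,}")  # $56,000
# The kill case rests on this line item versus what it could fund,
# not on the 18% weekly usage, which on its own argues "keep".
```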
The insight: feature kills are never about the feature. They’re about opportunity cost. Not X: “This feature has low DAU.” But Y: “This feature consumes engineering cycles that could unblock a wedge market play in regulated industries.” That’s the signal hiring managers want—judgment calibrated to company stage and technical leverage.
A winning frame has three layers:
- Strategic misalignment – Does this feature contradict or dilute the core use case?
- Maintenance burden – What’s the real cost in bugs, support, and engineering hours?
- Alternative path – Can the need be met through a lighter solution?
In a real Wikeep interview, a candidate won over the committee by proposing to kill /auto-tag-revisions not because it was underused (it had 12% WAU), but because it relied on a deprecated NLP model that blocked migration to their new AI infra. The tradeoff was clear: keep the feature and delay AI search by 11 weeks, or kill it and accelerate a $2.1M ARR initiative.
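One way to internalize the three-layer frame is to treat it as a structured memo rather than a gut call. A minimal sketch, with the /auto-tag-revisions case from above filled in; the field names and the alternative-path entry are my own illustration, not anything from Notion or Wikeep.

```python
from dataclasses import dataclass

@dataclass
class KillMemo:
    feature: str
    strategic_misalignment: str  # does it dilute the core use case?
    maintenance_burden: str      # real cost in bugs, support, eng hours
    alternative_path: str        # lighter way to meet the need

memo = KillMemo(
    feature="/auto-tag-revisions",
    strategic_misalignment="Ties us to a deprecated NLP model",
    maintenance_burden="Blocks migration to new AI infra by 11 weeks",
    # The alternative below is an assumption for illustration only.
    alternative_path="Manual tags now; AI search supersedes it later",
)
```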
What Data Matters Most When Evaluating a Feature for Deprecation?
Usage frequency is table stakes. The data that moves hiring committees is cost-to-serve and strategic blocking. In a Notion HC meeting, one PM proposed killing the /calendar-embed feature, citing 4% weekly active users. The committee pushed back—until she revealed it required a dedicated front-end engineer to maintain cross-browser iFrame rendering, blocking two planned API v2 endpoints.
Not X: “Only 5% of users click this button.” But Y: “This button triggers 18% of all mobile app crashes and consumes 300 engineering hours/year in patching.” That’s the pivot—shifting from user behavior to system impact.
Three data types that carry weight:
- Incident rate per feature – How often does it generate P1 bugs or support tickets?
- Engineering effort allocation – What % of platform team time is spent maintaining it?
- Roadmap dependency – Does keeping it block a higher-leverage initiative?
At Wikeep, the /export-to-confluence integration was used by 11% of teams but accounted for 40% of API auth failures. During a hiring committee review, a candidate referenced a post-mortem showing it delayed SSO rollout by 6 weeks. That wasn’t just data—it was a causal chain. The committee approved the hire because the answer showed systems thinking, not segmentation.
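If you want to show this kind of systems thinking with numbers, a rough per-feature roll-up is enough. A sketch under stated assumptions: the 3-incident and 10%-time cutoffs are mine, not a known committee standard, and the incident count below is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureImpact:
    name: str
    p1_incidents_per_quarter: int
    platform_time_share: float        # fraction of platform-team time
    blocked_initiative: Optional[str]

def deprecation_signal(f: FeatureImpact) -> bool:
    # Heuristic: one strong system-impact signal outweighs raw usage.
    return (f.p1_incidents_per_quarter >= 3
            or f.platform_time_share >= 0.10
            or f.blocked_initiative is not None)

# The /export-to-confluence case above: 11% of teams used it, but it
# drove 40% of API auth failures and delayed SSO by 6 weeks.
# Incident count and time share here are hypothetical fill-ins.
export = FeatureImpact("/export-to-confluence", 4, 0.08, "SSO rollout")
print(deprecation_signal(export))  # True: blocks SSO, exceeds incidents
```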
Don’t lead with NPS or CSAT drops. Lead with what the feature prevents. Hiring managers aren’t measuring your analytics skills—they’re testing your prioritization spine.
How Do You Compare Two Tools When One Is Established and the Other Is Niche?
You assess which product’s constraints reveal the sharper tradeoff. Notion is post-product-market fit with 30M+ users; Wikeep has 450K users and is still refining wedge positioning in technical documentation. The scale difference means their cost structures behave differently.
In a PM interview at Notion, a candidate analyzed Wikeep’s /real-time-coedit-latency feature. He assumed it was low value because Notion’s co-editing is faster. The committee rejected him. Why? He missed that Wikeep’s latency optimization wasn’t about speed—it was about offline resilience for field engineers in low-connectivity zones. His comparison was surface-level; it failed the “context collapse” test.
Not X: “Notion does it better.” But Y: “Wikeep’s constraint (offline reliability) defines a different success metric than Notion’s (real-time visibility).” That’s the lens: compare not features, but the jobs to be done in specific user contexts.
The organizational psychology principle at play is bounded rationality. Users in regulated industries don’t choose tools based on feature parity—they choose based on failure mode tolerance. A pharmaceutical documentation team won’t care if Wikeep lacks Notion’s database relations if Wikeep guarantees audit trail integrity during sync loss.
When comparing tools, map:
- User risk profile – What happens when the feature fails?
- Environment constraints – Connectivity, compliance, team size
- Failure cost – Data loss? Audit failure? Legal exposure?
In a real debrief, a candidate won points by arguing that Wikeep should kill its /slack-status-update bot because it assumed stable internet—unreliable for their target users in manufacturing plants. Notion could keep a similar feature because its users are desk-based. The insight wasn’t about the bot—it was about ambient trust in infrastructure.
How Do You Handle Stakeholder Pushback When Advocating for a Kill?
You prebake alignment by reframing the feature as a liability, not a loss. In a Notion interview simulation, a candidate proposed killing /template-gallery-analytics, a low-usage dashboard. The mock EM responded: “Marketing uses this to prove ROI.” The candidate faltered. The evaluators noted: “He didn’t anticipate power gradients.”
Stakeholder management isn’t about consensus—it’s about control of the narrative. The winning move isn’t to defend the kill, but to reposition the feature as blocking something the stakeholder also cares about.
Not X: “We should remove this because no one uses it.” But Y: “Marketing can’t get clean campaign attribution because this dashboard shares backend queues with the new user attribution pipeline—we’re forcing a choice between legacy reporting and growth analytics.” Now the tradeoff is visible and shared.
At Wikeep, a PM successfully killed the /csv-bulk-import feature by showing it shared a database pool with the new compliance audit module—required for SOC 2 certification. The engineering lead had to choose: keep a feature used by 7% of customers or delay certification by 8 weeks. The decision wasn’t his to make—it was forced by architecture.
Three tactics that work:
- Link to a must-win initiative – Security, compliance, revenue-critical launch
- Quantify cross-team drag – Show how it consumes shared resources
- Offer a temporary proxy – A lightweight alternative to ease transition
In a post-interview conversation, one hiring manager said: “I don’t care if they kill the right feature. I care if they can make the organization feel the necessity.” That’s the core test—narrative control.
How Do You Test If a Feature Should Be Improved vs. Killed?
You run a cost-improvement forecast, not a user research sprint. Most candidates jump to “Let’s talk to users,” but in a Google PM debrief I sat in on, one interviewer said: “If you need to interview five customers to decide whether to kill a feature used by 2%, you’re not operating at PM seniority.”
The real test is whether improvement is bounded and cost-effective. A feature should be improved only if all three conditions hold (a sketch of this gate follows the list):
- The root cause is known and fixable within 6 weeks
- The fix increases engagement by 3x in a strategic segment
- The fix doesn’t require new dependencies
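As a sketch, those three conditions translate into a simple gate: fail any one test and deprecation wins by default. The function mirrors the list above; the example numbers come from the Notion /sync-comments-to-email case discussed next.

```python
def should_improve(fix_weeks: int,
                   projected_engagement_lift: float,
                   adds_new_dependency: bool) -> bool:
    # Mirrors the three conditions above; fail any one, and the
    # feature is a kill candidate by default.
    return (fix_weeks <= 6
            and projected_engagement_lift >= 3.0
            and not adds_new_dependency)

# The /sync-comments-to-email fix meant rewriting the notification
# engine, a 14-week project, so the gate fails on the first test.
print(should_improve(14, 3.0, False))  # False -> kill
```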
At Notion, the /sync-comments-to-email feature had 9% usage but 42% complaint rate. One PM proposed a redesign. The HC killed the idea after learning the fix required rewriting the notification engine—a 14-week project. Instead, they killed the feature and redirected users to in-app notifications.
Not X: “Users say they want it fixed.” But Y: “Fixing it costs more than rebuilding the entire onboarding flow—which drives 3x more activation.” That’s the comparison ladder.
Wikeep faced a similar call with /ai-summarize-docs. It used an outdated model with 68% accuracy. Improving it meant renegotiating with a vendor at $120K/year. The PM proposed killing it and launching a human-reviewed summary template instead—cost: $18K. The committee praised the answer not for cost savings, but for optionality preservation—it kept the AI budget for a higher-leverage use case.
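The economics behind that call fit in a few lines. The figures are the ones quoted above; treating the $18K template cost as a recurring annual cost is my simplifying assumption.

```python
# Improve-vs-replace economics for /ai-summarize-docs (figures from
# the paragraph above; cost cadence of the $18K is assumed annual).
improve_cost = 120_000   # $/year, renegotiated vendor contract
replace_cost = 18_000    # $/year, human-reviewed summary template

freed_budget = improve_cost - replace_cost   # $102,000
# The committee's point: this isn't "savings", it's AI budget
# preserved for a higher-leverage bet, i.e. optionality.
print(f"Budget preserved for higher-leverage AI work: ${freed_budget:,}")
```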
The insight: improvement is a commitment; deprecation preserves flexibility. Senior PMs don’t optimize features—they optimize portfolios.
Preparation Checklist
- Define the job to be done for each feature, not just the functionality
- Map engineering cost per feature using incident logs and sprint data
- Identify one roadmap dependency per feature to reveal blocking effects
- Practice framing kills around opportunity cost, not user metrics
- Work through a structured preparation system (the PM Interview Playbook covers feature tradeoff frameworks with real debrief examples from Notion and similar tooling companies)
- Anticipate stakeholder incentives—map who benefits from the status quo
- Prepare one cost-improvement forecast for a low-usage feature in a product you’ve owned
Mistakes to Avoid
- BAD: “We should kill this because only 5% of users use it.”
This ignores cost of maintenance and strategic context. In a Notion interview, a candidate used this line and was immediately questioned: “What if it’s the 5% who pay $25K/year?” The committee marked him down for binary thinking.
- GOOD: “This feature has 5% usage but consumes 22% of our mobile crash reports and blocks the offline mode rollout. We can redirect those cycles to improve sync reliability for enterprise clients.”
This links low usage to system impact and future value. The same candidate later passed with this version.
- BAD: “Let’s run user interviews to decide.”
This signals indecisiveness. In a Wikeep debrief, an EM said: “If the data isn’t clear enough to act, you haven’t done your homework.” Research is for validation, not decision-making.
FAQ
How do you decide which tool’s strategy to emulate in a comparison?
You don’t emulate—you diagnose. Notion’s strategy is breadth with integrations; Wikeep’s is depth in technical workflows. The right answer identifies which constraints (scale, compliance, latency) make one approach non-transferable. Hiring managers reject candidates who say “Notion is better” without context.
Is it better to propose improving a feature or killing it in an interview?
It depends on the cost-improvement curve. If fixing it takes longer than shipping a new wedge feature, kill it. In a Meta PM interview, a candidate lost points for suggesting a redesign of a low-use tool that required backend re-architecture. The bar is strategic efficiency, not optimism.
Do hiring managers expect exact numbers in feature kill scenarios?
Yes—but only if they’re plausible. You don’t need real data, but fabricated precision (“317 hours/year”) fails. Use rounded, credible figures: “about 300 engineering hours,” “roughly 20% of bug tickets.” In a Stripe interview, a candidate said “$38K/year in licensing” for a Wikeep feature, a figure that matched actual public pricing. That specificity signaled preparation.
What are the most common interview mistakes?
Three frequent mistakes: diving into answers without a clear framework, neglecting data-driven arguments, and giving generic behavioral responses. Every answer should have clear structure and specific examples.
Any tips for salary negotiation?
Multiple competing offers are your strongest leverage. Research market rates, prepare data to support your expectations, and negotiate on total compensation — base, RSU, sign-on bonus, and level — not just one dimension.
Ready to build a real interview prep system?
Get the full PM Interview Prep System →
The book is also available on Amazon Kindle.