TL;DR

Most 1:1 meeting software fails at the one job that matters: surfacing judgment, not just notes. The best tools enforce structured reflection, not just agenda templates. If your tool doesn’t force you to answer “What’s the one thing I need to decide today?”, it’s holding you back. The top three are Lattice, Fellow, and 15Five—but only if configured correctly.

Who This Is For

This is for managers and ICs who run 10+ 1:1s per month and have already tried at least two different agenda tools. If you’ve ever left a 1:1 thinking “We talked, but nothing changed,” this is for you. Skip if you’re still using Google Docs or Notion without a structured framework—you’re not ready for software yet.


What’s the real job of 1:1 meeting software?

The real job isn’t note-taking. It’s forcing you to make a decision before the meeting ends. I sat in a debrief last year where a hiring committee rejected a Director candidate because their 1:1 notes were “a graveyard of unresolved items.” The problem wasn’t the tool—it was that the tool allowed them to defer judgment. Not documentation, but decision pressure.

Most tools optimize for ease of use, not outcome. They let you drag-and-drop talking points, but they don’t ask: “What’s the one thing that will make this person’s week better?” The best tools don’t just store agendas—they enforce a judgment loop. Not “What did we talk about?” but “What did we decide?”

Why most agenda tools fail at the last mile

I watched a Staff PM at Meta abandon a $50K/year enterprise tool after three months. The reason? It auto-generated “action items” from transcripts, but those items were never tied to a decision owner. The tool created noise, not clarity. What was missing wasn’t completion tracking, but accountability.

The failure mode isn’t technical—it’s psychological. Tools that let you “set and forget” agendas create what organizational psychologists call “procedural compliance without cognitive engagement.” You check the box, but you don’t change behavior. The best tools make you uncomfortable by forcing you to label each item as “decide,” “align,” or “park.”
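The “decide, align, park” discipline is simple enough to sketch in code. This is a hypothetical illustration of the pattern, not any vendor’s API; the `Label` enum, `AgendaItem` class, and `close_meeting` function are all invented for the sketch:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Label(Enum):
    """The three judgments every agenda item must receive."""
    DECIDE = "decide"
    ALIGN = "align"
    PARK = "park"


@dataclass
class AgendaItem:
    title: str
    label: Optional[Label] = None  # unlabeled until someone commits to a judgment


def close_meeting(items: list[AgendaItem]) -> list[AgendaItem]:
    """Refuse to close the meeting while any item is unlabeled."""
    unlabeled = [i for i in items if i.label is None]
    if unlabeled:
        raise ValueError(
            f"{len(unlabeled)} item(s) still need a decide/align/park label"
        )
    return items
```

The point of the sketch is the `ValueError`: the discomfort is the feature. A tool that lets an unlabeled item slide into next week is the “procedural compliance” trap described above.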

How to spot a tool that actually changes behavior

In a 2023 debrief, a hiring manager at Google flagged a candidate’s 1:1 notes because they used the same three bullet points for six consecutive weeks. The tool they used (which shall remain nameless) allowed this. The best tools don’t. They enforce what I call the “three-strike rule”: if an item appears three times without resolution, the tool flags it as a systemic issue, not a meeting problem.
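The three-strike rule is mechanical once you have meeting history. A minimal sketch, assuming history is stored as one list of unresolved item titles per meeting (the function name and data shape are assumptions, not any tool’s real API):

```python
from collections import Counter


def three_strike_flags(history: list[list[str]], threshold: int = 3) -> set[str]:
    """Flag items that recur across meetings without resolution.

    `history` holds one list of unresolved item titles per meeting,
    oldest first. Any title appearing in `threshold` or more meetings
    is treated as a systemic issue, not a meeting problem.
    """
    counts = Counter(title for agenda in history for title in set(agenda))
    return {title for title, n in counts.items() if n >= threshold}


history = [["headcount", "oncall"], ["headcount"], ["headcount", "roadmap"]]
print(three_strike_flags(history))  # {'headcount'}
```

Note the `set(agenda)` inside the count: listing the same item twice in one meeting is still one strike, which matches the rule as stated.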

Not “Does it integrate with Slack?” but “Does it force me to confront recurring items?” Not “Can I share notes?” but “Does it make it impossible to ignore unresolved decisions?” The tools that change behavior are the ones that make you feel slightly guilty when you try to move an item to next week.

The top three tools—and when to use each

Lattice is for managers who need to tie 1:1s to performance outcomes. It forces you to link every agenda item to a growth area or OKR. I’ve seen it backfire when used with junior ICs who aren’t ready for that level of scrutiny—it turns 1:1s into performance reviews. Not a note-taking tool, but a performance pressure tool.

Fellow is for teams that need to scale 1:1s without losing depth. It enforces a “pre-meeting reflection” step where both parties must answer: “What’s the one thing I need to get from this conversation?” The tool won’t let you start the meeting until both answers are submitted. Not a template, but a gate.
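The gate pattern itself is a few lines. This is a sketch of the behavior described above, not Fellow’s actual implementation; the class and field names are invented:

```python
from dataclasses import dataclass


@dataclass
class Reflection:
    """Each party's answer to: what's the one thing I need from this conversation?"""
    manager_answer: str = ""
    report_answer: str = ""


def can_start_meeting(r: Reflection) -> bool:
    """The meeting stays locked until both reflections are non-empty."""
    return bool(r.manager_answer.strip()) and bool(r.report_answer.strip())
```

One blank answer keeps the meeting locked for both parties, which is what makes it a gate rather than a template.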

15Five is for remote teams where trust is the bottleneck. It forces you to rate each 1:1 on a 1-5 scale for “psychological safety” and “decision clarity.” The tool then aggregates these scores and flags managers whose teams consistently rate them low. Not a meeting tool, but a trust diagnostic.

What to pay for—and what’s a waste of money

I’ve seen companies waste $20K/year on tools that auto-summarize meetings. The summaries are almost always wrong or beside the point. The real value isn’t in transcription; it’s in forcing you to label each item as “decide,” “align,” or “park” before the meeting ends. Not AI-generated notes, but human-enforced judgment.

The only features worth paying for are:

  • Decision pressure (forces you to label items)
  • Recurring item flags (three-strike rule)
  • Trust metrics (psychological safety scores)

Everything else is noise.

How to migrate without losing institutional knowledge

I watched a Director at Amazon lose six months of 1:1 history when they switched tools. The new tool didn’t support bulk import of unresolved items, so the team had to manually re-enter them. Not a data migration problem, but a judgment migration problem.

The best migration strategy isn’t technical—it’s behavioral. Before switching, run a “decision audit” on your last 10 1:1s. Label each item as “resolved,” “parked,” or “systemic.” Only migrate the systemic items. The rest can die. Not a tool switch, but a judgment reset.
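The decision audit reduces to a filter once each historical item carries a status. A hypothetical sketch, assuming items were hand-labeled “resolved,” “parked,” or “systemic” during the audit:

```python
def items_to_migrate(audited: list[tuple[str, str]]) -> list[str]:
    """Keep only the systemic items for migration; the rest can die.

    `audited` holds (title, status) pairs produced by the decision
    audit, where status is "resolved", "parked", or "systemic".
    """
    return [title for title, status in audited if status == "systemic"]


audited = [
    ("headcount plan", "systemic"),
    ("offsite agenda", "resolved"),
    ("title change", "parked"),
]
print(items_to_migrate(audited))  # ['headcount plan']
```

Parked items are deliberately left behind: if they mattered, they would have tripped the three-strike rule and been relabeled systemic.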


Preparation Checklist

  • Audit your last 10 1:1s for unresolved items. Label each as “decide,” “align,” or “park.”
  • Run a “three-strike test”: if any item appears three times, flag it as systemic.
  • Before switching tools, export all unresolved items and label them with decision owners.
  • Configure your new tool to enforce a “pre-meeting reflection” step for both parties.
  • Set up recurring item flags (three-strike rule) and trust metrics (psychological safety scores).
  • Work through a structured framework for labeling items so every 1:1 ends with explicit decide/align/park calls.
  • Pilot the new tool with one team for two weeks before rolling it out company-wide.

Mistakes to Avoid

  • BAD: Using a tool that auto-generates action items from transcripts.
  • GOOD: Using a tool that forces you to label each item as “decide,” “align,” or “park” before the meeting ends.
  • BAD: Migrating all historical notes to a new tool without filtering for unresolved items.
  • GOOD: Running a “decision audit” and only migrating systemic items.
  • BAD: Paying for AI-generated summaries that are always wrong.
  • GOOD: Paying for features that enforce decision pressure and trust metrics.

FAQ

Should I use the same tool for 1:1s and team meetings?

No. 1:1s require judgment pressure, not collaboration. Team meeting tools optimize for brainstorming; 1:1 tools optimize for decision-making. Not a single tool, but a toolkit.

How do I convince my team to switch tools?

Don’t sell the tool—sell the outcome. Run a “decision audit” on your last 10 1:1s and show how many items were unresolved. The problem isn’t the tool; it’s the lack of judgment. Not a software problem, but a behavioral one.

What’s the minimum viable setup for a 1:1 tool?

A tool that enforces three things: pre-meeting reflection, decision labeling, and recurring item flags. Everything else is optional. Not features, but judgment enforcers.
