Signals & Subtractions #027: Managers over Models

Dec 1, 2025 | Issue 27



🔭 Signal: Managers Write The Real AI Policy

Most companies think their AI policy lives in a document.

It does not. It lives in 1:1s, or it dies there.

There might be a multi-page responsible-use framework. There might be an AI council that meets once a month. But every week there is a meeting with precisely two people in it where something more powerful happens.

In practice, the real rules for AI get written in three places:

  • Offhand comments in team meetings
  • Side remarks in performance reviews
  • What managers actually reward, ignore, or punish

In that 1:1, a manager either:

  • Invites AI into the work with “show me where you used AI on this” or “what did you keep and what did you throw away”
  • Keeps AI invisible with “let’s just review the final deck” or “I do not really trust those tools, just send me your version”

Teams also watch what their manager actually does:

  • If the manager never uses AI in front of the team, then using AI is risky
  • If the manager only mentions AI when something goes wrong, AI sounds dangerous
  • If the manager openly says “here is where AI helped me and where I overruled it,” AI starts to look like a legit option they should get in on

People might applaud the slide deck, but they rarely follow it. They have been burned before, and they have learned to follow the person who signs their review instead.


🧠 Strategic (Human) Prompt: Boost, Tool, Risk, or Silence?

For most employees, the core AI question is not technical. It is personal.

On your team, does using AI feel like:

  • A. A career boost
  • B. A neutral tool
  • C. A career risk
  • D. We do not even talk about AI here

Answer that honestly. Not the official answer, the lived one.

If you are a manager or senior leader, go ask your team this week. In the meantime, consider: what have you actually done with AI in your 1:1s, team meetings, performance reviews, and goal setting that communicates the answer you want your people to give?

If AI never appears in those conversations, people will invent their own story about what is safe. That story will be based on fear, not strategy.

We don’t need perfect AI roadmaps to improve things. Start small with:

  • Proposing a 5-minute slot in each 1:1 titled “Where AI showed up”
  • Asking 3 questions:
    • “Where did you use AI on real work?”
    • “Where did it feel risky or confusing?”
    • “What is one place you would like to try it if it felt safe?”

Then do the hardest part: listen louder than you talk. Those short conversations will tell you more about your true AI posture than any dashboard.


➖ Strategic Subtraction: Performance Ambiguity

The biggest drag on healthy AI adoption right now is not a lack of tools. It is performance ambiguity.

When people don’t know how AI use will show up in their annual evaluation, they play it safe. As we head into year-end and early 2026 performance reviews, this is the perfect time to give your team a clear AI onramp by subtracting phrases like:

  • “Use your judgment.”
  • “Play with AI if you want.”
  • “It is there as an option.”

Those all sound flexible. But they land as “You are on your own here” and make people think twice. Consider replacing them with explicit guidance that is written down and visible:

  • “These tasks should be AI assisted by default, unless there is a reason not to.”
  • “These tasks must be human led with AI as optional input only.”
  • “These tasks are AI free for now, and here is why. Do you see anything I am missing?”

If your people have to guess which category their work is in, they will tend to guess conservatively. You can go one step further and connect this to the upcoming reviews directly:

  • Call out 1 or 2 workflows where smart AI use is part of meets expectations
  • Name 1 or 2 situations where blindly trusting AI is a performance problem, not a clever shortcut
  • Recognize people who document and share reusable AI patterns as demonstrations of leverage, not laziness

If AI shows up in someone’s work all year and never shows up in their performance review, everybody loses.


🏋️ Analogy of the Week: Managers As Spotters

Think of AI as the barbell that can make us stronger and faster, with real potential for injury. The employee is the lifter. The manager is the spotter.

What the lifter attempts has everything to do with their spotter.

Bad spotters:

  • Look at their phone while you lift
  • Panic or blame when something goes wrong
  • Tell you “go heavy” with no support, then step back

In that environment, lifters play it safe and do far less than they could. They hide what they are trying, and treat the barbell as a liability.

Good spotters:

  • Help pick the right weight for the moment
  • Stand close when risk is high
  • Give clear cues on form
  • Take responsibility for safety
  • Give credit to the lifter when it goes well

In that environment, people try new weights. They push into new capacity without ignoring risk.

Managers decide whether anyone feels safe taking on new loads. If you are building human onramps for AI, this is the first one that matters. Not the model, not the platform, not the tool. It’s the person next to you saying:

“I am here, I got you. Try it like this, and if something slips, we’ll rack it together.”


🎵 Closing Notes

If your AI strategy doesn’t mention managers explicitly, it is not really a strategy. It’s just a wish.

If AI never shows up in 1:1s, it never becomes part of serious work. It stays a side hobby or a guilty secret. Most teams are already living with unspoken AI rules. The job of every manager is to make those rules visible and safer, not leave people guessing.

Left guessing, people wait for a clearer signal. If you are a manager, that signal is yours to send. Start with one sentence in your next 1:1 that removes ambiguity instead of adding to it.

Until next time,

Sam Rogers
Your AI Spotter
Snap Synapse – from AI promise to AI practice

📅 Book a meeting

One of the best things you can do is benchmark where your team is with AI. Not in an oversimplified good/bad or confidence rating, but with a nuanced behavioral assessment. Try PAICE.work for free for individual team members. And if you want to see how your team is handling AI risk, let’s talk about a pilot that shows true business value for your org in Q1. Deeply discounted if you lock it in by December 15th.


Get in touch

Book a call, send a message, or connect on LinkedIn.
