
Dec 8, 2025 | Issue 28
🔍 Signal: Typecasting L&D as The AI Hype Department
More and more, organizations are asking L&D to “lead our AI readiness” or “run some AI sessions” or “get people excited about our org’s AI-enabled future.”
This sounds like it should be flattering. But here’s what it usually becomes:
- Tool tours instead of real workflow enhancements
- Prompting tips instead of building actual capability
- Satisfaction scores instead of observable behavior or improved outcomes
Meanwhile, the rest of the org treats AI as noise, a risk to manage, a cost to control, or a competitive edge to protect.
L&D gets the excitement. Everyone else gets the accountability.
Your pattern probably looks familiar:
- Someone books an AI workshop
- People show up, nod, get energized
- A month later… nothing reliably changed. No workflows adjusted. No measurable shifts in how people actually work.
The energy was real, but the change was optional.
If your L&D function is going to be a valued AI on-ramp for your people (instead of The AI Hype Department), it cannot operate in the “optional” space. It has to move from “exposure to AI” → “practice with AI” → “formal working agreements for AI use that our people can rely on throughout their day.”
🧠 Strategic (Human) Prompt: From Workshops To Working Agreements
After that AI workshop, ask yourself: what new working agreement was written down, shared, and actively used?
Not a takeaway or a slide deck. An agreement like:
- “For these three key tasks, we use AI by default.”
- “For this decision type, AI is input-only. A human approves, and their name is attached to the output.”
- “Plans to generate media are evaluated and approved before any media is created. This hasn’t changed just because we now have AI.”
If you can’t point to one such agreement, the workshop was a demo, not an on-ramp.
L&D can’t control all aspects of AI adoption. But it can influence:
- Where and how AI practice happens
- How “safe and responsible use” gets interpreted
- What “good enough” for AI-assisted work looks like
That’s a lever, if we treat it as one.
➖ Strategic Subtraction: Any “Everyone” Sessions
For L&D to be a functional strategic onramp for AI, something needs to go. That function is already stretched too thin, so stop spreading it thinner with one-size-fits-all “AI 101 for Everyone” events.
Those sessions satisfy no one: too basic for power users, too abstract for skeptics, and too disconnected from real work for everyone else. And every hour spent on “AI 101 for Everyone” is an hour NOT spent fixing an actual workflow.
Instead, carve out tailored paths:
- Frontline worker path
- Manager / leader path
- Builder / automation path
- Risk / compliance / governance path
Each group faces different risks and decisions because their workflows differ. They measure success differently, and the impact of their work is not the same.
We don’t need to measure everything. But we do need to measure something meaningful to the people doing the work. Otherwise they’ll stop listening. Rightly so.
🛫 Analogy of the Week: Airport Moving Walkway
Think of a big airport. Two kinds of things try to move you forward:
- Posters on the wall shouting “Welcome to the future of travel!”
- Moving walkways that quietly carry you along whether you care or not.
Most AI adoption efforts today are more poster than walkway.
Posters grab attention. They signal something big. They also add noise to an already crowded terminal.
Walkways are different:
- They’re intentionally boring
- They don’t need your focus or enthusiasm
- They change your actual speed, not just your sense of motion
If you’re serious about AI adoption, you can’t stop at posters. You need walkways. L&D is the function that builds those.
🎵 Closing Notes
Last week, I spent three days in DC with a cohort of L&D professionals in the new “Applying AI in Learning & Development” ATD certificate program, built around the book Applying AI in Learning & Development: From Platforms to Performance by Josh Cavalier. (Debbie Richards and I will both be facilitating these 3-day workshops in 2026.)
Participants arrived seeing AI as a space-age promise, a technology indistinguishable from magic. Once they worked with it inside a performance consulting workflow, under plausible real-world constraints, the bigger fears surfaced. So did the real potential.
I saw firsthand how once the sparkle wears off, the questions turn to accountability, workflow, and risk. These are the right questions.
If your team is being asked to “lead AI readiness” without a change in charter, authority, or metrics, you’re not alone. The gap between “teach people about AI” and “change how work gets done” is exactly where most organizations are stuck.
We don’t need a perfect roadmap or 100 workshops.
We need:
- One workflow that actually matters to the org
- One set of working agreements around how AI shows up there
- One reliable way to measure whether it changed outcomes, or simply added noise
L&D doesn’t have to own AI across the org. It only needs to do what it does best: help people practice new ways of working under real constraints.
Until next time,
Sam Rogers
Chief People Mover
Snap Synapse – from AI promise to AI practice
✅ Next Step
If you want help shifting from AI workshops to AI walkways inside your organization:
→ Explore the PAICE Pilot Program
You get actionable insights from within your own org to guide its responsible AI adoption, starting next month. No guesswork. No delays. Just evidence-based readiness that lets you lead the pack rather than chase after it.