AI Training

Team Leaders AI Learning Pathway

A scaffolded pathway for team leaders, with timely pop-up modules woven through at the points where they are most relevant.

Learn AI Skills in Minutes, Not Hours

Our microlearning pathways break down complex AI concepts into focused 30–45 minute courses, supplemented by 15-minute pop-up modules, that fit into your busy schedule.

Tier 1 (Setting the stage): the leader as change agent
Course 1 30–45 min
Leading Teams Through New AI Workflows
Help leaders guide teams as work processes change with AI: clarifying roles, setting expectations and supporting people through the uncertainty that comes with genuine change.
Takeaway
A recommended slide deck to guide your team through upcoming AI workflow transitions, outlining your leadership approach, peer-sharing expectations and stance on experimentation.
  1. How AI is reshaping workflows and roles in practice, not in theory, and what that means for how a leader understands and communicates their team's work
  2. The difference between AI changing what people do and AI changing how people feel about their work, and why leaders need to address both simultaneously
  3. Clarifying the boundary between human and AI responsibility in your team's work: who owns what, and how to make that clear without creating rigidity that prevents learning
  4. Why ambiguity about AI's role creates anxiety even when the actual change is manageable, and how clear, early communication from a leader closes that gap
  5. Setting realistic expectations for AI-supported work: what it will and won't improve, how quickly, and what effort is required from the team to get there
  6. How to position yourself as a leader who is navigating this change alongside your team rather than directing them through it from a distance
  7. The common mistakes leaders make when introducing AI-driven workflow changes: moving too fast, too slow, or without involving the people most affected
  8. How to identify which parts of your team's workflow are most ready for AI integration and which need more groundwork before change is introduced
  9. Supporting individuals who are struggling with the transition without slowing down those who are ready to move: balancing the team's pace of change
  10. Building momentum through early, visible wins that demonstrate AI's value in your team's specific context rather than relying on organisation-wide promises
Course 2 30–45 min
Creating Psychological Safety for AI Experimentation
Enable safe experimentation and learning with AI by creating a team environment where people feel genuinely confident to try, fail, learn and share, without fear of judgement or consequence.
Takeaway
A one-page team safety audit: a short self-assessment for leaders to identify where psychological safety is strong and where it needs strengthening, with five specific actions to take in the next 30 days.
  1. Why psychological safety is the prerequisite for AI adoption, not a nice-to-have: teams that don't feel safe to experiment don't experiment, regardless of how good the tools are
  2. What psychological safety actually looks like in an AI context: not the absence of standards, but the presence of trust that honest attempts are valued even when they fall short
  3. The specific fears that hold people back from experimenting with AI: looking incompetent, wasting time, making a mistake that reflects on the team, or appearing to question the organisation's direction
  4. How to normalise failure as part of the learning process: the difference between saying "it's okay to fail" and building a culture where failure genuinely leads to shared learning rather than quiet embarrassment
  5. Why experimentation without structure creates anxiety rather than safety: how to design low-stakes experiments that give people a clear brief, a time limit and a defined way to share what they found
  6. The role of the leader's own behaviour in setting the tone: what happens when a leader shares their own AI failures and learning openly, and what happens when they don't
  7. How to balance innovation with risk management without using risk as a reason to avoid experimenting at all: drawing the line in the right place
  8. Recognising the difference between a team that appears psychologically safe (nobody complains) and one that actually is (people speak up, share failures and ask questions freely)
  9. How to handle it when an experiment goes wrong in a visible way: the moment that either reinforces or destroys the safety you have been building
  10. Building iteration into your team's AI work as a habit: the expectation that first attempts are starting points, not finished products, and that improving them is part of the job
Culture set; now build capability and accountability
Tier 2 (Building capability and accountability)
Course 3 30–45 min
Training Your Team to Work Effectively with AI
Equip leaders to develop AI capability within their teams: identifying gaps, coaching individuals and building the habits and practices that make AI work sustainable rather than sporadic.
Takeaway
A slide deck draft for a team coaching session on AI capability: structured to help leaders run a 45-minute conversation that identifies gaps, shares good practice and agrees next steps.
  1. Why AI capability development is a leadership responsibility, not just an L&D one, and what it looks like when leaders actively develop their team's skills versus waiting for training to be provided
  2. How to identify skill and capability gaps in your team without making the process feel like an audit: listening for hesitation, workarounds and avoidance as much as looking at outputs
  3. The difference between training people to use specific AI tools and developing the underlying judgement, prompting skills and critical evaluation that transfer across tools as they change
  4. Coaching individuals rather than broadcasting to groups: how to have a useful one-to-one conversation about someone's AI capability without it feeling like a performance conversation
  5. How to use learning-by-doing as the primary development method: designing real tasks that build skill through practice rather than waiting for formal training to land first
  6. Identifying your early adopters and how to deploy them as peer coaches: turning individual enthusiasm into team capability without creating a two-tier culture
  7. How to pace capability development across a team with different starting points: moving quickly enough to maintain momentum without leaving anyone so far behind they disengage
  8. Reinforcing consistent and responsible AI practices as part of normal team life: making good habits visible and standard rather than optional or aspirational
  9. How to give useful feedback on AI-assisted work: what to look at, what questions to ask and how to make the conversation developmental rather than evaluative
  10. Building a team learning rhythm around AI: regular moments to share what's working, what isn't and what has changed, making development continuous rather than event-based
Course 4 30–45 min
Helping Your Team Use AI Responsibly and Consistently
Reinforce policy adherence and ethical AI use across your team, and equip leaders to handle the grey areas, edge cases and difficult conversations that arise once AI is genuinely in use.
Takeaway
A one-page team responsibility checklist: a leader's reference for policy adherence and best practice, with a framework for handling grey areas and a short guide on escalation.
  1. Why leader accountability for AI use is different from personal accountability: you are responsible not just for what you do with AI, but for what your team does with it in your name
  2. How to make organisational AI policies practical and visible in your team's day-to-day work rather than leaving them as documents people have read once and forgotten
  3. The gap between policy awareness and policy compliance: why people who know the rules still break them, and what a leader can do to close that gap at team level
  4. How to handle grey areas: the situations where the policy doesn't give a clear answer and a team member needs the leader's judgement, not a reference to the document
  5. Recognising the signs that AI is being used inconsistently or irresponsibly in your team: what to look for in outputs, conversations and working patterns before something becomes a problem
  6. How to have a constructive conversation with a team member about AI misuse without it becoming punitive: addressing the behaviour while protecting the relationship and the team's willingness to engage
  7. Building a consistent team standard for AI use that goes beyond minimum compliance: what responsible use looks like in your specific context, in practical terms
  8. Escalation and governance awareness: knowing when a situation is beyond your authority to resolve, who to involve, and how to escalate without creating alarm
  9. How to stay informed enough about AI developments to know when your team's current practices need updating, without needing to become a technical expert
  10. The leader's role in shaping team culture around AI ethics over time: making responsible use the expected norm rather than an additional burden
Pop-up module 15 min
When AI Agents Join Your Team: Governance for Leaders
Agentic AI (tools that act autonomously rather than just responding) is arriving in team environments now. This module gives leaders the governance framework they need to set appropriate boundaries before something goes wrong, not after.
Takeaway
A one-page team agent governance template: a simple framework for defining what agents can and can't do in your team's name, who is accountable for agent actions, and what approval checkpoints must stay in human hands.
  1. What it means when an AI agent is operating within your team: the shift from tools your team uses to tools that act on your team's behalf, including sending communications, accessing systems and making decisions
  2. Why governance at team level matters even when organisation-level policy exists: the gap between what the policy covers and the specific situations that arise in your team's work
  3. How to define what agents in your team are and aren't authorised to do: setting clear, practical boundaries before deployment rather than discovering the limits through an incident
  4. The accountability question: when an AI agent takes an action that causes a problem, who is responsible, and how a leader needs to have answered that question in advance
  5. How to design approval checkpoints that keep humans genuinely in the loop without creating so much friction that the agent's value disappears
  6. The specific risks of agentic AI at team level: compounding errors, agents acting on outdated context, and autonomous actions that a human would have paused on but the agent did not
  7. How to communicate to your team what agents are doing in their environment: making autonomous AI activity visible and understood rather than something that happens in the background
  8. What to do when an agent does something unexpected or incorrect: the immediate steps, who to inform, and how to review and adjust before it continues
  9. How to evaluate whether an agentic tool is genuinely improving your team's work, and how to make the case to pause or adjust deployment when it isn't
  10. How the governance frameworks you build now will scale: why establishing clear principles at team level contributes to the organisation's broader ability to deploy agents safely
Capability built; now measure impact and lead strategically
Tier 3 (Performance and strategic leadership)
Course 5 30–45 min
Measuring AI Adoption and Impact at Team Level
Help leaders track meaningful AI outcomes rather than activity, and connect what their team is doing with AI to the business results that matter to the organisation.
Takeaway
A one-page metrics template for tracking AI adoption and linking team-level activity to outcomes, with a guide on leading versus lagging indicators and how to use feedback loops to improve performance.
  1. Why measuring AI adoption matters for leaders: not to monitor compliance, but to understand whether the investment in change is actually producing the outcomes it was supposed to
  2. The difference between measuring AI activity (how much the team uses AI) and measuring AI impact (what changes as a result), and why the first without the second tells you almost nothing useful
  3. How to define what success looks like for AI-enabled work in your specific team context: before you start measuring, not after
  4. Leading versus lagging indicators: the early signals that tell you whether adoption is going in the right direction, and the outcome measures that confirm it has
  5. How to connect team-level AI use to business outcomes that your own leaders care about: translating activity into the language of results, quality and value
  6. The risk of measuring the wrong things: how tracking outputs such as the number of AI prompts used creates perverse incentives and tells you nothing about whether AI is making work better
  7. How to gather qualitative evidence of AI impact alongside quantitative metrics: what your team tells you, what you observe in the work, and what changes in how problems get solved
  8. Using feedback loops to improve performance: building a regular cycle of measuring, reviewing and adjusting that keeps AI adoption moving forward rather than plateauing
  9. How to present AI impact data to your own leadership in a way that is credible, honest about limitations and useful for decision-making
  10. Knowing when metrics tell you something isn't working, and having the confidence to pause, adjust or change course rather than continuing to measure a failing approach
Course 6 30–45 min
Leading the Hybrid Workforce: Managing Humans and AI Together
Equip leaders to manage teams where AI is a permanent, evolving presence: developing the strategic leadership mindset needed to lead effectively when the destination keeps shifting.
Takeaway
A personal leadership development plan template: a one-page framework for leaders to map their own AI leadership strengths and gaps, set 90-day development priorities and identify one team-level change to lead in the next quarter.
  1. What it means to lead a hybrid workforce in 2026: not a temporary transition state, but a permanent operating model where the balance between human and AI contribution keeps shifting
  2. The fundamental change in what leadership now requires: from setting a clear destination and driving toward it, to navigating effectively when the destination itself keeps moving
  3. How to maintain team cohesion and shared purpose when roles, tools and workflows are all in flux at the same time: what remains constant when almost everything else is changing
  4. The leader's role in preserving the human elements of team life that AI cannot replicate: trust, empathy, shared meaning and the relationships that make people want to do good work
  5. How to make decisions effectively when AI is generating more options, more data and more noise than ever before: sharpening rather than outsourcing human judgement
  6. Managing the emotional reality of a team living with ongoing change: the exhaustion, the uncertainty and the unspoken anxiety about what the team will look like in a year
  7. How to develop a leadership style that is honest about uncertainty without being destabilising: the difference between "I don't know" as an admission of failure and "I don't know, and here's how we'll find out together"
  8. Building your own AI literacy as a leader: not to become a technical expert, but to stay credible with your team and informed enough to make good decisions about AI in your area
  9. How to position your team's AI capability as a competitive advantage within the organisation: making the work your team is doing visible upward and across the business
  10. Developing a continuous leadership practice for the AI era: the ongoing habits of reflection, learning and adaptation that sustain effective leadership when the environment never fully stabilises
Pop-up module 15 min
Leading Through Fog: Wayfinding in an AI-Uncertain World
Leadership has always rewarded confidence and clear direction. But when the destination itself is uncertain, that approach breaks down. This module introduces wayfinding: a timely, research-backed framework for leading effectively without pretending to have answers you don't have.
Takeaway
A personal wayfinding guide: a one-page reflection tool for leaders to identify where they are navigating well in uncertainty and where they are reverting to false confidence, with three practical commitments to make to their team.
  1. The traditional leadership model and why it is under pressure right now: the expectation that a good leader sets a clear destination, projects confidence and says "follow me; it'll be okay"
  2. What wayfinding is and why it is the more honest and effective leadership model for an AI-driven environment: navigating thoughtfully through fog rather than pretending the fog isn't there
  3. The specific uncertainty team leaders are carrying right now that most don't say out loud: genuine doubt about what their team's work will look like in a year, and what skills will matter
  4. Why pretending to have certainty you don't have damages team trust over time, and why acknowledging genuine uncertainty, done well, actually builds it
  5. The difference between leading with uncertainty and leading without direction: how to give a team enough stability and purpose to work effectively even when the longer-term picture isn't clear
  6. How to build learning loops into your team's work: short cycles of trying, observing and adjusting that make uncertainty productive rather than paralysing
  7. The wayfinding skills that research identifies as most critical right now: curiosity, adaptability, the ability to ask good questions rather than always provide answers, and comfort with revision
  8. How to communicate about AI uncertainty with your team in a way that is honest without being alarming: the language that opens conversation rather than shutting it down
  9. The risk of false confidence from AI-generated outputs: how leaders can inadvertently reinforce over-certainty in their teams by treating AI analysis as more definitive than it is
  10. How to use the reflective habits developed throughout this pathway to continue growing as a wayfinding leader: making uncertainty a navigable feature of leadership rather than a problem to solve
Courses for marketing teams
How to Audit Your Brand's AI Presence
AI systems are already shaping how buyers find and choose brands. This course focuses on how to audit your brand's AI presence, giving you a clear picture of where gaps exist.
Takeaway
A practical roadmap to help you carry out an AI brand audit.
How to Optimise Your Brand's AI Presence
Using the insights from your AI brand audit, you'll now learn how to close those gaps and build the authority your brand needs to be found and chosen.
Takeaway
A practical roadmap to help you optimise your brand's AI presence.