Train Your Team in AI — By Doing, Not Overthinking

From boardrooms to breakrooms, leaders are wrestling with how to unlock AI’s promise without overwhelming teams.

HR and People teams sit at the frontlines of this shift. They’re not just evaluating tools; they’re shaping how employees learn, experiment, and grow confident with entirely new ways of working. The stakes are high: get it right, and AI becomes a multiplier for productivity, creativity, and impact. Get it wrong, and hesitation or fear can freeze progress before it begins.

PeopleTech Partners sat down with Nelson Spencer, PTP Advisor & AI Transformation Thought Leader, to explore what it really takes to build confidence with AI. His perspective is refreshingly simple: don’t overthink it. Start small, try often, and create space for your team to learn by doing.

Start with real work, not theory
For individuals, begin where time already goes: email, docs, calendars, recurring communication. Treat AI as a thought partner to organize thinking, test assumptions, and accelerate first drafts. The key is supplying rich context so it can help you reason, not just rephrase. As Spencer puts it:

“The biggest thing for AI is that it needs as much context as possible so that it can… give you options.” 
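To make that concrete, here’s an illustrative before-and-after (our example, with invented details, not one from the interview). The second version supplies the role, audience, and constraints that let AI reason with you instead of just rephrasing:

   Thin: “Write an announcement about our new PTO policy.”

   Rich: “You are an internal communications lead at a 200-person software
   company. Draft a 150-word email announcing that unlimited PTO replaces
   our accrual system on March 1. Tone: warm and direct. Cover what changes,
   what stays the same, and where to send questions. Give me two options
   that differ in formality.”

Every specific in the rich version (headcount, date, word count) is a placeholder for your own details.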

For teams, the first “tool” isn’t a model; it’s psychological safety. Normalizing experimentation and removing the fear of “wrong” prompts lets everyone build skill in public, and that is what drives effective learning. “I always start with psychological safety…” Spencer advises, “it’s important to build the right environment of trust that we’re all kind of going through this together… and to help employees understand that you’re not going to make a mistake if you… put the wrong prompt in.”

Use constraints to beat overthinking
Beginners stall on blank pages and “perfect prompt” pressure. Time and prompt limits flip that script: a short window and a fixed prompt count force action, reveal what matters, and make iteration the default. Spencer encourages organizations to create boundaries for skill building: “the constraints are encouraging you to not overthink it, not to feel like you have to have this perfect prompt… getting started is the most important thing.”

Run the Prompt Sprint (Team Edition)
Pick one familiar task (job description, interview recap, policy draft). Give the team mock inputs (e.g., hiring-manager notes). Set 15–20 minutes and cap prompts (e.g., six). Ship a first pass, compare, discuss what changed the output (role assignment, tone, context, formatting), then iterate. Repeat weekly.
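If it helps to picture the kickoff, here’s one way a sprint brief might look (every detail below is invented for illustration):

   Task: Draft a job description for a revenue operations analyst
   Mock inputs: the hiring manager’s bullet notes (five responsibilities,
   three must-have skills, one nice-to-have), salary band, team size
   Timebox: 20 minutes
   Prompt cap: 6
   Deliverable: one first-pass JD per person, pasted into a shared doc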

What happens? People stop theorizing and start seeing cause and effect in real time, and the drivers of output quality become obvious: role assignment, tone, context, and formatting. As Spencer explained:

“Giving AI like a role to play… actually literally makes a difference like when you put that at the beginning versus just saying write a job description.”

That unlock builds confidence and a shared language for quality.
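Here’s a quick sketch of the role effect Spencer describes (again, our invented example):

   Without a role: “Write a job description for a customer success manager.”

   With a role: “You are a senior recruiter at a B2B SaaS company. Write a
   job description for a customer success manager who will own renewals for
   mid-market accounts. Keep it under 400 words and lead with impact, not
   requirements.”

The second prompt gives the model a point of view to write from; the first leaves it guessing.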

Make it stick with lightweight infrastructure
As skills rise, Spencer suggests creating a prompt library for repeated tasks (announcement emails, JD templates, policy rollouts). Standardize tone, structure, and inputs so different authors produce consistent outputs, and speed improves without sacrificing voice. Layer in basic model awareness (which tasks benefit from more context vs. quick hits) so teams pick the right “vehicle” for the job.
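A library entry doesn’t need special tooling; a few labeled fields in a shared doc will do. One possible format (ours, not a prescribed standard):

   Name: Policy rollout email
   When to use: announcing any company-wide policy change
   Inputs to supply: policy name, effective date, what changes, FAQ link
   Template: “You are an internal communications lead at [company]. Draft
   a [length] email announcing [policy]…”
   Tone: warm, plain language, no jargon

The “when to use” label does as much work as the template itself; it’s what keeps different authors consistent.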

Try this exercise this month:

  1. Run a 20-minute sprint on a real task your team already does.

  2. Debrief what moved quality (role, tone, details, formatting, examples).

  3. Save the winning prompts into a shared doc; label when to use each.

  4. Schedule a weekly rep. Confidence grows with cadence.

Key Lessons Learned
Confidence grows with reps, not theory. Give people a real task, a timer, and a safe space to try — and they’ll level up fast. Or, as Spencer sums it up:

“The most important thing is literally to… get started… you can actually get pretty decent at it pretty quickly.”
