I've been working on AI adoption since early 2024, and I've tried just about every strategy in the playbook: vendor onboarding, champion-led pilot groups, initiative-led adoption through company OKRs, department-level mandates, board-level mandates, forming an internal community, and hands-on labs.
What actually drove adoption? The hands-on onboarding: sitting with individuals, working through their biases and doubts, and replacing assumptions about what the tool is capable of with firsthand experience.
The Only Strategy That Moved the Needle
Every failed adoption attempt I've seen traces back to the same moment: an engineer tries the tool, gets a mediocre result, and writes it off. "I tried it. It outputs garbage." From that point forward, no amount of Slack channels, video tutorials, or executive push will change their mind.
The only thing that consistently broke through was sitting down 1-on-1 with the engineer and unblocking their actual work. Not a demo on a sample project. Their codebase, their problem, their context.
In those sessions, I did three things:
Replaced zero-shot with few-shot prompting. Most engineers default to typing a vague request and judging the tool by its first response. Teaching them to provide examples and context (even just two or three) immediately changed the quality of output from "garbage" to "useful starting point."
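To make the contrast concrete, here's a minimal sketch of a zero-shot prompt versus a few-shot one. The repo details, decorator names, and conventions in the prompt text are all hypothetical; the point is the structure: worked examples plus context before the task.

```python
# Zero-shot: a vague request with no examples or context.
zero_shot = "Write a retry decorator."

# Few-shot: the same task, preceded by context and worked examples.
# (Service name, decorator, and logging convention below are made up.)
few_shot = """You are working in our payments service (Python 3.11).

Example of our existing decorator style:

    @retry_on(ConnectionError, attempts=3)
    def charge(card): ...

Example of our logging convention:

    logger.warning("retrying %s (attempt %d)", fn.__name__, attempt)

Task: Write a retry decorator for TimeoutError that follows the two examples above.
"""

def example_count(prompt: str) -> int:
    """Rough heuristic: how many worked examples does a prompt carry?"""
    return prompt.count("Example of")

print(example_count(zero_shot), example_count(few_shot))
```

In the sessions, simply showing an engineer their own prompt rewritten this way, against their own code, did more than any tutorial.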
Generated context files tailored to their repo. Generic prompts produce generic output. Walking engineers through creating context files specific to their codebase, architecture patterns, and team conventions transformed the tool from a novelty into a productivity lever.
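As a sketch of what such a context file can contain (the repo name, stack, and conventions below are entirely hypothetical), something this short is often enough to move output from generic to on-pattern:

```markdown
# Context for acme-billing (illustrative example, not a real repo)

## Architecture
- Django monolith; services live in `billing/services/`, one class per file.
- All money amounts are `Decimal`, never `float`.

## Conventions
- New endpoints get a serializer, a service method, and a pytest file mirroring the path.
- Prefer `select_related`/`prefetch_related` over raw SQL.

## Don't
- Don't touch `legacy/` without a ticket reference in the commit message.
```

The format matters less than the content: architecture, conventions, and hard constraints, written down where the tool can see them on every request.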
Made human review non-negotiable. This was my biggest early mistake. I assumed engineers understood that AI output requires review and refinement. When I failed to make this explicit, the same engineers came back a week later saying the tool was useless, because they'd treated raw output as finished code or accepted AI-generated context as-is.

The uncomfortable takeaway is that AI adoption at this stage is a 1-on-1 coaching problem, not a content distribution problem. Guides don't fix it. Videos don't fix it. Mandates definitely don't fix it.

At this stage of AI usage across our industry, I think a key gap in engineering onboarding and standards is that prompt and context engineering principles aren't taught or upheld at a base level. Thank goodness for spec-driven development and some of the standards forming around it, which I'll cover in a later post.
How to Actually Measure AI Adoption
Your board and execs want a number, and that typically starts with adoption rate: "We bought 200 seats, how many are active?" That's a valid question, but seat utilization tells you almost nothing about whether AI is making your engineering org better.
While anecdotes from engineers ("this tool saved me 5 hours this week") are valuable for assessing sentiment, they're rarely sufficient for justifying ROI. A healthy engineering organization should already be tracking DORA metrics, and each dev team should be running retros over them at least monthly, if not bi-weekly.
Some specifics to track:
Cycle Time Reduction - Time from first commit to production. Is it shrinking? And can you trace any improvement back to the agentic workflows you have in place?
PR Volume & Velocity - Are engineers merging 20% more code (as seen in high-performing teams), or just producing more noise?
Deployment Frequency - How often code is successfully deployed.
Change Failure Rate - The critical indicator. If CFR drops, AI is improving quality. If it rises, you're shipping faster but worse. This assumes you have proper alerting and monitoring in place, and that the human review process for AI-generated code didn't skip those steps.
Add two AI-specific metrics: the ratio of review comments to AI-generated lines of code (are reviewers catching more issues?), and a 30-day regression rate comparing AI-assisted vs. human-only code paths.
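The last two metrics are easy to compute once you have the data. Here's a minimal sketch; the record fields are illustrative, and you'd adapt them to whatever your CI/CD and code-review systems actually expose:

```python
# Sketch: computing change failure rate and a review-comment ratio.
# Field names are hypothetical; map them to your own deploy and PR data.
from dataclasses import dataclass

@dataclass
class Deployment:
    caused_incident: bool  # did this deploy trigger a rollback or incident?

@dataclass
class PullRequest:
    ai_generated_lines: int
    review_comments: int

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Fraction of deployments that led to an incident or rollback."""
    if not deploys:
        return 0.0
    return sum(d.caused_incident for d in deploys) / len(deploys)

def review_comment_ratio(prs: list[PullRequest]) -> float:
    """Review comments per 100 AI-generated lines (are reviewers engaging?)."""
    ai_lines = sum(p.ai_generated_lines for p in prs)
    if ai_lines == 0:
        return 0.0
    return 100 * sum(p.review_comments for p in prs) / ai_lines

# Toy data: 1 of 4 deploys failed; 8 review comments over 500 AI lines.
deploys = [Deployment(False), Deployment(True), Deployment(False), Deployment(False)]
prs = [PullRequest(400, 6), PullRequest(100, 2)]

print(f"CFR: {change_failure_rate(deploys):.0%}")
print(f"Review ratio: {review_comment_ratio(prs):.1f} comments per 100 AI lines")
```

Track both as trends, not snapshots: a falling review ratio alongside a rising CFR is the clearest sign that raw AI output is slipping through review.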
Picking the Right Pilot Group
I've run pilots both ways: entire teams, and handpicked individuals spread across the org. My default is now the latter, and it's not close.
Take 1-2 volunteers from each team. Hand-raisers first; they're the ones who will actually make the pilot succeed. This approach solves two problems at once: teams keep delivering on their roadmap without interruption, and the perceived exclusivity of being selected for the pilot (as much as I hate to admit it) drives significantly higher engagement than a blanket rollout.
Your pilot group becomes your internal proof. When their teammates see them shipping faster and with fewer issues, curiosity does the rest.
Adoption Doesn't Stop at Onboarding
Last but not least, AI adoption doesn't stop once the tool has been onboarded. Continued reinvestment in training, education, demos, workshops, and lunch-and-learns is a key part of sustained success.
Why? Because organizational maturity in AI or tooling adoption isn't a binary state. As your engineering teams' capabilities grow, the strategies that worked at 20% adoption won't work at 80%. Someone needs to own that evolution, whether it's a platform engineering group, a center of excellence, or a dedicated internal role.
Skip this, and you'll plateau at early-adopter levels and never get the ROI your board is asking about.
What's worked (or not worked) for you? I'd love to hear how other engineering leaders are approaching this, and whether I'm missing something critical across adoption strategies.

