5 Driving the Behavior Change
5.1 The behaviors you're trying to create
- Reps send polished, structured follow-up emails after every customer call. Not when they have time. Every time.
- Dedicated artifacts show up in deals and accounts that didn't exist before. One-pagers, battle cards, competitive positioning docs, tailored slide decks.
- Reps analyze their deals against the sales framework after every call, not just at formal pipeline reviews. Gaps are identified in real time.
- Mid-tier performers produce customer-facing materials at a quality level above their historical output. The floor rises.
- Individuals share AI-generated resources, prompts, and learnings with the broader team without being asked. Knowledge flows sideways, not just up.
- CSMs produce structured account health summaries after every meaningful customer interaction. Internal visibility on the book of business improves.
- Marketing produces content at the brand's quality bar with increased volume, without sliding into generic AI output.
- Team members proactively communicate wins and efficiency gains from AI usage. The conversation about AI is happening openly, not quietly.
- People are burning through their AI credits and asking for more. Usage is visible and growing.
- Managers are using AI in their own workflows and referencing it openly.
5.2 Permission structures
The single most powerful thing a revenue leader can do in Era 1 is lead from the front. Use AI in your own work. Produce something with it. Share that work with your team and be open about how you made it. When you stand in front of your team and say "I used Claude to help me build this QBR deck" or "I drafted this board update with AI and then refined it," you do more for adoption than any policy or mandate. You kill the stigma. You show that AI-assisted work is still your work.
This needs to be visible and repeated. Not once in an all-hands and then never again. In team meetings, in Slack, in one-on-ones. But maintain a high quality bar when you do it. If you ship sloppy AI-generated work and point at it as an example, you've just confirmed every skeptic's concern. Ship great work that was made faster and better with AI. That's the proof point.
Do not tie AI usage to compensation. This is about performance, not about tool adoption. If someone is using AI and their numbers are improving, the comp plan already rewards that. If someone is using AI and their numbers aren't improving, the answer is better experimentation, not a bonus for trying. The results should speak. If they don't, the team needs more support on how to use the tools effectively, not a financial incentive to open the app.
Make it safe to try and fail. Make it safe to share what didn't work. Make it clear that the only wrong move is refusing to engage at all.
5.3 Enablement systems
The most effective enablement in Era 1 is peer-to-peer sharing, not top-down training.
Create a space for your team to share prompts that work. A Slack channel, a shared doc, a recurring fifteen-minute slot in the team meeting. The format matters less than the habit. When someone figures out a prompt that produces a great follow-up email or a sharp deal analysis, the whole team should have access to it within a day. This is how the productivity gain stops being siloed to individuals and starts becoming organizational knowledge.
Share failures too. The prompts that produced garbage. The times AI hallucinated something confidently wrong. The experiments that wasted time. Most orgs only share wins. The failures are just as valuable because they prevent others from making the same mistakes and they normalize the reality that AI is not magic. Not every attempt works. That's fine. The learning compounds.
Invest in platform education. Tools like Claude have features that most people never discover on their own. Claude Projects is a strong example. A Project lets you upload reference materials, style guides, frameworks, and past examples that persist across conversations. Instead of pasting your MEDDPICC framework into every new chat, you set it up once in a Project and it's always there. This is the difference between using AI as a throwaway chat and using it as a repeatable workflow. But Projects, memory, and custom instructions are not intuitive features. People need to be shown how they work and why they matter. Build lightweight guides or run quick walkthroughs that help your team get past the basics and into the features that create real leverage.
Keep enablement ongoing, not one-off. A single training session in week one will be forgotten by week three. A weekly prompt share in Slack and a monthly fifteen-minute workflow demo in a team meeting create the kind of steady reinforcement that actually changes habits.
Technology deployment happens in the first week. Behavior change takes 60 to 90 days. Do not confuse the two timelines. Give permission publicly and repeatedly. Celebrate early wins visibly. Enable the cautious middle through peer-to-peer sharing, not top-down training. Then hold people accountable for engaging. The people timeline is always longer and it’s the one that actually matters.
5.4 Measurement
The clearest leading indicator of Era 1 adoption is license usage. Who has an active AI subscription and how much are they using it? The people requesting additional credits or hitting usage limits are your power users. The people who haven't logged in after two weeks are the ones who need attention.
In the first 30 days, look at adoption breadth. What percentage of the team is using AI at least once a week? You don't need everyone to be a power user yet. You need the majority to have tried it and found at least one workflow where it helps. If adoption is below 50% after a month, your permission structures or enablement aren't working and you need to diagnose why.
In the first 60 days, look at output changes. Are follow-up emails going out faster and more consistently? Are new artifacts showing up in deals that weren't there before? Are CSMs producing more structured account reports? You can observe this without sophisticated tracking. Pull up a handful of recent deals and compare the quality and volume of customer-facing materials to what the same reps were producing three months ago.
In the first 90 days, look at performance indicators. Are the reps who adopted AI early showing improvement in their numbers? Pipeline velocity, conversion rates, average deal size, quota attainment. These are lagging indicators but by 90 days you should see directional movement. Compare your early adopters against your non-adopters. If there's a gap opening up, you have the proof you need to push harder on org-wide adoption.
Track knowledge sharing as a behavior metric. How many prompts or workflows have been shared in your team channel? How many people are contributing versus just consuming? The transition from a few people sharing to many people sharing is a signal that the culture is shifting from individual experimentation to collective learning.
5.5 Resistance patterns
The most common resistance in Era 1 is "I don't trust the output."
Someone tried AI once. It hallucinated a fact. It produced something generic. It missed the point of what they were asking for. They closed the tab and decided it wasn't ready. This is an education and enablement problem, not a technology problem. The output quality from AI is directly tied to the quality of the input and the skill of the prompter. Someone who pastes three bullet points into ChatGPT and gets back a mediocre email has had a very different experience from someone who pastes a full call transcript with specific instructions and gets back a sharp, structured follow-up. Help people understand that the bad experience they had was a starting point, not a verdict. Show them what good looks like. Pair them with someone who's getting great results and let them see the difference in approach.
The second common resistance is fear. People are scared that AI will take their job. They won't say it that directly. They'll say things like "I don't see how this helps" or "I prefer to do things my way" or "my clients expect a personal touch." Underneath all of those statements is the fear that if a machine can do their work, they become expendable.
This is an emotional problem, not a logical one. Responding with data about productivity gains doesn't address the fear. You need to reframe what AI means for their role. This is not about replacing what you do. This is about giving you the capacity to do more of it. You can manage more accounts. You can work more deals. You can hit a higher number. The quality you already deliver stays the same. The volume of what you can deliver goes up. Frame AI as a tool that makes their existing skills more valuable, not less.
The third resistance is distrust in leadership's motives. Some people hear "we're implementing AI" and translate it to "we're looking for ways to cut headcount." This is not an irrational fear. Some companies are doing exactly that. If your intent is genuinely to build capacity and not to reduce staff, say so explicitly and back it up with actions. Don't announce an AI initiative and a hiring freeze in the same quarter. Don't celebrate AI productivity gains and then restructure the team. The gap between what you say and what you do is where trust lives or dies.
The real concern at Era 1 is not any of these resistance patterns. It's the absence of adoption entirely. An org where people refuse to engage with AI is an org that will fall behind competitors who embrace it. The concern should never be that AI adoption is happening too fast. It should be that it's not happening fast enough. Address every legitimate concern with a real answer. Remove every barrier you can find. Then hold people accountable for engaging, experimenting, and moving forward.