Decisions You Need to Make
Era 1 decisions are straightforward. That is a feature, not a limitation. The complexity comes later. The mistake revenue leaders make here is overthinking the technology and underthinking the people. The decisions that matter most at this stage are about access, permission, and observation.
3.1 Systems decisions
3.1.1 Current state audit
The systems audit for Era 1 is not about your AI infrastructure. You don't have one yet. It's about the systems your team already uses every day and whether the data inside them is ready to be useful when someone copies it into an AI chat.
Start with your CRM. Not whether you have one, but what actually lives in it. How complete are your opportunity records? Are deal stages accurate or are they a fiction that gets cleaned up before forecast calls? Are contacts associated with the right accounts? Are notes from calls and meetings logged consistently or sporadically? The quality of what your team gets out of AI is directly tied to the quality of what they put in. If a rep copies their deal notes into Claude and those notes are thin, the output will be thin.
Look at your call recording and transcription setup. Does your team have a tool that captures meeting transcripts? Tools like Granola, Gong, or Chorus give your reps raw material they can paste into AI immediately after a call. If your team is still relying on handwritten notes or memory, the copy-paste workflow has nothing good to copy. This is the single most impactful system to have in place before Era 1 starts producing real value.
Look at your email. Are reps using a shared email platform where conversations are visible and searchable, or is everything locked in individual inboxes? This matters less for Era 1 than it will for Era 2, but the habits you build now determine the foundation you have later.
The honest truth is that the systems audit for Era 1 is really a data hygiene audit. Your systems are probably fine. Your data probably isn't. The gap between what your team should be logging and what they actually log is the gap that limits how useful AI can be, even in a simple copy-paste workflow.
3.1.2 Changes required
You don't need new systems for Era 1. You need your existing systems to be used properly.
The most important change is getting meeting transcription in place if you don't already have it. This is the highest-leverage system decision in Era 1 because it gives every rep a rich, detailed record of every customer conversation that they can immediately use as AI input. The difference between a rep who pastes a full transcript into Claude and one who pastes bullet points from memory is enormous. Invest in this first.
The second change is reinforcing CRM hygiene. Not because AI is reading your CRM yet, but because the habits your team builds now are the foundation for Era 2 and Era 3 where AI will be reading your CRM directly. Frame this as future-proofing. Every field your reps fill out today is training data for the AI workflows you'll build tomorrow. That framing makes hygiene feel strategic instead of bureaucratic.
The third change is establishing basic guidelines for what data can go into AI tools. Work with your security and legal teams to define what's acceptable. Can reps paste call transcripts? Customer names? Revenue figures? Deal terms? Get clear answers to these questions and communicate them simply. The goal is to remove ambiguity so your team doesn't have to guess what's allowed. If they have to guess, your most cautious people will avoid AI entirely and your most aggressive people will paste everything without thinking.
3.1.3 Sequencing
First, get meeting transcription in place. This is a quick win that delivers immediate value. Most tools are easy to deploy and require minimal configuration. Your team starts getting richer AI input within a week.
Second, run a CRM hygiene baseline. Have your RevOps team pull a simple report on data completeness across key fields. Opportunity stage accuracy. Contact coverage. Notes logged per deal. You don't need to fix everything. You need to know where you stand so you can set expectations and track improvement.
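The baseline report doesn't need tooling beyond what RevOps already has. As a minimal sketch, here is what the fill-rate calculation looks like over a CRM export, assuming records arrive as dictionaries (say, via `csv.DictReader`); the field names are illustrative, not your CRM's actual schema:

```python
from collections import Counter

# Illustrative field names -- substitute the key fields from your own CRM.
KEY_FIELDS = ["stage", "close_date", "primary_contact", "last_note"]

def completeness_report(rows):
    """Return the fill rate (0-1) for each key field across opportunity rows."""
    counts = Counter()
    total = 0
    for row in rows:
        total += 1
        for field in KEY_FIELDS:
            if row.get(field, "").strip():  # count only non-empty values
                counts[field] += 1
    return {field: counts[field] / total for field in KEY_FIELDS} if total else {}

# Example rows; in practice these would come from a CRM export.
sample = [
    {"stage": "Proposal", "close_date": "2025-03-01", "primary_contact": "", "last_note": "Demo recap"},
    {"stage": "Discovery", "close_date": "", "primary_contact": "J. Kim", "last_note": ""},
]
report = completeness_report(sample)
# report -> {'stage': 1.0, 'close_date': 0.5, 'primary_contact': 0.5, 'last_note': 0.5}
```

A handful of fill-rate numbers per field is enough to set expectations and re-run the same report monthly to track improvement.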
Third, publish your data guidelines. What's allowed in AI tools, what isn't. Keep it to one page. Make it permissive enough that people actually use AI and specific enough that your security team can live with it. This can happen in parallel with the first two.
The whole sequence should take two to four weeks. This is not a major systems overhaul. It's cleanup and access.
3.2 Technology decisions
3.2.1 Current state audit
The technology audit for Era 1 starts with one question. What AI tools is your team already using?
Some of them will tell you. Others won't. The reality in most revenue orgs is that your early adopters have already signed up for ChatGPT or Claude with their personal email and are using it daily. They might be doing this without your knowledge and without your IT team's awareness. This isn't malicious. They found something that makes them better at their job and they didn't want to wait for a procurement process.
Your audit needs to surface this. Ask directly. Send a simple survey or raise it in a team meeting. How many people on this team are using an AI tool at least once a week? Which tools? What for? You will be surprised by the answers. Some people are using it for everything. Others haven't touched it. The distribution tells you a lot about where you are and where the energy is.
Also look at adjacent AI-powered tools that might already be in your stack. Your marketing team might be using Jasper or Writer for content. Your reps might be using an AI email tool. Your ops team might be experimenting with AI features inside your existing platforms. Map what exists, even if it's scattered and unofficial. You need the full picture before you make decisions about what to formalize.
The security posture matters even at this stage. Where is customer data being pasted into AI tools? Are those tools enterprise-grade with appropriate data handling, or consumer products with no contractual protections? You don't need a full AI governance framework yet, but you need to know the risk surface.
3.2.2 Changes required
Pick your core AI tool and get everyone licensed. This is the central technology decision in Era 1 and it's genuinely simple. Claude and ChatGPT are the two serious options for a general-purpose AI assistant. Claude tends to be more enterprise-friendly on data privacy and security. ChatGPT has broader consumer awareness. Either works. Pick one, make it the standard, and give everyone on the team a paid seat.
The paid seat matters. Free tiers are limited enough that people will hit walls and stop using the tool. If you're serious about Era 1 adoption, the cost of licenses is trivial compared to the productivity gain. And tracking who's using their license and who's asking for additional credits gives you a direct signal of adoption and engagement. The people who burn through their credits are your power users. Pay attention to them.
Beyond the core AI tool, evaluate a small set of adjacent AI-powered tools for specific workflows. Meeting transcription is the most important, as covered in the systems section. Email productivity tools that use AI to draft or categorize messages can reduce friction for reps who live in their inbox. Content generation tools can accelerate your marketing team's output. Keep this set small and focused. You are building a collection of disconnected tools, not an integrated platform. That's fine for Era 1.
Do not try to build custom integrations or connect AI to your internal systems yet. That is Era 2 work. The technology decision in Era 1 is purely about giving individuals access to capable tools and letting them figure out how to use them in their own workflows.
3.2.3 Sequencing
First, pick and deploy your core AI tool. Get licenses provisioned. Make sure everyone knows how to access it and that they have explicit permission to use it. This should take days, not weeks.
Second, formalize any shadow AI usage that's already working. If reps are already using tools that are producing results, don't fight it. Either approve those tools or migrate those users to your chosen standard. Forcing people off a tool they love onto one they didn't choose creates unnecessary friction. If the shadow tool is a security concern, explain why and provide the sanctioned alternative immediately. Don't create a gap where they have nothing.
Third, roll out adjacent tools based on team need. Start with meeting transcription if it's not already in place. Layer in email or content tools based on where you see the most manual effort in your team's daily workflow. Deploy one at a time. Give each tool a few weeks to settle before adding another.
The full technology stack for Era 1 should be in place within a month. It's a short list of disconnected tools. The harder work is what comes next in the people decisions.
3.3 People and org decisions
3.3.1 Current state audit
The people audit is where you actually learn something. The technology decisions in Era 1 are obvious. The people dynamics are not.
Start by observing the distribution of AI adoption across your team. You will find three groups. The first group has already figured it out. They're using AI daily, producing better work, and probably telling anyone who will listen about it. The second group is open to it but hasn't started yet, either because they don't have access, don't know how, or haven't found the time. The third group is resistant. They might have tried it once and dismissed it, or they might be avoiding it on principle.
The relative size of these three groups tells you how much work you have ahead of you. If your early adopter group is 20% or more of the team, you have natural momentum to build on. If it's under 10%, you need to invest more in permission and enablement before you'll see organic adoption.
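If you run the adoption question as a survey, the sizing rule above is a one-liner over the responses. A minimal sketch, assuming each respondent is tagged with one of three illustrative group labels (use whatever categories your survey produces):

```python
from collections import Counter

def adoption_shares(groups):
    """Return each adoption group's share of the team as a fraction of responses."""
    counts = Counter(groups)
    total = len(groups)
    return {group: counts[group] / total for group in counts}

# Illustrative survey responses; labels and data are assumptions for the example.
responses = [
    "adopter", "open", "open", "resistant", "open",
    "adopter", "open", "open", "resistant", "open",
]

shares = adoption_shares(responses)
# Apply the thresholds from the text: 20%+ adopters means natural momentum.
momentum = "organic" if shares.get("adopter", 0) >= 0.20 else "invest in enablement"
# shares -> {'adopter': 0.2, 'open': 0.6, 'resistant': 0.2}; momentum -> 'organic'
```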
Look at who specifically is in each group. Are your top performers the early adopters, or are they in the resistant group? Both happen. If your best reps are already using AI, you can use their results as proof to pull others along. If your best reps are the ones resisting, you have a harder problem because the rest of the team takes cues from them.
Pay attention to the internal communication patterns. Who is sharing AI wins with the team? Who is producing shared resources like prompt templates or workflow tips? These people are your force multipliers regardless of their title or seniority. They are building capacity for the org, not just for themselves. Identify them and retain them. They will be critical to everything that comes after.
Look honestly at your management layer. Are your frontline managers using AI themselves? If managers aren't using AI, they can't coach their teams on it, can't recognize good AI-augmented work, and can't model the behavior you're trying to create. Manager adoption is a leading indicator of team adoption.
Finally, assess your incentive structure. How does your team get paid and promoted? If comp plans reward individual deal execution and nothing else, AI adoption will stall because there's no reason to change what's working. If you want people to experiment, share learnings, and help others adopt, you need to recognize and reward those behaviors. This doesn't require a comp restructure at Era 1. But you need to know what your current incentives are actually rewarding before you can understand why people are or aren't changing their behavior.
3.3.2 Changes required
The biggest change required in Era 1 is not a structural one. It's a cultural one. You need to make AI adoption feel like a competitive advantage, not a compliance requirement.
Start by giving explicit, public permission. Say it in a team meeting. Say it in writing. Your team has permission to use AI tools to improve their work. Say it again a month later. People need to hear permission more than once before they believe it, especially in organizations where previous technology mandates came with strings attached or disappeared after a quarter.
Create visibility around wins. When someone uses AI to produce a great follow-up, a strong one-pager, or a sharp competitive analysis, make it visible. Share it in Slack. Mention it in team meetings. Not as a mandated show-and-tell, but as a genuine celebration of better work. The point is to connect AI usage to results so the competitive instinct in your team does the rest.
For your early adopters, give them room to run and make it easy for them to share what they're learning. These are the people who will pull the rest of the team forward. If they want to create a shared doc of useful prompts, support it. If they want to run an informal session showing how they use AI in their workflow, give them the time. This kind of organic enablement is more effective than formal training because it comes from a peer, not from leadership.
For your resistant group, distinguish between two types. Some people have legitimate concerns. Security worries. Uncertainty about what's allowed. Discomfort with a new tool. These people need clarity, access, and a low-pressure on-ramp. Pair them with an early adopter. Show them one specific workflow that saves time. Make the first step small and concrete.
Then there are people who are blocking progress without offering a path forward. They raise concerns but don't work toward solutions. They find reasons not to adopt but don't engage with the reasons to try. These people are a real problem because they slow down the entire org. The concerns they raise may be valid, but organizations that refuse to adapt to AI will fall behind. Your job is to address every legitimate concern with a real answer and then hold people accountable for moving forward.
At the management level, make sure your frontline managers are using AI themselves. This is non-negotiable. A manager who doesn't use AI cannot coach a team on AI. Get managers on the tool first. Have them use it in their own workflow for a week before you ask them to support their teams. They need firsthand experience to be credible and helpful.
3.3.3 Sequencing
First, give permission. Publicly, clearly, repeatedly. This costs nothing and unblocks your cautious middle group immediately. Do this the same week you deploy licenses.
Second, watch. Give it two to three weeks. See who adopts, who doesn't, and what early results look like. Resist the urge to push too hard too early. You are gathering information about your team that will inform everything you do next.
Third, celebrate wins. Once you have real examples of people getting results with AI, make those examples visible. This should start within the first month. The wins don't have to be dramatic. A better follow-up email, a faster proposal, a battle card that didn't exist before. Small, specific, connected to the number.
Fourth, enable the middle. After the early wins are visible, offer lightweight enablement for the people who are open but haven't started. Pair them with early adopters. Share specific prompts and workflows. Keep it practical and low-pressure. This is not a training program. It's peer-to-peer knowledge sharing with leadership support.
Fifth, address the resistance. By this point you'll know who's struggling with legitimate barriers and who's blocking without engaging. Work with the first group directly. For the second group, be clear about expectations. AI adoption is a strategic priority. Concerns will be heard and addressed. But standing still is not an option.
The full people sequence takes 60 to 90 days to see meaningful behavior change across the team. Technology deployment happens in the first week. Behavior change takes two to three months. Do not confuse the two timelines. The people timeline is always longer and it's the one that actually matters.