Stop Planning Your AI Strategy and Start Running Experiments
Earlier this year, I watched a Fortune 500 financial services company spend $4 million and eight months on their AI transformation. They brought in a major consulting firm. They ran a pilot program. They trained 200 people. They built a governance framework.
When I asked the CTO how many teams were actually using AI in production six months later, he said "maybe three, and they were already doing it before we started."
This isn't unusual. Many enterprise AI transformations struggle, not because the technology doesn't work, but because large companies are designed to prevent exactly the kind of rapid change that AI requires.
Here's what I've learned: the organizations that succeed don't treat AI as a technology rollout. They treat it as an organizational redesign.
What Actually Blocks Enterprise AI
Before jumping into solutions, let's think about what makes enterprises move slowly in the first place. These aren't bugs - they're features designed for a different era.
Your approval chains assume mistakes are expensive and hard to reverse.
A 15-person approval process made sense when deploying new infrastructure meant $2 million in hardware and 18 months of work. If you made a mistake, you were stuck with it. So you built process to prevent mistakes.
AI is the opposite. The cost to try something is near zero. Most ideas should fail fast. But your approval process still treats every experiment like you're buying a data center. By the time legal, security, compliance, and three VP layers sign off, the problem you were solving has changed.
Your incentives reward visible success, not learning.
Nobody gets promoted for running ten experiments where eight failed. People do get promoted for delivering predictable results on known problems. So everyone gravitates toward safe projects with guaranteed ROI.
The problem is that AI's biggest wins come from trying things nobody has tried before. You can't know if an LLM can replace your customer service tier-one routing until you test it. But testing means risking a visible failure. So teams stick to small, safe projects that don't move the needle.
Your knowledge lives in silos because that's how you're organized.
Your procurement team figured out how to use AI to process invoices 60% faster. Your legal team built a contract analysis tool. Your sales team automated proposal generation. None of them know what the others are doing.
This happens because enterprises are organized by function, not by capability. There's no forcing function to share knowledge across divisions. Each silo optimizes locally, and the company learns slowly.
Your security team isn't wrong to be nervous.
They're concerned because AI creates new attack surfaces nobody fully understands yet. What happens if an employee accidentally pastes customer PII into ChatGPT? What if a model gets poisoned? What if someone jailbreaks your internal chatbot?
These aren't hypothetical. They're real risks. But the current response - blocking everything until we have perfect answers - means you move at 1% speed while your competitors move at 100%. There's a better middle ground.
What I've Seen Actually Work
The companies making progress share three characteristics. They're not moving faster by ignoring the constraints. They're redesigning how they work within them.
They create protected sandboxes with clear boundaries
A manufacturing company I know set up what they called "the fast lane." Any AI project that met three criteria got automatic approval:
- Uses only public or synthetic data
- Runs in an isolated environment
- Has a sunset date (expires after 90 days unless renewed)
This works well for two reasons. First, it removes the approval bottleneck for 80% of experiments. Second, it forces teams to think about risk upfront. You can't use customer data? Fine, generate synthetic data or use public datasets. That constraint actually makes you more creative.
The key is that the boundaries are crystal clear. Teams know exactly what they can do without asking. Security knows exactly what to monitor. Nobody is guessing.
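To make that concrete, here's a minimal sketch of what the automatic gate might look like, assuming the three criteria above and the 90-day sunset; the field names and review routing are hypothetical, not how this company actually implemented it:

```python
# Hypothetical sketch of the "fast lane" gate: a project that meets all the
# criteria is auto-approved with a 90-day sunset; everything else goes to the
# normal review queue. Field names are illustrative, not a real system.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ExperimentProposal:
    name: str
    uses_only_public_or_synthetic_data: bool
    runs_in_isolated_environment: bool

def fast_lane_decision(proposal: ExperimentProposal) -> dict:
    if proposal.uses_only_public_or_synthetic_data and proposal.runs_in_isolated_environment:
        # Auto-approved, but with a sunset date: the sandbox expires unless renewed.
        return {"approved": True, "expires": date.today() + timedelta(days=90)}
    # Anything touching real data or shared infrastructure takes the normal path.
    return {"approved": False, "route_to": "standard security and compliance review"}
```

The point isn't the code; it's that the decision is mechanical. If approval requires judgment calls, you're back to waiting on a committee.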
Where this can stumble: When you make the boundaries too restrictive. One bank limited sandboxes to "read-only database access." Sounds safe, but it meant you couldn't test anything that needed to write data. So nobody used it. The boundaries need to be tight enough to manage risk, but loose enough to be useful.
They embed AI people in business teams, not in a center of excellence
The traditional enterprise move is to create an "AI Center of Excellence" - a central team that everyone comes to for help. This rarely scales well.
Here's why: if you have 50 business units and 20 AI engineers, the AI team becomes a bottleneck. Projects queue up. Each one takes weeks to start. People get frustrated and build their own shadow solutions.
The companies that move fast do the opposite. They embed one or two AI-savvy engineers directly into large business units. These people report to the business unit, sit with them, understand their problems deeply. They're not "doing AI projects" - they're part of the team solving business problems, and AI is just one tool they use.
Then you keep a small central team (maybe 5-8 people) who set standards, run the governance framework, and help the embedded people when they get stuck. But the central team isn't doing the work. They're enabling others to do it.

Where this can stumble: When you embed junior people. The embedded person needs to be senior enough to make decisions independently and push back when an idea doesn't make sense. If you embed someone who needs constant guidance, you just moved the bottleneck closer to the problem.
They measure adoption, not ROI (at first)
Every executive wants to see ROI from AI. That's reasonable. But measuring ROI too early can kill momentum.
Here's the dynamic: AI projects fail a lot. Maybe 7 out of 10 don't deliver value. If you require teams to prove ROI before scaling, you're essentially asking them to pick winners before they know what works. So they either overstate the numbers or only propose projects with guaranteed returns (which are usually small, incremental improvements).
The companies making progress track different metrics early on:
- How many teams are actively experimenting?
- What's the time from idea to first test? (Should be days, not months)
- How many experiments are teams running per quarter?
- What percentage of experiments lead to a second iteration?
These metrics tell you if your organization is learning. ROI comes later, once you've figured out what actually works.
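For concreteness, here's roughly how you might compute those adoption metrics from a simple experiment log; the record format and field names are assumptions for illustration, not a system any of these companies use:

```python
# Rough sketch of computing the adoption metrics above from a simple experiment
# log. The record format and field names are hypothetical.
from datetime import date
from statistics import median

experiments = [
    {"team": "procurement", "idea_date": "2024-01-05", "first_test_date": "2024-01-09", "had_second_iteration": True},
    {"team": "legal",       "idea_date": "2024-02-12", "first_test_date": "2024-03-20", "had_second_iteration": False},
    # ...one record per experiment this quarter
]

def days_between(start: str, end: str) -> int:
    return (date.fromisoformat(end) - date.fromisoformat(start)).days

teams_experimenting = len({e["team"] for e in experiments})
idea_to_first_test_days = median(days_between(e["idea_date"], e["first_test_date"]) for e in experiments)
experiments_this_quarter = len(experiments)
second_iteration_rate = sum(e["had_second_iteration"] for e in experiments) / len(experiments)

print(teams_experimenting, idea_to_first_test_days, experiments_this_quarter, f"{second_iteration_rate:.0%}")
```

A spreadsheet works just as well. What matters is that someone is actually tracking time-to-first-test and iteration rate, because those are the numbers that tell you whether the organization is learning.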
One retail company I know had a simple rule: for the first 12 months, they didn't measure ROI at all. They only measured "did the team learn something that changed their next experiment?" After 12 months, they had 30 teams actively using AI, and the ROI became obvious. But they never would have gotten there if they'd demanded financial justification for every experiment.
Where this can stumble: When you never shift to ROI. At some point, you do need to show value. The trick is to delay that conversation until you have enough data to have it honestly. Six to twelve months is usually right for enterprises; earlier than that, you're mostly guessing.
The Security Problem Nobody Wants to Talk About
Let me be direct: most enterprise security teams are blocking AI adoption because they're trying to apply old security models to new technology. They're right to be cautious. But the current approach leaves companies in a tough spot.
Here's what I've seen work:
Option 1: Build your own controlled environment
Some companies are building internal AI platforms - essentially their own ChatGPT-like interface that runs on models they control, with data that never leaves their environment. This sounds expensive, but if you're a 50,000-person company, the cost of building it is small compared to the productivity gain.
The advantage is that your security team can control exactly what data goes where. The disadvantage is that it takes 6-12 months to build and you're always playing catch-up with the latest models.
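Architecturally, the core of that platform is usually just a controlled gateway in front of a model you host. A minimal sketch, assuming a self-hosted model endpoint (the URL and payload fields below are made up):

```python
# Minimal sketch of the "internal platform" idea: a thin gateway between
# employees and a model you host, so prompts and completions never leave the
# company network. The endpoint URL and payload fields are made up.
import requests

INTERNAL_MODEL_URL = "https://llm.internal.example.com/v1/chat"  # assumed self-hosted endpoint

def ask_internal_model(prompt: str, user_id: str) -> str:
    # One choke point the security team can log, rate-limit, and audit.
    response = requests.post(
        INTERNAL_MODEL_URL,
        json={"prompt": prompt, "user": user_id},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["completion"]
```

The gateway is the easy part. The 6-12 months go into hosting the models, building the interfaces people actually want to use, and keeping up as better models ship.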
Option 2: Negotiate data protection agreements with AI vendors
Some companies are negotiating Business Associate Agreements (or equivalent) with AI vendors like Anthropic, OpenAI, and others. These contracts specify exactly how your data can be used (usually: not for training, deleted after processing, encrypted in transit, etc.).
This is faster than building your own platform. The disadvantage is that you're trusting the vendor's security. For some enterprises, that's a non-starter. For others, it's acceptable risk.
Option 3: Tier your data and your tools
The smartest approach I've seen is to stop treating all AI tools the same. Create tiers:
- Tier 1 (Public models, zero trust): Employees can use ChatGPT, Claude, etc. for work that involves no company data. This covers things like "write me a Python function to parse JSON" or "explain how OAuth works." This is maybe 40% of use cases.
- Tier 2 (Approved vendors, limited data): Employees can use approved AI tools with non-sensitive company data. Sales proposals, marketing copy, internal documentation. Requires data protection agreements.
- Tier 3 (Internal only, sensitive data): For anything involving customer PII, financial data, or trade secrets, you use your internal platform or nothing.
The key is making this simple enough that people actually follow it. If your guidance is "check with legal and security for every use case," people will just ignore the policy. If your guidance is "here's a 30-second decision tree," they'll follow it.
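Here's a hypothetical version of that 30-second decision tree, written as a tiny routing function. The tiers match the list above; the data categories are assumptions about how a company might label its data, not a standard taxonomy:

```python
# Hypothetical 30-second decision tree as code: route a task to a tier based on
# the most sensitive data it touches. Categories and tier policies are
# illustrative; a real policy would use your own data classification.
def choose_tier(data_sensitivity: str) -> str:
    if data_sensitivity == "none":            # no company data at all
        return "Tier 1: any public model (ChatGPT, Claude, etc.)"
    if data_sensitivity == "non_sensitive":   # e.g. sales proposals, marketing copy, internal docs
        return "Tier 2: approved vendors with a data protection agreement"
    # customer PII, financial data, trade secrets
    return "Tier 3: internal platform only (or don't use AI for this)"

print(choose_tier("non_sensitive"))
```

If your policy can't be reduced to something this small, it's too complicated to survive contact with a 2,000-person marketing department.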
What About the 2,000-Person Marketing Department?
Here's the problem with most AI transformation advice: it assumes everyone in your company is technical or at least technical-adjacent. They're not.
You've got thousands of people in roles like marketing, HR, legal, finance, operations. They're not going to learn Python. They're not going to read model documentation. They need AI to just work, the way Excel just works.
The companies succeeding here are doing two things:
First, they're building extremely simple interfaces. Not "here's API access to Claude." More like "here's a button in the tool you already use that says 'generate first draft.' Click it."
One insurance company integrated AI directly into their claims processing software. Adjusters don't know they're using AI. They just see better suggestions. That's the right level of abstraction for most employees.
Second, they're finding champions in each department. Not "AI experts," just curious people who like trying new tools. These champions figure out what works for their team, then teach others. This spreads knowledge way faster than top-down training.
A consumer goods company identified about 50 champions across their organization. They gave these people early access to new tools, monthly check-ins with the AI team, and a budget to experiment. Within six months, these 50 people had trained about 1,500 others, just through casual conversations and showing their work.
The Budget Conversation
Here's what's worth being honest about: enterprise AI transformation requires real investment, even though the tools themselves are cheap.
You're not spending money on AI models. You're spending money on:
- Engineering time to integrate AI into existing tools
- Security infrastructure to make it safe
- Change management to get people to adopt it
- Potentially some full-time headcount for the central team
For a 10,000-person company, budget $2-5M per year for the first two years if you're serious about this. That's not huge for an enterprise budget, but it's enough that you need executive buy-in.
The business case is pretty straightforward: if AI makes your employees even 10% more productive, that pays back quickly. The challenge is that the value shows up as "work that used to take 3 hours now takes 30 minutes" rather than as a line item on the P&L.
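Back-of-envelope, using the numbers above plus an assumed average loaded cost per employee (the $100k figure is an illustration, not data from any of these companies):

```python
# Back-of-envelope payback math for the 10,000-person example above.
# The loaded cost per employee is an assumption for illustration.
employees = 10_000
avg_loaded_cost = 100_000          # assumed fully loaded annual cost per employee, USD
productivity_gain = 0.10           # the "even 10% more productive" figure
annual_program_cost = 5_000_000    # top end of the $2-5M/year budget above

value_of_freed_capacity = employees * avg_loaded_cost * productivity_gain  # $100M/year
payback_ratio = value_of_freed_capacity / annual_program_cost              # ~20x

print(f"Freed capacity: ${value_of_freed_capacity:,.0f}/year, {payback_ratio:.0f}x the program cost")
```

Of course, that "freed capacity" is only real if the saved hours go somewhere useful, which is exactly why it's hard to show on the P&L.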
One approach I've seen work: pick one high-visibility pain point, solve it with AI, measure the time savings, then use that as your proof point for broader investment. Don't try to boil the ocean. Show value, then expand.
What This Actually Looks Like in Practice

Let me describe what a realistic 18-month transformation looks like for a large enterprise:
Months 1-3: Setup and early experiments
- Secure executive sponsorship (this is critical)
- Set up the sandbox environment with clear rules
- Identify 3-5 embedded AI engineers for your largest business units
- Run 10-15 small experiments across different departments
- Most experiments will fail; that's expected, because you're learning
Months 4-6: First wins and knowledge sharing
- 2-3 experiments show real value
- Document what worked and why
- Start a monthly "show and tell" where teams demo their projects
- Begin building your internal AI platform (if going that route)
- Revise your security policies based on what you've learned
Months 7-12: Scaling what works
- The successful experiments from the first six months get scaled to more teams
- You've now got 20-30 teams actively using AI
- Patterns emerge: certain use cases work really well, others don't
- You start seeing unsolicited adoption (teams you didn't train are using AI)
- Your governance framework gets stress-tested and improved
Months 13-18: Organizational muscle
- AI tools are integrated into core workflows
- New employees get AI training as part of onboarding
- You're measuring productivity gains and can point to concrete ROI
- The central AI team is mostly unblocking others, not doing projects
- Security team is comfortable because they understand the risks
This timeline assumes you have budget, executive support, and good people. Without those, expect things to take longer.
The Hardest Part Nobody Tells You
The hardest part of enterprise AI transformation isn't technical. It's political.
You'll have VPs who feel threatened because AI makes their team's work seem less valuable. You'll have middle managers who slow-roll adoption because they're worried about headcount. You'll have executives who want flashy demos for board meetings instead of real change.
The way through this is to have an executive sponsor who genuinely cares and has real power. Not a "Chief AI Officer" who reports to the CIO. Someone in the CEO's inner circle who can cut through the politics when needed.
Without that person, you're not doing a transformation; you're doing a pilot program that will generate some PowerPoint slides and then quietly fade away.
So Where Do You Actually Start?
If I were leading this at a large enterprise tomorrow, here's what I'd do:
Week 1: Have an honest conversation with the CEO or relevant exec. Do they actually want this? Are they willing to give it budget and air cover? Get a clear yes or no.
Week 2: Set up the sandbox with clear rules. This is your fastest path to velocity.
Week 3-4: Identify your embedded engineers. These need to be your best people, not whoever is available.
Month 2: Run 10 experiments simultaneously. Make them diverse - different departments, different use cases. Expect most to fail.
Month 3: Document everything you learned. The failures teach you more than the successes right now.
Month 4-6: Double down on what worked. Start building the organizational muscle.
The key is to start small but start real. Don't spend six months planning. Spend one month planning and five months doing.
The Real Question
The companies that figure this out will have 5-10 years of competitive advantage over the ones that don't. That's a big enough gap to reshape entire industries.
The question worth sitting with is: are we willing to be uncomfortable for 18 months to get there? Are we willing to let some experiments fail publicly? Are we willing to change how we work, not just what tools we use?
Those are harder questions than "which model should we use?" But they're the ones that actually matter.
If the honest answer is "we're not ready for that kind of change," that's useful information. It means you can save the $4 million and be honest about doing a pilot program instead of pretending you're doing a transformation.
But if the answer is yes, then you've got a real shot at being one of the companies that pulls this off.