#23 - Why 60% of AI projects will fail by 2026
Your legacy data is sabotaging your AI dreams (and here's how to fix it)
Read time: 3 minutes
Hi Data Modernisers,
Most companies rushing into AI are about to hit a brick wall made of their own unprocessed data.
Gartner just dropped a reality bomb: 60% of AI projects running without AI-ready data will be abandoned by next year. That's not a prediction; it's a warning. Most companies treating AI like a magic wand need to understand that their decades-old ERP systems and siloed databases were not built for this. Pretending they can handle AI workloads is like trying to run a Tesla on coal.
Today, we are diving into why traditional IT infrastructure is failing the AI revolution and what you actually need to do about it:
Why your current data management practices are sabotaging every AI initiative
The three foundational steps that separate AI winners from the 60% who quit
How to build AI-ready data pipelines without disrupting the business
Let's get into the details.
3 Steps To Make Your Data AI-Ready Even If Your Current Systems Are Legacy Disasters
Here is the uncomfortable truth: you cannot build AI on top of systems that were struggling before AI existed.
Most IT leaders are discovering this the hard way, trying to force AI initiatives through data pipelines that were already at breaking point.
Let me show you the three steps that actually work.
Step 1: Accept That Traditional IT Infrastructure Won't Cut It
You need to abandon the fantasy that you can clean up decades of data mess across disconnected systems and somehow make it AI-ready.
Here's why this approach fails:
"It's nearly impossible to clean up data across a sprawling estate of disconnected systems and make it useful for AI."
- Eric Helmer, CTO at Rimini Street
When you clean data in your HR system, those changes don't automatically propagate to:
Your CRM platform
Your financial applications
Your customer service systems
The result? Inconsistent data across systems, exactly what AI models hate most.
What you actually need: Dedicated AI data pipelines that collect, cleanse, and catalog enterprise information using modern methods.
"The AI revolution is forcing a modernization of the data center across all industries."
- Jason Hardy, CTO for AI at Hitachi Vantara
This isn't about upgrading existing infrastructure. It's about recognizing that AI workloads require fundamentally different approaches to data management.
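To make "collect, cleanse, and catalog" concrete, here is a minimal sketch of what one dedicated pipeline stage could look like. It's illustrative only: the file name, column names, and cleaning rules are assumptions, not a prescription for your estate.

```python
# Illustrative sketch: a dedicated collect -> cleanse -> catalog stage that
# runs alongside legacy systems. All names and rules below are hypothetical.
import pandas as pd

def collect(source_csv: str) -> pd.DataFrame:
    """Pull a raw extract from a legacy system (here, a CSV dump)."""
    return pd.read_csv(source_csv)

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic, repeatable cleaning rules before anything reaches an AI workload."""
    df = df.drop_duplicates()
    df["email"] = df["email"].str.strip().str.lower()
    df = df.dropna(subset=["customer_id"])  # drop orphan records with no key
    return df

def catalog(df: pd.DataFrame, dataset_name: str) -> dict:
    """Record lightweight metadata so downstream teams know what they are getting."""
    return {
        "dataset": dataset_name,
        "rows": len(df),
        "columns": list(df.columns),
        "null_counts": df.isna().sum().to_dict(),
    }

if __name__ == "__main__":
    raw = collect("crm_export.csv")  # hypothetical extract from one source system
    clean = cleanse(raw)
    print(catalog(clean, "crm_customers_v1"))
```

The point is that this stage lives next to your legacy systems rather than inside them, so cleaning rules are defined once, in one place, instead of being re-invented per system.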
Step 2: Use AI To Improve Your Data (Yes, Really)
The irony is beautiful: AI can help you prepare data for AI, creating a virtuous cycle of improvement.
The expert insight:
"We're seeing 'AI for data' as one of the largest applications of AI in the enterprise at the moment."
- Beatriz Sanz Sáiz, global AI sector leader at EY
What AI can do for your data (a short data-quality sketch follows this list):
Generate synthetic data to fill gaps
Analyze data distribution to identify outliers
Automatically flag values outside reasonable ranges
Enforce consistency across hundreds of systems
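Here is a minimal sketch of the "flag values outside reasonable ranges" idea, using pandas on a hypothetical order_amount column. The data, thresholds, and column name are made up for illustration.

```python
# Illustrative sketch: rule-based and statistical range checks on one column.
# Column name, values, and thresholds are hypothetical.
import pandas as pd

orders = pd.DataFrame({"order_amount": [120.0, 95.5, 110.0, 88.0, 15000.0]})

# 1. Hard business rule: anything outside an agreed "reasonable" range gets flagged.
orders["outside_business_range"] = ~orders["order_amount"].between(1, 5000)

# 2. Statistical check: the IQR rule flags values far from the bulk of the data.
q1, q3 = orders["order_amount"].quantile([0.25, 0.75])
iqr = q3 - q1
orders["statistical_outlier"] = (orders["order_amount"] < q1 - 1.5 * iqr) | (
    orders["order_amount"] > q3 + 1.5 * iqr
)

# Rows failing either check go to a review queue instead of feeding a model.
print(orders[orders["outside_business_range"] | orders["statistical_outlier"]])
```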
Real-world example (sketched in code below): When a customer record updates in one system, AI agents ensure it updates everywhere in near real-time across:
CRM platforms
Contact centers
Financial applications
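A minimal sketch of that fan-out pattern, with hypothetical stand-in clients rather than real CRM or finance APIs:

```python
# Illustrative sketch: one update event pushed to every downstream system.
# The "clients" are stand-ins, not real integrations.
from dataclasses import dataclass

@dataclass
class CustomerUpdate:
    customer_id: str
    field: str
    new_value: str

class SystemClient:
    """Stand-in for a CRM, contact-centre, or finance integration."""
    def __init__(self, name: str):
        self.name = name

    def apply(self, update: CustomerUpdate) -> None:
        print(f"[{self.name}] {update.customer_id}: {update.field} -> {update.new_value}")

DOWNSTREAM = [SystemClient("crm"), SystemClient("contact_centre"), SystemClient("finance")]

def propagate(update: CustomerUpdate) -> None:
    """Push a single source-of-truth change to every registered system."""
    for client in DOWNSTREAM:
        client.apply(update)

propagate(CustomerUpdate("C-1042", "email", "new.address@example.com"))
```

In practice the agent layer would sit behind an event stream or integration platform; the sketch only shows the shape of the idea.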
"knowledge is becoming more important than data because it helps interpret the data."
- Sáiz
Build a knowledge layer on top of your data infrastructure. This provides context and minimizes hallucinations, making your AI actually useful instead of confidently wrong.
Step 3: Transform One Project At A Time (Don't Boil The Ocean)
You don't need perfect data across your entire organization before starting your AI journey; you need a systematic approach to improvement.
The smart approach:
"Once you put the foundational principles and practices in place, you can make the transformation one project at a time."
- Jason Hardy, Hitachi Vantara
Start with these foundations:
Cybersecurity protocols
Data governance frameworks
Clear retention policies
Then tackle transformation iteratively:
For each AI project, identify the following (a simple example definition follows this list):
The specific data you need
Systems you need to interface with
Security requirements for that use case
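One lightweight way to capture those answers is a short, reviewable definition per project. Everything below (the use case, sources, and settings) is hypothetical:

```python
# Illustrative sketch: a per-project data requirements definition.
# All fields and values are hypothetical examples.
churn_model_project = {
    "use_case": "customer churn prediction",
    "data_needed": ["customer_profile", "support_tickets", "billing_history"],
    "source_systems": ["crm", "contact_centre", "finance"],
    "security_requirements": {
        "pii_fields_masked": True,
        "retention_days": 365,
        "access_roles": ["data_science", "governance"],
    },
}
```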
Hardy's golden rule: "Instead of trying to boil the ocean before you see any return, focus on your data transformation one outcome at a time."
Pro tip: Establish a governing body for consistency, but don't let governance become paralysis. The goal is to build momentum through successive wins, not to achieve perfection before you start.
That's it.
Here's what you learned today:
Traditional IT infrastructure was never built to support AI workloads at scale
AI can be part of the solution for improving your own data quality
Incremental transformation beats waiting for perfect data
The companies that will win with AI aren't necessarily the ones with the cleanest data right now; they're the ones moving fastest to build AI-ready foundations.
Start with one high-impact use case, identify the data requirements, and build the infrastructure to support that specific outcome. Then rinse and repeat.
PS: If you're enjoying this newsletter, please consider referring this edition to a friend. You'll help them avoid the 60% failure rate that's looming.
And whenever you are ready, there are 2 ways I can help you:
Free Data Readiness Assessment - Let's evaluate where your current infrastructure stands for AI implementation and identify the biggest gaps holding you back. Free Assessment
AI-Ready Data Migration Planning - Collaborate with me to design a phased approach that modernizes your data infrastructure while ensuring business continuity and laying the groundwork for AI capabilities.
That’s it for this week. If you found this helpful, leave a comment to let me know ✊
About the Author
Khurram is a former Teradata Global Data Consultant with over 15 years of experience implementing data integration solutions across the financial services, telecommunications, retail, and government sectors. He has helped dozens of organisations implement robust ETL processing. His approach emphasises pragmatic implementations that deliver business value while effectively managing risk.