AI Isn’t Failing. Your Company’s Approach Is
I am nowhere near the world’s leader in AI solutions. However, I have launched a few AI solutions and have seen good success. So, a while ago, when I read stats about AI deployments failing, I got confused about what was going on.
One MIT study estimates that “95% of generative AI implementations in enterprise have no measurable impact on P&L,” and you can pull up almost any business publication right now and see doom and gloom around AI deployments and AI in general.
Look, I hate to say it, but as Steve Jobs told people about the iPhone 4 during Antennagate, you are using it wrong!
So here are three ways companies are using AI wrong (and for the technical people out there, I am talking about GenAI/LLMs):
1 - They think AI is all about the data
I used to think this way, as that is how it worked with machine learning. With machine learning, whoever had the largest and cleanest data got the best results. GenAI can work that way if you are training your own models, but only a few of the largest companies in the world are doing that. Instead, most companies use one of the foundational models, such as ChatGPT, Gemini, or Claude, and do not do their own training. However, most companies still treat them as machine learning models, as if they are training them, and throw a ton of data at them, hoping to get better insights.
These models are getting better at handling these data loads with always-expanding context windows, but from my testing, a good “old-fashioned” machine learning model still outperforms LLMs when it comes to large data sets and getting predictive and prescriptive insights from them.
And here’s another thing people misunderstand: your company’s data isn’t automatically “better” than what the model already knows. These LLMs have been trained on large datasets, and if you think dumping your whole dataset in will magically change how the model behaves, it won’t. Unless you have truly unique, proprietary information, most of the value comes not from making the AI “smarter.” Instead, it comes from using the “smarts” already in the AI and allowing it to access limited sets of your data when it needs them to answer something proprietary. If you fight against the AI and what it already knows, you will lose every time.
It’s like business leaders think they can just upload a bunch of random data to the LLMs and ask any questions they want and get immediate, accurate answers, or a custom AI that is just for their company. Sure, LLMs might be okay with finding obscure data insights, but a lot of the time, they just get lost in the mounds of data, as they are not set up to “learn” that way. This is rapidly improving, but reinforcement learning is out of reach for most companies right now, so instead, use limited data sets or connect your AI with machine learning models for some data tasks.
I am sure that soon this won’t be the case, and we will be able to do reinforcement learning with proprietary data on established LLMs, but we aren’t there yet. To use AI effectively, think about what it already knows, refine and structure that with various agents, and then give it access to your data to find specific information it needs. Don’t go into it thinking you are going to make the LLM smarter with your data — you won’t. Well, unless you have access to a bunch of GPUs and cash to do your own training. But if that is you, you are not reading this.
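The “give it access to your data to find specific information it needs” approach described above is usually called retrieval-augmented generation (RAG). Here is a minimal sketch of the idea. The documents, the word-overlap scoring, and all the names are made-up stand-ins for illustration; a real system would use embeddings and a vector database, and would send the prompt to a foundational model’s API instead of printing it.

```python
# Minimal RAG sketch: hand the model only the small slice of proprietary
# data relevant to the question, instead of dumping the whole dataset
# into the context window. All data here is hypothetical.

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Score each document by word overlap with the question; return the best."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Give the LLM only the retrieved slice, not the whole dataset."""
    joined = "\n".join(context)
    return f"Use only this context to answer:\n{joined}\n\nQuestion: {question}"

docs = [
    "Q3 revenue for the Atlas product line was 4.2 million dollars.",
    "The office coffee machine is serviced every second Tuesday.",
    "Employee onboarding takes two weeks and includes security training.",
]

question = "What was Atlas revenue in Q3?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
# In production, `prompt` would go to a foundational model's chat API.
```

The point of the design: the model’s general “smarts” stay untouched, and your proprietary data only shows up in the narrow context where it is actually needed.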
2 - The AI is never good enough for them
It is funny: when making something with AI, sometimes you can get to 80% completion in minutes, and then it will take you days, weeks, or months to get it to 90%. And you could work on it forever and never get it to 100%.
You need to stop thinking about AI like a computer. Since computers were created, humans have come to expect 100% accuracy from them 100% of the time. They are binary and deterministic, whereas AI uses a binary system to replicate non-binary human intelligence. Because of this, AI operates much more like a human, making connections between things and very rarely giving the exact same answer twice. Consider it a free spirit: it is designed not to be deterministic, and unless you really lock it down, it will, by nature, be a bit random.
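That built-in randomness is not a bug; it comes from how these models pick each next word. A toy illustration of the mechanism, with made-up scores standing in for a real model’s output: the model samples from a probability distribution, and a `temperature` setting controls how much randomness is allowed. Setting it to zero is the “really lock it down” case.

```python
# Toy next-token sampler showing why LLM output varies run to run.
# The token scores are invented for illustration, not from a real model.
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    if temperature == 0:
        # "Locking it down": always take the single most likely token.
        return max(logits, key=logits.get)
    # Softmax with temperature: higher temperature flattens the
    # distribution, making unlikely tokens more likely to be picked.
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # floating-point edge case: fall back to the last token

logits = {"great": 2.0, "good": 1.5, "fine": 1.0}
rng = random.Random(42)

# At temperature 0 the answer never changes; at 1.0 it can differ each run.
print([sample_token(logits, 0, rng) for _ in range(3)])
print([sample_token(logits, 1.0, rng) for _ in range(3)])
```

Real model APIs expose the same knob (commonly also named `temperature`), which is why the same prompt rarely produces the same answer twice at default settings.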
You should start treating your AI projects similar to how you treat a new employee you are onboarding. With humans, you know an employee is never going to be 100% all the time and will make mistakes sometimes. Yet you let them do their thing while you work with them to optimize and improve as they go. When an employee makes a mistake, you don’t fire them immediately; you work with them to limit the mistakes and keep them within a margin of error everyone is happy with. We need to do the same thing when deploying AI projects. Realize that it will be a never-ending process of optimizations and improvements, testing new things, and that your AI solution will never be 100%. Find the error level you’re comfortable with and continue to refine it to reduce the errors to a place where you would be happy with an employee doing the same task.
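The “margin of error everyone is happy with” idea above can be made concrete with a tiny evaluation loop: score the AI on a labeled test set and compare its error rate to what you would tolerate from a human doing the same task. The task, labels, and threshold below are all hypothetical.

```python
# Sketch of the acceptable-margin-of-error idea: measure the AI against
# a labeled test set instead of demanding perfection. Data is made up.

def error_rate(predictions: list[str], labels: list[str]) -> float:
    wrong = sum(p != l for p, l in zip(predictions, labels))
    return wrong / len(labels)

# e.g. a human employee gets roughly 1 in 10 of these decisions wrong
ACCEPTABLE_ERROR = 0.10

ai_outputs = ["approve", "deny", "approve", "approve", "deny",
              "approve", "deny", "approve", "approve", "approve"]
truth      = ["approve", "deny", "approve", "deny", "deny",
              "approve", "deny", "approve", "approve", "approve"]

rate = error_rate(ai_outputs, truth)
print(f"error rate: {rate:.0%}")
print("within margin, keep it running" if rate <= ACCEPTABLE_ERROR
      else "keep refining")
```

The decision is never “is it perfect?” but “is it inside the margin?”, and you rerun this check after every round of optimization, the same way you would review an employee over time.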
3 - They are sabotaged by their team
I don’t think I have ever seen so much resistance to a new technology as I currently see to AI. As my friend Kevin O’Reilly told me the other day, people think AI is neither good nor accurate enough to work, yet at the same time, they believe it will take over the world and ruin humankind. Thinking both is odd, but that is a topic for another day.
That said, this resistance is real, and it is ruining AI solutions before they even have a chance to succeed. It is very similar to how people behave when they worry their job might be outsourced. Understandably, they get worried and find ways to slow or block the outsourcing. This is why many AI projects are moving incredibly slowly or are never deployed; internal team members are doing what they can to ensure this is the case.
There are only two things you can do in this situation. First, every company needs to train all its employees on how to use AI. Companies need to develop policies and procedures for AI, then monitor use and ensure every team member’s AI use is growing over time. If you are interested, hit up Samuel Melancon, and he can get your company moving down this path. The second thing you need to do is ensure your current team feels part of the AI future at your company from the start. Obviously, not many companies have native AI experts in-house, but you need to make sure you have key team members who buy into AI and want to be part of any project you are doing with outside consultants. This needs to be their time to learn and become AI experts as well, and to see it as an opportunity for growth. If you can’t find enough of these people, then you need to make sure you get the right people on the team who are excited to do this.
What now?
First, don’t think AI is not going to impact your company or your job. It will. The timeline might not be months, but in the next few years, it will change everything we understand about business and work. If you or your company are not integrating and learning AI now, you will be passed by an AI-native company that delivers similar, if not better, solutions than yours at 20% of the cost.
Not every AI project is going to work, but don’t be scared to try a few. And if you go into it knowing what AI is good at and your team buys in, you will find more success than failure.