AI won’t fix your business. It won’t clean up your data, align your teams, or clarify how decisions actually get made. What it will do is expose every shortcut, inconsistency, and fragile process you’ve been operating with, often faster and more brutally than any internal audit ever could.
After 15 years building AI systems in production across industries and operating models, I’ve seen the same pattern repeat itself. When AI “fails”, it’s rarely because the models aren’t good enough. It’s because organizations aren’t ready to confront what the technology reveals about how they really work.
I was recently featured in TIME Magazine’s TIME100 AI list, and that recognition led to conversations with CEOs, CIOs, board members, investors, and operators across industries. Almost nobody was asking whether AI would matter to their business. That question is already settled.
Instead, leaders were asking tactical questions: which use cases to prioritize, which pilots to fund, which vendors to trust.
Those are reasonable questions. But after enough of these conversations, the pattern became impossible to ignore. We were spending most of our time talking about use cases, pilots, and vendors, and almost none talking about the organizational conditions required for AI to create real, compounding value.
There’s a better question leaders should be asking instead:
What bottlenecks do we need to remove so current and future AI can generate maximum value for our business?
That shift changes everything. It moves the conversation away from what AI can do and toward what’s stopping it from working. After 15 years in this space, I can tell you this with confidence:
The organizations that learn to remove those bottlenecks faster than their competitors will win. The ones that don’t will keep funding pilots, and then wonder why AI never shows up in the P&L.
In the early days of mainstream enterprise AI, roughly 2010 through 2020, building anything meaningful demanded heavy upfront investment.
The activation energy was enormous. You needed serious conviction and budget to even start. That constraint shaped everything: which projects got funded, which problems were worth solving, and who could participate. Faced with that level of investment, many leaders walked away and kept doing business as usual.
Today, we’re seeing two forces converge that fundamentally change the equation: models are dramatically more capable, and the cost and effort of building with them have collapsed.
Put together, these two forces mean you can now build more capable systems faster than ever before. Not incrementally faster. Exponentially faster.
And here’s what history tells us happens next. We don’t build the same amount with fewer people. We build exponentially more with the same people. Every time we’ve reduced the cost and effort of building software, from assembly to high-level languages, from physical servers to cloud infrastructure, teams didn’t shrink. They expanded their product surface. The three-person startup that could maintain one product now maintains four. The enterprise team that could experiment with two approaches now tries ten.
The constraint is shifting from “can we build this?” to “what should we build?” When that shift happens, the bottleneck moves into your organization: your data, your decision-making processes, and your incentives.
So what are those organizational bottlenecks? And why do they matter more than picking the right model?
The technology got easier, but the organizational work stayed hard. And that’s exactly why this moment matters.
AI is now good enough, cheap enough, and accessible enough that “can we build this?” is no longer the main constraint. The constraint is whether your organization can support it.
This is the part most teams underestimate. Lower friction doesn’t reduce work. It moves the bottleneck. And in enterprise AI, the bottleneck almost always becomes internal.
If you look around, it feels like AI is transforming everything at lightning speed. Top AI researchers are being paid like professional athletes. Infrastructure spending is historic. Demos are impressive. Developers are offloading more and more code to AI assistants.
But when you look at the bottom line of most enterprises, the picture is grimmer. Real, sustained value from AI in the P&L is still rare.
After so much time in this space, my biggest takeaway is simple:
Leaders want AI, but most organizations are not ready to do the dirty work required to get value out of it.
Many leaders believe their companies are data-driven because they’ve invested in dashboards, platforms, and analytics teams. Meanwhile, the people closest to the data know how fragile the foundation really is. Different teams follow different rules. Key definitions change by region. Critical data was never collected, or it’s buried across systems that don’t talk to each other. And even when the data exists, nobody can get at it because Joe is on vacation and he’s the only one who understands the system.
These problems were always there, but you could work around them. AI forces you to confront them if you want results. The more ambitious the initiative, the more these cracks become canyons.
Unless you are technical or have been there before, it’s hard to see why solving certain problems with AI can be so difficult, or take so long.
Let me give you a concrete example from our work. A few years ago, a major international duty-free retail chain came to us with what sounded like a straightforward question:
“We run about 25 promotions at the same time across our stores. Which ones are working? Which ones should we kill?”
They couldn’t A/B test. There were too many promotions, too many confounding variables, plus commitments with brands. So they needed causal inference: estimate the impact of each promotion on sales while controlling for everything else.
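The estimation itself is the easy part. Here is a toy sketch of the core idea (every store, promotion, and number is hypothetical, and this is a deliberately simplified stand-in for real causal inference): control for store baselines by demeaning sales within each store, then compare demeaned sales when a promotion is active versus inactive.

```python
import random

random.seed(0)

# Hypothetical toy data: 3 stores with different baselines, 2 promotions.
# Promo "A" has a true lift of +20 units per day; promo "B" does nothing.
STORES = {"S1": 100.0, "S2": 150.0, "S3": 80.0}
TRUE_LIFT = {"A": 20.0, "B": 0.0}

rows = []
for day in range(200):
    for store, base in STORES.items():
        active = {p: random.random() < 0.5 for p in TRUE_LIFT}
        sales = base + sum(lift for p, lift in TRUE_LIFT.items() if active[p])
        sales += random.gauss(0, 5)  # demand noise
        rows.append({"store": store, "sales": sales,
                     **{f"promo_{p}": on for p, on in active.items()}})

def estimate_lift(rows, promo):
    """Demean sales within each store, then compare active vs. inactive periods."""
    store_means = {}
    for s in STORES:
        vals = [r["sales"] for r in rows if r["store"] == s]
        store_means[s] = sum(vals) / len(vals)
    on = [r["sales"] - store_means[r["store"]] for r in rows if r[f"promo_{promo}"]]
    off = [r["sales"] - store_means[r["store"]] for r in rows if not r[f"promo_{promo}"]]
    return sum(on) / len(on) - sum(off) / len(off)

print(estimate_lift(rows, "A"))  # close to the true lift of 20
print(estimate_lift(rows, "B"))  # close to 0
```

In a real engagement you would also need time effects, interactions between overlapping promotions, and honest uncertainty estimates. The point is narrower: once reliable per-purchase promotion flags exist, the math is tractable. Getting those flags is the hard part.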
On paper, the data requirement sounded simple: every item sold, at every store, over time, with flags for which promotions were active at the moment of purchase.
What actually happened was months of work just to assemble a dataset anyone could trust.
Had we not had a very strong technical sponsor on the client side, the project could have been canceled by business leaders pushing for results without understanding why progress was so slow.
This wasn’t a failure of AI. The models would have worked fine if we’d had reliable data, but that is almost never the case. This was a failure of organizational infrastructure. The company had been running on duct tape and tribal knowledge for years, and it didn’t matter until they tried to do something rigorous with their data. Then it mattered a lot.
Cases like the one above might feel like failures, but discovery has real value. You can’t fix what you don’t fully understand.
To make AI work, leaders have to be willing to defibrillate their organizations. You have to actively shock the system to expose what’s broken underneath: messy processes, inconsistent decision-making, undocumented tribal knowledge, and data nobody trusts.
No matter how capable the AI models are, if you skip this step, they can only accelerate the chaos. Putting a Ferrari engine in a car with a broken transmission doesn’t make you faster. It makes you crash harder.
One of the hardest lessons for executives is that you often have to invest significant resources just to learn how bad things truly are. Projects that sound simple can take months before you even get to a usable dataset. And that’s where the real work begins.
So what does it actually take to make AI work for a business? In my experience, it comes down to three things. Get these right and AI benefits compound. Get them wrong and you’ll keep funding pilots.
This is the most obvious bottleneck, but also the most underestimated. AI doesn’t just need data. It needs data you can trust, with definitions you can defend, collected in ways you can reproduce.
The gaps show up in predictable ways: definitions that differ by team or region, critical history that was never captured, and pipelines nobody can fully reproduce.
Most companies have been running on “good enough” data for years because humans can fill in the gaps. You can look at a dashboard, notice something weird, and apply context. AI can’t do that nearly as well. It will learn from the noise, treat missing data as signal, and confidently give you the wrong answer. Or it will hallucinate.
The uncomfortable truth is that you can’t shortcut this work. You have to go back, clean the foundations, align the definitions, and often rebuild processes and pipelines. It’s slow, unglamorous, and expensive. But it’s the only way the models have anything real to learn from.
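To make “definitions you can defend” concrete, here is a minimal sketch of the kind of reconciliation check that surfaces definition drift. The two systems and all records are hypothetical: both claim to report revenue per order, but one nets out discounts and the other doesn’t.

```python
# Hypothetical records from two systems that should agree on revenue per order.
erp = [
    {"order_id": 1, "revenue": 100.0},
    {"order_id": 2, "revenue": 250.0},
    {"order_id": 3, "revenue": 80.0},
]
crm = [
    {"order_id": 1, "revenue": 100.0},
    {"order_id": 2, "revenue": 230.0},  # CRM nets out a discount; ERP does not
]

def reconcile(source_a, source_b, tol=0.01):
    """Return order IDs where the two sources disagree or one is missing a record."""
    b_by_id = {r["order_id"]: r["revenue"] for r in source_b}
    mismatches = []
    for r in source_a:
        other = b_by_id.get(r["order_id"])
        if other is None or abs(r["revenue"] - other) > tol:
            mismatches.append(r["order_id"])
    return mismatches

print(reconcile(erp, crm))  # [2, 3]: one definition mismatch, one missing record
```

A check this simple won’t fix anything by itself, but running it forces the organizational question that matters: which system’s definition of revenue wins, and who decides?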
Even with clean data, AI breaks in organizations with misaligned incentives. If local teams are rewarded for hitting their own numbers, they will resist shared definitions and shared truth. Everyone optimizes for their own scoreboard, even when it conflicts with what’s best for the company.
This shows up constantly: metrics defined differently across departments, numbers that don’t reconcile, and process knowledge that lives only in people’s heads.
AI forces these conflicts into the open because it requires agreement on ground truth. You can’t train a model to optimize revenue if three departments define revenue differently. And you can’t deploy an agent to save costs on a process where gaps are constantly filled by tribal knowledge.
Fixing this requires executive-level intervention. Someone has to own the hard conversations: which definition wins, who loses autonomy, what gets measured, and who is accountable. If leadership doesn’t clarify this, the system defaults to politics. The AI becomes a game mastered by those seeking promotions, but ultimately irrelevant to the business.
The AI models are now a commodity. Alignment is not.
The third bottleneck is the hardest to see and the hardest to fix: who decides, and how fast can they move?
AI changes the operating rhythm of a business. It surfaces insights faster, enables more experimentation, and shifts accountability in ways that make people uncomfortable. If your organization isn’t set up to act on what the AI tells you, the insights just pile up unused.
The cultural shift required is significant: faster decisions, constant experimentation, and accountability that shifts in uncomfortable ways.
Leaders often underestimate how destabilizing this is. This is not just technology adoption. It’s power, control, and identity. The people who built their careers on institutional knowledge will feel threatened. The teams that prided themselves on careful planning feel pressured to move faster than they’re comfortable with.
If leadership doesn’t own this cultural change and model it from the top, it won’t happen. Organizations have inertia. Without executive ownership, they will default back to old patterns and slow decision-making.
Let me be direct about something: no matter what shiny crop of new AI models gets released, the path to an AI-first business is long and painful. It only succeeds when the business learns to operate differently.
Removing bottlenecks is hard, deeply non-trivial, and takes time. It’s not a single project or a transformation deck you present to the board. It’s a sustained, sometimes uncomfortable effort to change how the business actually runs. And that’s exactly why it’s where the value lives.
Competitive advantage in AI doesn’t come from algorithms. Everyone has access to the same models, and the performance gap between them is shrinking. The advantage comes from what the models enable you to do differently. That requires rewiring the operating system of your company: how data flows, how decisions get made, and how incentives are set.
If you don’t redesign those pathways, the models don’t compound. They get stuck at the edges. They show up as demos, shiny PR, and pilot programs that never scale, while the core business continues operating the way it always has.
The right posture is honesty: this work is hard, and many organizations will fail at it.
But here’s the thing: the organizations that succeed at this will have an almost insurmountable advantage. Once you clear the bottlenecks, every new model that gets released makes you stronger. You’re not starting from scratch. You’re upgrading components in a system that already works. Competitors who waited for perfection will still be figuring out their data pipelines while you’re compounding gains.
AI will reward leaders who confront reality. The gap won’t be between those who “have AI” and those who don’t. It will be between those who use AI as a toy and those who use it as a force for organizational change.
So here’s my challenge: start building the foundation.
Get stuff done. Even if it doesn’t work yet, use today’s imperfect models to pressure-test your processes. Expose the data problems you’ve been ignoring. Force the hard conversations about incentives and decision rights. Build systems that create value now, even if it’s modest, so that when the next model arrives you’re not starting from zero. You’re compounding.
The hype is free. The demos are impressive, and big labs will keep releasing better and better models. But true economic value has to be earned, and it’s earned through the unglamorous work of removing friction, aligning incentives, and changing how your business operates.
The leaders who win in AI won’t be the ones with the best models. They’ll be the ones who cleared the path fastest.
The question is: are you ready?