Business | Feb 03, 2026

AI won’t fix your business. Lessons from 15 years building AI

Alan Descoins
Chief Executive Officer (CEO)

AI won’t fix your business. It won’t clean up your data, align your teams, or clarify how decisions actually get made. What it will do is expose every shortcut, inconsistency, and fragile process you’ve been operating with, often faster and more brutally than any internal audit ever could.

After 15 years building AI systems in production across industries and operating models, I’ve seen the same pattern repeat itself. When AI “fails”, it’s rarely because the models aren’t good enough. It’s because organizations aren’t ready to confront what the technology reveals about how they really work.

I was recently featured in TIME Magazine’s TIME100 AI list, and that recognition led to conversations with CEOs, CIOs, board members, investors, and operators across industries. Almost nobody was asking whether AI would matter to their business. That question is already settled.

Instead, leaders were asking questions like:

  • Where should AI live inside the organization?
  • What should we build versus buy?
  • Which use cases are “safe” to start with?

Those are reasonable questions. But after enough of these conversations, the pattern became impossible to ignore. We were spending most of our time talking about use cases, pilots, and vendors, and almost none talking about the organizational conditions required for AI to create real, compounding value.


There’s a better question leaders should be asking instead:

What bottlenecks do we need to remove so current and future AI can generate maximum value for our business?

That shift changes everything. It moves the conversation away from what AI can do and toward what’s stopping it from working. After 15 years in this space, I can tell you this with confidence:

The organizations that learn to remove those bottlenecks faster than their competitors will win. The ones that don’t will keep funding pilots, and then wonder why AI never shows up in the P&L.

Why this moment is different for enterprise AI

In the early days of mainstream enterprise AI, roughly 2010 through 2020, building anything meaningful meant:

  • Explaining to different teams why AI was even necessary
  • Assembling specialized teams with scarce expertise
  • Spending months on data pipelines before a model could be trained
  • Convincing stakeholders to invest in uncertain outcomes
  • Accepting that most problems were simply too expensive to solve

The activation energy was enormous. You needed serious conviction and budget to even start. That constraint shaped everything: which projects got funded, which problems were worth solving, and who could participate. Faced with that level of investment, many leaders walked away and kept doing business as usual.

Today, we’re seeing two forces converge that fundamentally change the equation:

  1. The models are more capable than ever. Tasks that used to require custom architectures, extensive feature engineering, and months of training can now work out of the box with a single API call. Image classification that used to require building and training your own neural networks? A call to Gemini handles most general cases, cheaply (a short sketch of what that call looks like follows this list). Natural language understanding that needed custom datasets, linguistic experts, and carefully crafted rules? Foundation models can handle it. For a huge range of problems, the capability gap between “what AI can do” and “what we need it to do” has collapsed.
  2. The cost of building is lower than ever. More capable models combined with modern agentic harnesses make systems able to take action, not just generate text. They can write code, edit files, and use a computer much as a human would. The new crop of AI-powered developer tools (Claude Code, Codex, Gemini CLI, Cursor, Antigravity, and others) is collapsing the time to build from weeks to hours. Cloud infrastructure is cheaper. The tooling ecosystem is more mature. The talent pool is broader. The time between “we should build this” and “it’s running in production” has dropped by an order of magnitude.
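To make the first point concrete, here is a minimal sketch of what “a single API call” looks like in practice. It assumes Google’s google-genai Python SDK and a current Gemini model; the image file, model name, and category list are placeholders, not a recommendation for any specific setup.

```python
from google import genai
from google.genai import types

# Assumes: pip install google-genai, plus an API key you supply yourself.
client = genai.Client(api_key="YOUR_API_KEY")

with open("shelf_photo.jpg", "rb") as f:  # placeholder image
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash",  # placeholder model name; use whatever is current
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Classify the product in this photo as one of: perfume, liquor, "
        "confectionery, electronics, other. Reply with the category only.",
    ],
)
print(response.text)
```

A few years ago this meant collecting labeled images and training a custom classifier; today the undifferentiated part is one request.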

Put together, these two forces mean you can now build more capable systems faster than ever before. Not incrementally faster. Exponentially faster.

And here’s what history tells us happens next. We don’t build the same amount with fewer people. We build exponentially more with the same people. Every time we’ve reduced the cost and effort of building software, from assembly to high-level languages, from physical servers to cloud infrastructure, teams didn’t shrink. They expanded their product surface. The three-person startup that could maintain one product now maintains four. The enterprise team that could experiment with two approaches now tries ten.

The constraint is shifting from “can we build this?” to “what should we build?” When that shift happens, the bottleneck moves into your organization: your data, your decision-making processes, and your incentives.

So what are those organizational bottlenecks? And why do they matter more than picking the right model?

When building gets easy, organizations become the constraint

The technology got easier, but the organizational work stayed hard. And that’s exactly why this moment matters.

AI is now good enough, cheap enough, and accessible enough that “can we build this?” is no longer the main constraint. The constraint is whether your organization can support it: your data, your decision-making processes, and your incentives.

This is the part most teams underestimate. Lower friction doesn’t reduce work. It moves the bottleneck. And in enterprise AI, the bottleneck almost always becomes internal.

The real reason most AI pilots fail

If you look around, it feels like AI is transforming everything at lightning speed. Top AI researchers are being paid like professional athletes. Infrastructure spending is historic. Demos are impressive. Developers are offloading more and more code to AI assistants.

But when you look at the bottom line of most enterprises, the picture is grimmer. Real, sustained value from AI in the P&L is still rare.


After so much time in this space, my biggest takeaway is simple:

Leaders want AI, but most organizations are not ready to do the dirty work required to get value out of it.

Many leaders believe their companies are data-driven because they’ve invested in dashboards, platforms, and analytics teams. Meanwhile, the people closest to the data know how fragile the foundation really is. Different teams follow different rules. Key definitions change by region. Critical data was never collected, or it’s buried across systems that don’t talk to each other. And even if it’s there, we can’t access it because Joe went on vacation and he’s the only one who understands it.

These problems were always there, but you could work around them. AI forces you to confront them if you want results. The more ambitious the initiative, the more these cracks become canyons.

What this looks like in practice

Unless you are technical or have been through it before, it’s not easy to see why solving certain problems with AI can be so hard, or take so long.

Let me give you a concrete example from our work. A few years ago, a major international duty-free retail chain came to us with what sounded like a straightforward question:

“We run about 25 promotions at the same time across our stores. Which ones are working? Which ones should we kill?”

They couldn’t A/B test. There were too many promotions, too many confounding variables, plus commitments with brands. So they needed causal inference: estimate the impact of each promotion on sales while controlling for everything else.

On paper, the data requirement sounded simple: every item sold, at every store, over time, with flags for which promotions were active at the moment of purchase.
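To give a feel for the modeling side, here is a deliberately simplified sketch of the kind of estimate this implies: regress sales on promotion flags while controlling for store and week effects. The file and column names are hypothetical, and the real engagement required far more careful causal work than a single regression.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset aggregated to store-week level:
# net_sales, promo_1 ... promo_25 (1 if the promotion was active), store, week.
df = pd.read_csv("store_week_sales.csv")

promo_cols = [c for c in df.columns if c.startswith("promo_")]

# Two-way fixed effects: store and week dummies absorb baseline differences,
# so each promo coefficient approximates that promotion's incremental lift.
formula = "net_sales ~ " + " + ".join(promo_cols) + " + C(store) + C(week)"
model = smf.ols(formula, data=df).fit()

print(model.params[promo_cols].sort_values())  # rough ranking of promotions
```

Even a sketch like this only works if the sales figures and promotion flags are trustworthy, which is exactly where the project got stuck.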

Here’s what actually happened:

  • It took a month and a half of iteration just to understand how to calculate the final selling price of an item. The pricing logic was far more complex than anyone had documented.
  • Four months in, the dataset we were working with still contained products with negative sales. Not returns. Negative sales. Nobody could explain why.
  • When we asked clarifying questions, the client didn’t know who to ask. The people who built the systems had left. The institutional knowledge was gone.
  • Ultimately, the client did not know the state of their own data.

Had we not had a very strong technical sponsor on the client side, the project could have been canceled by business leaders pushing for results without understanding why progress was so slow.

This wasn’t a failure of AI. The models would have worked fine if we’d had reliable data, but that is almost never the case. This was a failure of organizational infrastructure. The company had been running on duct tape and tribal knowledge for years, and it didn’t matter until they tried to do something rigorous with their data. Then it mattered a lot.

The three bottlenecks that actually determine AI success

Cases like the one above might feel like failures, but discovery has real value. You can’t fix what you don’t fully understand.

To make AI work, leaders have to be willing to defibrillate their organizations. You have to actively shock the system to expose what’s broken underneath: messy processes, inconsistent decision-making, undocumented tribal knowledge, and data nobody trusts.

No matter how capable the AI models are, if you skip this step, they can only accelerate the chaos. Putting a Ferrari engine in a car with a broken transmission doesn’t make you faster. It makes you crash harder.

One of the hardest lessons for executives is that you often have to invest significant resources just to learn how bad things truly are. Projects that sound simple can take months before you even get to a usable dataset. And that’s where the real work begins.

So what does it actually take to make AI work for a business? In my experience, it comes down to three things. Get these right and AI benefits compound. Get them wrong and you’ll keep funding pilots.

1. Data and process foundations

This is the most obvious bottleneck, but also the most underestimated. AI doesn’t just need data. It needs data you can trust, with definitions you can defend, collected in ways you can reproduce.

The gaps show up in predictable ways:

  • Different teams use different definitions for the same metric
  • Key fields were never collected, or they’re collected inconsistently
  • The data exists, but it’s scattered across systems that don’t integrate
  • The person who knows how the data works left three years ago
  • There is no data catalog

Most companies have been running on “good enough” data for years because humans can fill in the gaps. You can look at a dashboard, notice something weird, and apply context. AI can’t do that nearly as well. It will learn from the noise, treat missing data as signal, and confidently give you the wrong answer. Or it will hallucinate.

The uncomfortable truth is that you can’t shortcut this work. You have to go back, clean the foundations, align the definitions, and often rebuild processes and pipelines. It’s slow, unglamorous, and expensive. But it’s the only way the models have anything real to learn from.
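As a small illustration of where that work starts, here is a sketch of the kind of automated sanity checks that surface these gaps before any model is trained. The file and column names are hypothetical; the point is that the checks are cheap to write and uncomfortable to read.

```python
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical export from the sales system

checks = {
    # Negative sales that are not flagged as returns (the duty-free example above)
    "negative_sales_not_returns": int(((df["net_sales"] < 0) & (df["is_return"] == 0)).sum()),
    # Key fields that were never collected, or collected inconsistently
    "missing_promotion_flag": int(df["promotion_id"].isna().sum()),
    # Same SKU described differently across systems or regions
    "skus_with_conflicting_descriptions": int(
        df.groupby("sku")["description"].nunique().gt(1).sum()
    ),
}

for name, count in checks.items():
    print(f"{name}: {count}")
```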

2. Incentives and alignment

Even with clean data, AI breaks in organizations with misaligned incentives. If local teams are rewarded for hitting their own numbers, they will resist shared definitions and shared truth. Everyone optimizes for their own scoreboard, even when it conflicts with what’s best for the company.

This shows up constantly:

  • Marketing defines “conversion” differently than Sales does
  • Regional teams use their own metrics and refuse to standardize
  • Product and Finance can’t agree on how to measure customer value
  • Teams avoid collaborating or challenging the fundamentals because they know it will expose problems they don’t want surfaced

AI forces these conflicts into the open because it requires agreement on ground truth. You can’t train a model to optimize revenue if three departments define revenue differently. And you can’t deploy an agent to save costs on a process where gaps are constantly filled by tribal knowledge.

Fixing this requires executive-level intervention. Someone has to own the hard conversations: which definition wins, who loses autonomy, what gets measured, and who is accountable. If leadership doesn’t clarify this, the system defaults to politics. The AI becomes a game mastered by those seeking promotions, but ultimately irrelevant to the business.

The AI models are now a commodity. Alignment is not.

3. Decision rights and culture

The third bottleneck is the hardest to see and the hardest to fix: who decides, and how fast can they move?

AI changes the operating rhythm of a business. It surfaces insights faster, enables more experimentation, and shifts accountability in ways that make people uncomfortable. If your organization isn’t set up to act on what the AI tells you, the insights just pile up unused.

The cultural shift required is significant:

  • Decisions that used to take weeks need to happen in days
  • Accountability shifts from “we followed the process” to “we got the outcome”
  • Experimentation becomes the norm, which means tolerating more failures
  • Stakeholders need to understand where AI can fall short

Leaders often underestimate how destabilizing this is. This is not just technology adoption. It’s about power, control, and identity. The people who built their careers on institutional knowledge will feel threatened. The teams that pride themselves on careful planning will feel pressured to move faster than they’re comfortable with.

If leadership doesn’t own this cultural change and model it from the top, it won’t happen. Organizations have inertia. Without executive ownership, they will default back to old patterns and slow decision-making.

The bitter lesson of enterprise AI: it’s not a technical problem

Let me be direct about something: no matter what shiny crop of new AI models gets released, the path to an AI-first business is long and painful. It only succeeds when the business learns to operate differently.

Removing bottlenecks is hard, deeply non-trivial, and takes time. It’s not a single project or a transformation deck you present to the board. It’s a sustained, sometimes uncomfortable effort to change how the business actually runs. And that’s exactly why it’s where the value lives.

Competitive advantage in AI doesn’t come from algorithms. Everyone has access to the same models, and the performance gap between them is shrinking. The advantage comes from what the models enable you to do differently. That requires rewiring the operating system of your company:

  • How decisions get made: faster, more distributed, more data-driven
  • How teams collaborate: across silos, with shared definitions and shared accountability
  • How value gets measured: with clarity on what matters and who owns it
  • How experiments get approved: with lower friction and higher tolerance for failure
  • How quickly the business can adapt: to new information, new models, and new market conditions

If you don’t redesign those pathways, the models don’t compound. They get stuck at the edges. They show up as demos, shiny PR, and pilot programs that never scale, while the core business continues operating the way it always has.

The right posture is to be honest about the fact that this work is hard, and many organizations will fail at it.

But here’s the thing: the organizations that succeed at this will have an almost insurmountable advantage. Once you clear the bottlenecks, every new model that gets released makes you stronger. You’re not starting from scratch. You’re upgrading components in a system that already works. Competitors who waited for perfection will still be figuring out their data pipelines while you’re compounding gains.

The challenge for 2026

AI will reward leaders who confront reality. The gap won’t be between those who “have AI” and those who don’t. It will be between those who use AI as a toy and those who use it as a force for organizational change.

So here’s my challenge: start building the foundation.

Get stuff done. Even if it doesn’t work yet, use today’s imperfect models to pressure-test your processes. Expose the data problems you’ve been ignoring. Force the hard conversations about incentives and decision rights. Build systems that create value now, even if it’s modest, so that when the next model arrives you’re not starting from zero. You’re compounding.

The hype is free. The demos are impressive, and big labs will keep releasing better and better models. But true economic value has to be earned, and it’s earned through the unglamorous work of removing friction, aligning incentives, and changing how your business operates.

The leaders who win in AI won’t be the ones with the best models. They’ll be the ones who cleared the path fastest.

The question is: are you ready?
