Think Wrong: Why the Bold Path in AI Transformation Starts With Problems, Not Technology

The dangerous belief that "whatever is, is right", and why most AI initiatives fail before they begin

Does your AI strategy start with technology — or with the problem it's meant to solve? Which would you prefer?

The Trap of "Whatever Is, Is Right"

Henry Bessemer helped usher in the Industrial Revolution by inventing mass steel production. When asked how he bested competitors with more experience, more resources, and more expertise in established methods, the British inventor credited a single advantage:

"I had no fixed ideas derived from long-established practice to control and bias my mind, and did not suffer from the general belief that whatever is, is right."

Read that last part again: "Whatever is, is right."

That's the status quo talking. It's the predictable path, forged by synaptic connections in our brains and reinforced by cultural beliefs, biases, and orthodoxies that we rarely examine. Most people don't even notice they hold this belief. It operates invisibly, feeling like common sense, pragmatism, or simply being realistic. (It's none of these things.)

In the Think Wrong methodology, this invisible force is named for what it is: the conspiracy of biology and culture against original thought. Our brains are wired to follow well-worn neural pathways because it's efficient. Organizations reward predictable execution because it feels safe. The result is that most problem-solving efforts (no matter how ambitious they claim to be) stay firmly inside the boundaries of "whatever is."

And nowhere is this trap more visible than in AI transformation.

The AI Transformation Paradox

The conventional wisdom in 2025 says: Adopt AI. Automate processes. Implement the latest models. Move fast before competitors do.

This is a thinking trap, optimizing for technology adoption without questioning direction.

Here's the paradox most organizations won't admit: starting with technology instead of problems isn't innovation. It's following. It's the predictable path dressed up as transformation, and it's why the majority of AI initiatives fail to deliver meaningful results.

The question isn't "Which AI tools should we implement?" The question is "What problem are we actually trying to solve?"

Most AI initiatives fail not because the technology is wrong, but because no one paused to interrogate the problem itself. The tool was chosen before the question was asked — a perfect example of "whatever is, is right" applied to emerging technology.

Bessemer's competitors had every advantage that experience and established practice could provide. They also had fixed ideas controlling and biasing their minds. Bessemer had none of that, and that absence became the advantage that revolutionized an industry.

The Bold Path Is, By Definition, Abnormal

The people remembered for altering the trajectory of human history — Galileo, Einstein, the Wright Brothers, Marie Curie, Jane Goodall — had one thing in common. In their time, they were called heretics, madmen, and lunatics. Today we use different words: pioneers, innovators, leaders.

Without these individuals and teams willing to challenge "whatever is," there would be no printing press or airplane, no civil rights movement or Cubism, no penicillin or personal computer. The people at the center of such advancements are remembered precisely because they shifted the trajectory of everything that came after.

They could think wrong.

In the Think Wrong framework, "wrong" doesn't mean incorrect — it means abnormal. The bold path has to be abnormal by definition. If it were normal, it wouldn't be bold. It would be incremental, expected, and ultimately forgettable. This is why Think Wrong practices are designed not just to generate ideas, but to defend those ideas against the well-meaning "right thinkers" who would strangle them before they mature.

In AI transformation, the abnormal path looks deceptively simple: start with the problem, not the technology.

The obvious path says everyone is implementing AI, so we need an AI strategy, so let's figure out what we can automate. The bold path asks different questions entirely: What's the actual problem we're solving? What outcome matters? Does this even require AI — or something else entirely?

This feels uncomfortable because it's supposed to. The brain craves the obvious, and right now "AI transformation" feels like survival. But survival and transformation are different games, and confusing them is how organizations end up with expensive technology implementations that solve problems nobody actually has.

Why Think Wrong Matters for AI Strategy

The Think Wrong methodology identifies six practices (Be Bold, Get Out, Let Go, Make Stuff, Bet Small, and Move Fast), each designed to address a specific moment when the status quo rears up to extinguish possibility. What makes these practices powerful isn't just their ability to generate novel solutions, but their function as deflection tools against the biological and cultural forces that pull teams back into business as usual.

When applied to AI transformation, thinking wrong means resisting the gravitational pull toward technology-first strategies. It means being willing to hold the unpopular position that the problem comes first, even when everyone around you is racing to implement the latest model. It means sitting with the discomfort of slowing down when the entire industry seems to be speeding up.

The conventional wisdom says: Be bold with AI. Adopt fast. Automate everything. But this isn't how trajectories get shifted. Bessemer didn't optimize within the existing paradigm of steel production; he questioned the paradigm itself. The move now isn't to adopt AI faster. It's to question whether AI is even the right frame for the problem at hand.

The Heretic's Advantage in a Technology-Obsessed Era

The status quo has gravity. It pulls everything toward the predictable, toward "whatever is." Right now, "whatever is" says AI is inevitable, resistance is futile, implement or die. That's not strategy; it's panic dressed as progress, and it leads organizations to invest enormous resources in solutions to problems they haven't actually defined.

Escaping that gravity requires intention and a willingness to appear wrong before being proven right. Galileo appeared wrong. Einstein appeared wrong. The Wright Brothers appeared very wrong. They weren't optimizing for approval; they were optimizing for truth.

The Think Wrong approach recognizes that the discomfort of challenging assumptions is the price of admission for genuine transformation. Organizations that want AI to actually transform their operations, not just modernize their technology stack, need to be willing to think wrong about AI itself.

This doesn't mean rejecting technology. It means refusing to let technology be the starting point. It means having the discipline to fully understand the problem before reaching for solutions. It means recognizing that the bold path in AI transformation isn't about which tools you implement, but about whether you've asked the right questions in the first place.

Thinking Wrong About What Comes Next

The bold path won't feel comfortable. It's not supposed to.

Access to AI tools isn't the bottleneck. Every organization has that now. The bottleneck is the invisible pull toward obvious solutions, the reflex to reach for technology before examining what actually needs solving. Brains and cultures are wired this way. It takes intention to work against that wiring.

Somewhere outside the normal mental pathways, in the uncomfortable space where assumptions get questioned rather than confirmed, lives the solution that will genuinely transform an organization. It's not waiting in the latest model release or the competitor's tech stack. It's waiting in a different kind of question.

Bessemer had less experience than his competitors. Less expertise. Less knowledge of established methods. What he didn't have: the belief that whatever is, is right. In an era obsessed with AI adoption, that same freedom from fixed ideas might be the most valuable advantage available.

Normal paths lead to normal outcomes. In AI transformation, normal looks like starting with technology. Abnormal looks like starting with the problem.

Think wrong.

Think Wrong principles are applied at Solve Next to help organizations navigate AI transformation by starting with problems rather than technology. For those facing AI transformation challenges where conventional thinking keeps producing conventional results, a conversation might help.
