Why We’re So Bad at Solving Problems

(And Why It Matters More Than Ever)

This is the first post in the series “Problem First: AI-Assisted Problem Solving for Organizations That Can’t Afford to Get It Wrong.”

Clients always come to me knowing what they want. Very often, however, they don’t do enough due diligence to understand what they really need.

A few years ago, a large Midwest-based paint and coatings manufacturer asked me to help them crowdsource a redesign of an industrial pump. The pump worked beautifully in the lab, but clogged constantly when used outdoors. The engineers were convinced they had a mechanical design problem. They wanted to put a challenge out to a crowd of external solvers to get a better pump.

I asked them a simple question: were the indoor and outdoor testing conditions identical? They weren’t. The lab testing took place in summer, with indoor temperatures, even in the air-conditioned room, often in the mid-seventies. The outdoor work was in autumn, with outdoor temperatures rarely exceeding sixty degrees. Could the observed clogging have something to do with temperature rather than the pump design?

It could, and it did. The paint the engineers were loading into the pump was becoming viscous with even a small drop in temperature, choking the device. The engineers fixed the problem by adjusting the paint formulation. No redesign. No crowdsourcing. No new pump.

That story would be more satisfying if it were unusual. It isn’t. At around the same time, another client wanted to crowdsource an additive that would prevent a food product from losing sweetness during processing. I spent considerable effort persuading the client to leave the door open for solutions that went beyond additives, for example, those considering changes to the preparation process itself. “No, we can’t change the process; it’ll be too expensive!” insisted the client. To their great surprise, a solver proposed a minor, inexpensive process modification that preserved the sweetness perfectly. No additive required.

In both cases, smart, experienced professionals were ready to invest significant time and money solving the wrong problem. Not because they were incompetent, but because they never paused to ask a simple question: was the problem they had identified the actual problem they faced?

The Tylenol Reflex

I sometimes describe this pattern as the Tylenol reflex: reaching for a painkiller the moment you feel a headache, without asking first what’s causing the pain. If the headache is from a hangover, Tylenol is fine. If it’s a mild cold, it’ll help. But if it’s a symptom of something more serious—a chronic condition, a vascular problem, or a brain tumor—the Tylenol doesn’t just fail to help. It actively harms you by masking the signal your body is sending.

Organizations do this all the time. A product isn’t selling, so they redesign the packaging. Employee turnover spikes, so they raise salaries. A process is slow, so they automate it. Each of these responses might be correct. But none of them can be proven to be correct until someone asks: What is actually causing this? Is it the disease, or is it merely a symptom?

The pattern is so common that it barely registers as a mistake. It feels like decisiveness. It looks like action. Organizations reward people who move fast, who “are biased toward action,” who don’t get bogged down in analysis. The person who says “Wait! Are we sure we understand the problem?” is rarely the one who gets promoted.

And so, the Tylenol reflex becomes part of organizational culture: treat the symptom, move on, hope for the best. When the problem resurfaces—as it inevitably does—treat it again. The cost of this cycle is enormous, but because it’s distributed across dozens of small decisions rather than one catastrophic failure, it’s almost invisible.

The Discipline Nobody Teaches

Here is the uncomfortable truth: problem definition is a discipline, and almost nobody is trained in it.

Business schools teach strategy, finance, marketing, and operations. Engineering programs teach design, analysis, and optimization. True, medical schools teach diagnosis—and that’s perhaps the only profession that takes the problem-definition step seriously, for obvious reasons. But in most professional contexts, the emphasis falls overwhelmingly on generating solutions, not on interrogating problems.

Think about how incentives work in a typical organization. Performance reviews reward deliverables: projects completed, features shipped, campaigns launched. Nobody gets a bonus for spending three weeks redefining the problem statement. The entire apparatus of organizational life—meetings, deadlines, KPIs, quarterly goals—is built around producing outputs, not around questioning inputs.

This isn’t a new problem. Organizations have always struggled to slow down enough to define problems before solving them. But the struggle has gotten worse, not better, as the toolkit of available solutions has expanded. Consider the parade of management methodologies that have swept through organizations over the past few decades: Six Sigma, agile, design thinking, lean startup, and now artificial intelligence. Each one arrived with a powerful promise and a dazzling set of tools. And each one created a subtle gravitational pull toward the same mistake: starting with the tool and looking for problems to apply it to.

I call this the technology-centric trap, and I’ve written about it in the context of AI adoption. But the trap is far older than AI. It operates every time a new methodology becomes fashionable. The organization acquires the tool, forms a task force, identifies processes to transform—and skips the step of asking whether those processes are the right ones to focus on, or whether the problem they’re meant to address has been correctly understood in the first place.

The result is a peculiar form of organizational waste: the efficient pursuit of the wrong objective. You can run a flawless Six Sigma process on a problem that didn’t need Six Sigma. You can build a beautifully agile development pipeline for a product nobody wants. You can deploy AI to automate a workflow that should have been eliminated entirely. In each case, the execution is excellent. The diagnosis is not. In fact, it’s absent.

Puzzles and Messes

It’s worth pausing to ask why problem definition is so hard—aside from the fact that nobody teaches it and nobody rewards it.

Part of the answer lies in a distinction that most organizations never make: the difference between what I’ll call puzzles and messes.

A puzzle is a problem with a definable solution space. It may be difficult—fiendishly so—but its boundaries are knowable. An engineering challenge, a logistics optimization, a regulatory compliance question: these are puzzles. You can specify what a correct answer looks like. You can break the problem into components. Expertise, analysis, and enough computational power will eventually get you there.

A mess is different. A mess involves interdependencies that shift as you act on them. Stakeholders with conflicting interests. Emergent behaviors that can’t be predicted from the components. Feedback loops that amplify or dampen in ways that change over time. Market positioning, organizational culture change, community development, geopolitical strategy—these are messes. There is no single “correct” answer. The problem itself changes shape depending on who’s looking at it and what has been tried before.

This distinction matters because the failure modes are completely different. With a puzzle, the typical mistake is misidentifying the puzzle—like my paint company clients, who thought they had a pump design puzzle when they actually had a paint formulation puzzle. The solution space was knowable; they were just searching in the wrong one.

With a mess, the typical mistake is more fundamental: treating the mess as if it were a puzzle. Organizations crave the clarity of puzzles. They want crisp problem statements, bounded solution spaces, and measurable outcomes. So, when they encounter a mess—a complex, shifting, multi-stakeholder tangle—they instinctively reframe it as something simpler. They pick one dimension of the mess, define it as the problem, and go to work on it. The rest of the mess, unaddressed, continues to fester.

Most serious organizational challenges are messes or at least have a messy component. And most organizations approach them with puzzle-solving tools. This mismatch is one of the deepest reasons why problem-solving goes wrong: not because people lack intelligence or effort, but because they apply the wrong kind of thinking to the challenge in front of them.

Structured problem solving doesn’t eliminate the difference between puzzles and messes. But it forces you to confront it. A rigorous process asks: What kind of problem is this? What do we know, what do we assume, and what are we ignoring? Are we simplifying because simplification is warranted, or because complexity makes us uncomfortable? These questions don’t guarantee the right answer. But they dramatically reduce the odds of confidently pursuing the wrong one.

The Compounding Cost of Solving the Wrong Problem

All of this has always been true. So why does it matter more now?

Because the cost of acting on a misdiagnosed problem has fundamentally changed.

In the past, the friction of execution provided a kind of accidental safety net. Solutions took time to implement. They required budgets to be approved, teams to be assembled, and vendors to be contracted. During that lag, there were natural checkpoints—moments when someone might say, “Wait, are we sure this is the right approach?” The slowness of execution, frustrating as it was, created space for course correction.

AI has removed much of that friction. Today, organizations can generate strategic analyses, produce detailed implementation plans, build prototypes, and deploy solutions at a speed that would have been unimaginable a few years ago. That speed is genuinely transformative when pointed at the right problem. But when pointed at the wrong one, it means you arrive at a dead end faster, having spent resources and organizational energy on something that was never going to work.

The democratization of AI tools makes this especially urgent. It’s no longer just large corporations with dedicated innovation teams that can act quickly on a misdiagnosis. Small businesses, nonprofits, and solo entrepreneurs now have access to powerful AI capabilities. They can move fast. The question is whether they’re moving in the right direction.

And here is the irony: the very tool that accelerates execution—AI—is often adopted with the same Tylenol reflex that plagues every other organizational decision. Organizations ask, “Which of our problems can AI solve?” when they should be asking, “Do we actually understand our problems?” Technology changes. The underlying mistake doesn’t.

Problem Solving as Competitive Advantage

There is a positive way to frame all of this. If most organizations are bad at defining problems—and they are—then the ability to define problems correctly becomes a genuine competitive advantage. Not a theoretical one. A practical, measurable, durable edge over competitors who continue to solve the wrong problems efficiently.

My first (“golden”) rule of problem solving, drawn from years of innovation consulting, is simple: know what you want, understand what you need. What you want is the surface-level request: a better pump, a food additive, a faster process. What you need is the underlying outcome: paint that flows at any temperature, sweetness that survives processing, a workflow that serves its purpose. The gap between want and need is where most problem-solving failures live.

Closing that gap requires a process—not just good instincts, not just smart people, but a structured, repeatable method for moving from symptoms to root causes, from assumptions to evidence, from a vague sense that something is wrong to a precise understanding of what and why.

What’s Missing

But here’s the problem with saying “problem solving needs a process.” Everyone nods. Nobody disagrees. And almost nobody can tell you what that process actually looks like.

We talk endlessly about the importance of structured problem solving, yet pay remarkably little attention to what this process involves in practice: what steps it includes, what the input and output of each step should be, where organizations typically cut corners, and what the consequences of those shortcuts are. The lack of a clear, explicit map of the problem-solving process is itself part of the problem. You can’t follow a discipline that hasn’t been defined.

That’s what the next post in this series will address: a step-by-step anatomy of what rigorous problem solving looks like in practice—from the moment a problem is first felt to the moment solutions are ready to be evaluated. Not a theory. A map.

Next in the series: “The Problem-Solving Manifesto: What the Problem-Solving Process Actually Looks Like.”


About Eugene Ivanov

Eugene Ivanov is a business and technical writer interested in innovation and technology. He focuses on factors defining human creativity and socioeconomic conditions affecting corporate innovation.
This entry was posted in AI, Innovation. Bookmark the permalink.
