The Problem-Solving Manifesto

(What the Problem-Solving Process Actually Looks Like)

This is the second post in the series “Problem First: AI-Assisted Problem Solving for Organizations That Can’t Afford to Get It Wrong.”

In the first post in this series, I argued that organizations are consistently bad at solving problems—not for lack of talent or effort, but because they skip the most important step: understanding what problem they actually face. The Tylenol reflex, the technology-centric trap, the confusion of puzzles and messes—these are all symptoms of the same underlying failure. Organizations treat problem solving as an instinct when it should be treated as a discipline.

The response to that post confirmed something I’ve suspected for a long time: people overwhelmingly agree that problem definition matters. Almost nobody disagrees with the principle. And yet, when you ask a simple follow-up question—so what does a rigorous problem-solving process actually look like?—the room goes quiet.

That silence is the problem. We have a broad consensus that structured problem solving is important, but almost no shared understanding of what it involves. Everyone knows you should “define the problem before solving it.” Few can tell you what that means in practice: what stages the process includes, what each stage produces, what happens when you skip one, and how the stages connect to each other.

This post is my attempt to fill that gap. What follows is not a theory. It’s a map, a five-stage anatomy of what rigorous problem solving looks like, from the moment a problem is first felt to the moment solutions are ready to be acted upon. Each stage has a defined purpose, a clear output, and a predictable failure mode when skipped. The stages build on each other. Skip one, and everything downstream is compromised.

I have no doubt that other versions of this process exist, and I would welcome the chance to see and discuss them. Some may include additional stages; others may organize the work differently. But I am confident that any viable problem-solving process must include, at a minimum, the five stages outlined below. Omit any one of them, and you haven’t streamlined the process. You’ve left it incomplete.

Stage 1: Problem Intake and Framing

The purpose

Capture the problem as it is actually experienced—not as it has been prematurely diagnosed.

What this stage involves

Every problem-solving process begins with someone saying, in effect, “Something is wrong and I want to fix it.” That initial statement is precious. It’s also almost never the right problem definition.

The purpose of the first stage is to create space for the full picture to emerge. This means inviting a description of the issue that includes not just the headline complaint but the context surrounding it: where the problem manifests, when it started, who is affected, what has already been tried, why the prior attempts to solve the problem have been unsuccessful, and what’s at stake if it isn’t resolved.

Crucially, this stage must welcome messy, incomplete, and uncertain inputs. If people feel they need a polished problem statement before they can begin, they will—consciously or not—skip the ambiguity that often contains the most important clues.

The output

A structured problem statement that reflects three things: what is happening, where it manifests, and why it matters. This statement becomes the anchor for everything that follows. It is not yet a diagnosis. It is a clear, honest articulation of the situation as currently understood.

What goes wrong when you skip it

This is the stage most organizations believe they perform. They actually don’t. What typically passes for problem framing is a meeting where someone in authority declares what the problem is, and everyone else nods. The declaration feels like framing. It isn’t. It’s a premature conclusion dressed up as a starting point.

Consider how many corporate initiatives begin with a solution embedded in the problem statement: “We need to improve our digital presence” (solution: digital). “We need to reduce headcount in the operations division” (solution: layoffs). “We need an AI strategy” (solution: AI). In each case, the “problem” has already been defined in terms of a preferred answer. The framing stage, if it happens at all, becomes a formality, a rubber stamp on a decision that was made before the process began.

The cost is that the actual problem, the one hiding behind the executive’s confident declaration, never gets examined.

Stage 2: Clarification, Assumptions, and Boundaries

The purpose

Reduce ambiguity by distinguishing what is known from what is assumed—and surface the constraints that will shape any viable solution.

What this stage involves

Once a problem has been framed, it needs to be pressure-tested. This means asking targeted follow-up questions to resolve major unknowns, identify constraints—organizational, legal, cultural, financial, technical—and, most importantly, draw a sharp line between facts and assumptions.

The distinction between facts and assumptions is the single most underrated element of problem solving. Organizations routinely treat assumptions as facts, especially when those assumptions are long-held, widely shared, or endorsed by senior leadership. The longer an assumption has gone unquestioned, the more solid it appears—and the more dangerous it becomes.

The output

A confirmed working frame, which includes: the key assumptions underlying the problem statement (explicitly labeled as assumptions, not facts), the known constraints that any solution must respect, and the areas of genuine uncertainty that remain. This output protects every stage that follows. If the assumptions are wrong, it’s better to discover it at this stage than after solutions have been generated and resources committed.
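To make the shape of this output concrete, here is a minimal sketch of a Stage 2 working frame as a data structure. The field names are my own hypothetical choices—the post specifies only the elements: assumptions explicitly labeled as assumptions, known constraints, and remaining uncertainties. The key discipline the structure enforces is that facts and assumptions are never stored in the same bucket.

```python
from dataclasses import dataclass, field

@dataclass
class WorkingFrame:
    facts: list[str] = field(default_factory=list)        # verified observations
    assumptions: list[str] = field(default_factory=list)  # beliefs, explicitly labeled as such
    constraints: list[str] = field(default_factory=list)  # limits any solution must respect
    open_questions: list[str] = field(default_factory=list)  # genuine remaining uncertainty

# The paint-pump case from the first post, restated in this structure:
frame = WorkingFrame(
    facts=["Pump clogs outdoors", "Pump works in the lab"],
    assumptions=["Clogging is caused by the pump design"],
    constraints=["Solution must work at autumn field temperatures"],
    open_questions=["Were indoor and outdoor testing conditions equivalent?"],
)
print(len(frame.assumptions))  # 1
```

Challenging the single item listed under `assumptions` is exactly what revealed the real problem (paint viscosity at low temperature) before resources were committed.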

What goes wrong when you skip it

The paint pump. The food additive. Both stories from the first post in this series are textbook examples of what happens when Stage 2 is skipped. In the paint pump case, the engineers held an unexamined assumption: the problem was mechanical. Nobody asked whether the indoor and outdoor testing conditions were truly equivalent. Nobody distinguished between what was known (the pump clogs outdoors) and what was assumed (the clogging is caused by the pump design). The assumption shaped the entire problem definition—and nearly led to a costly crowdsourcing campaign aimed at solving a problem that didn’t exist.

The food company’s assumption was equally invisible and equally costly: the preparation process was untouchable. “We can’t change the process; it’ll be too expensive!” This wasn’t a fact. It was a belief—one that, once surfaced and challenged, turned out to be wrong. The actual solution was a minor, inexpensive process change.

The pattern is always the same: an assumption that nobody identifies as an assumption silently narrows the solution space, often eliminating the best answer before the search even begins.

Stage 3: Root Cause Analysis

The purpose

Move from symptoms to plausible underlying causes.

What this stage involves

This is the analytical center of gravity of the entire process. With a well-framed problem and tested assumptions in hand, the task now is to ask: why is this happening? Not just the proximate cause, but the structural, process-level, human, and strategic factors that allow the problem to persist.

Rigorous root cause analysis does several things that distinguish it from casual diagnosis. First, it generates multiple hypotheses, not just one. The goal is not to converge prematurely on a single explanation but to map the plausible causal landscape. This is what distinguishes root cause analysis from simpler diagnostic methods like the popular 5 Whys technique. The 5 Whys drives toward a single explanation as quickly as possible—useful for straightforward, linear failures, but dangerously reductive when the problem has multiple contributing causes operating at different levels. Root cause analysis resists this premature convergence.

Second, root cause analysis assigns confidence levels: some root causes will be well-supported by evidence; others will be tentative, requiring further investigation. Third, it explicitly resists the temptation to jump to solutions.

This last point deserves emphasis. The gravitational pull toward solutioning is strongest at precisely this stage, because the root causes themselves often suggest obvious fixes. But “obvious” and “correct” are not synonyms. A root cause analysis that collapses into solution generation has failed at its primary task.

The output

A root cause map, which includes: primary root causes, contributing factors, and any open questions that further investigation might resolve. This map is what makes the difference between solutions that address symptoms and solutions that address disease.

What goes wrong when you skip it

When organizations skip or rush root cause analysis, they get solutions that treat symptoms. And symptoms, once treated, come back.

The pattern is familiar to anyone who has watched organizations cycle through repeated “fixes” for the same persistent problem. Employee engagement is low, so the company launches a wellness program. Engagement stays low. They add flexible hours. Still low. They redesign the office. Still low. Each intervention addresses a plausible surface cause. None reaches the root—which might be a toxic management culture, a misalignment between the company’s stated values and its actual practices, or a structural problem with how work is organized. Without root cause analysis, each new initiative is another round of Tylenol: temporarily soothing, fundamentally useless.

IBM’s recent research on AI agent deployments reveals the same pattern at the technology level. Many AI implementations stall after the pilot phase, not because the technology fails, but because organizations try to force-fit advanced tools onto workflows whose underlying problems were never diagnosed. The technology works; the process it was applied to was broken from the start.

Stage 4: Solution Generation

The purpose

Generate diverse, actionable, and non-obvious options that are directly mapped to the root causes identified in the previous stage.

What this stage involves

If the first three stages have been done well, solution generation becomes a fundamentally different exercise than what most organizations are used to. Instead of brainstorming in a vacuum—“what could we do about this?”—the question becomes sharply focused: “given these specific root causes, these constraints, and these assumptions, what interventions would address the actual drivers of the problem?”

Good solution generation has three characteristics. First, the solutions are mapped to root causes. Every proposed solution should be traceable back to a specific cause it addresses. Solutions that can’t be linked to a diagnosed root cause are guesses, however sophisticated they may appear. Second, the solutions are varied in ambition. A useful solution set includes incremental options (low risk, quick implementation), moderate options (meaningful change with manageable disruption), and bold options (transformative but demanding). This range is important because organizations need to choose based on their appetite for risk and available resources. Third, the solutions are honest about trade-offs. Every solution has costs, risks, and second-order effects. Presenting options without trade-offs isn’t optimism; it’s malpractice.

The output

A portfolio of solution directions, each specifying what root cause it addresses, why it might work, and what trade-offs and risks it carries.

What goes wrong when you do it badly

Solution generation is, paradoxically, the stage organizations are most comfortable with—and the one where the most established techniques exist, from brainstorming and design sprints to crowdsourcing (and other open innovation approaches). The problem is rarely a lack of methods. It’s that these methods are deployed without the foundation that the first three stages provide, which means they generate solutions untethered from a properly diagnosed problem.

Two failure modes dominate at this stage. The first is what I call best-practice dumping: offering generic industry solutions that aren’t connected to the specific problem’s root causes. “Companies in your industry typically do X” is not a solution to your problem; it’s a solution to someone else’s. The second failure mode is single-answer bias: converging on one recommendation before alternatives have been genuinely explored. This often happens when the person generating solutions has a favorite methodology, a pre-existing relationship with a vendor, or simply a strong intuition. Intuition is valuable. But a process that produces only one option hasn’t explored the solution space; it’s merely confirmed a prior preference.

Stage 5: From Analysis to Action

The purpose

Translate the problem-solving work into committed next steps—so that good analysis doesn’t die in a slide deck.

What this stage involves

This is the stage that separates organizations that think well from organizations that act well on their thinking. The previous four stages can produce a superb diagnosis and a compelling set of solutions. None of that matters if the work stops at the report.

Stage 5 is where the analysis meets organizational reality. It asks: which solutions should be pursued, in what sequence, and by whom? It requires honest conversation about priorities, resources, timelines, and accountability. Specifically, it involves three activities. First, stress-testing the preferred solutions: probing assumptions, anticipating implementation barriers, and identifying what could go wrong. Second, sequencing and prioritizing: determining which actions to take first based on impact, feasibility, and dependencies. Third, assigning ownership: ensuring that every committed action has a named person responsible for it, a timeline, a clear definition of what success looks like, and the resources—budget and people—allocated to carry it out. An initiative launched without dedicated resources isn’t a commitment; it’s a wish.

The output

An action plan naming owners, allocating resources, defining timelines, and establishing success criteria. Not a menu of possibilities, but a set of commitments.

What goes wrong when you skip it

This is perhaps the most quietly devastating failure in problem solving: the analysis-to-action gap. Organizations invest real effort in understanding a problem, generate thoughtful solutions—and then nothing happens. The findings sit in a document that gets circulated, praised, and ignored. Six months later, someone asks why the problem hasn’t been addressed, and the cycle starts again from scratch.

The analysis-to-action gap is not a failure of will. It’s a failure of process. When a problem-solving effort ends with “here are some options to consider,” it draws the process boundary in the wrong place. The most failure-prone moment—the transition from analysis to action—is left outside the structured process, handled ad hoc, with no discipline applied to it.

Stage 5 exists to bring that transition inside the process itself: to ensure that the same rigor applied to diagnosing the problem and generating solutions extends all the way through to deciding, resourcing, and executing.

The Chain, Not the Links

It’s tempting to treat these five stages as a checklist—five boxes to tick on the way to a solution. That would miss the point. The power of the process lies not in the individual stages but in their connection. Each stage produces something that the next stage depends on. Skip Stage 2, and Stage 3 will operate on unexamined assumptions. Rush Stage 3, and Stage 4 will generate solutions to the wrong causes. Omit Stage 5, and the entire effort evaporates.
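The dependency between stages can be sketched as a simple pipeline. The stage functions below are hypothetical stand-ins—each returns a skeleton of the output the post describes—but the structural point is real: every stage takes the previous stage’s output as its required input, so removing any one function breaks the chain rather than shortening it.

```python
def frame_problem(raw_complaint):
    # Stage 1: capture the problem as experienced, not as diagnosed.
    return {"statement": raw_complaint}

def clarify_frame(statement):
    # Stage 2: separate facts from assumptions; surface constraints.
    return {**statement, "assumptions": [], "constraints": []}

def map_root_causes(working_frame):
    # Stage 3: move from symptoms to plausible underlying causes.
    return {**working_frame, "root_causes": []}

def generate_solutions(root_cause_map):
    # Stage 4: options traceable to the causes identified above.
    return {**root_cause_map, "solutions": []}

def plan_action(portfolio):
    # Stage 5: owners, resources, timelines, success criteria.
    return {**portfolio, "owners": [], "timelines": []}

STAGES = [frame_problem, clarify_frame, map_root_causes,
          generate_solutions, plan_action]

result = "Something is wrong and I want to fix it"
for stage in STAGES:
    result = stage(result)  # each output becomes the next stage's input
```

Skipping a stage in this sketch doesn’t just lose that stage’s fields—it hands the next stage an input missing the structure it depends on, which is the whole argument of this section in miniature.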

The chain matters more than the links.

And here is what makes this more than an academic exercise: the organizations that need this process most—the ones operating with thin margins, limited staff, and no room for wasted effort—are the ones least likely to have it. Small and mid-sized enterprises. Nonprofits. Mission-driven organizations working under constant resource pressure.

These organizations can’t afford to solve the wrong problem twice. They can’t absorb the cost of the Tylenol reflex. They need a process that is rigorous without being cumbersome, structured without being bureaucratic, and accessible without requiring a dedicated innovation department.

That’s what the next post in this series will address: why SMEs and nonprofits face a unique problem-solving deficit—and why the current moment, with AI tools rapidly becoming available, makes closing that deficit both more urgent and more possible than ever before.

Next in the series: “The Organizations That Need Problem Solving Most Are the Ones Doing It Least.”


Why We’re So Bad at Solving Problems

(And Why It Matters More Than Ever)

This is the first post in the series “Problem First: AI-Assisted Problem Solving for Organizations That Can’t Afford to Get It Wrong.”

Clients always come to me knowing what they want. Very often, however, they don’t do enough due diligence to understand what they really need.

A few years ago, a large Midwest-based paint and coatings manufacturer asked me to help them crowdsource a redesign of an industrial pump. The pump worked beautifully in the lab, but clogged constantly when used outdoors. The engineers were convinced they had a mechanical design problem. They wanted to put a challenge out to a crowd of external solvers to get a better pump.

I asked them a simple question: were the indoor and outdoor testing conditions identical? They weren’t. The lab testing took place in summer, with indoor temperatures, even in the air-conditioned room, often in the mid-seventies. The outdoor work was in autumn, with outdoor temperatures rarely exceeding sixty degrees. Could the observed clogging have something to do with temperature rather than the pump design?

It could, and it did. The paint the engineers were loading into the pump was becoming viscous with even a small drop in temperature, choking the device. The engineers fixed the problem by adjusting the paint formulation. No redesign. No crowdsourcing. No new pump.

That story would be more satisfying if it were unusual. It isn’t. At around the same time, another client wanted to crowdsource an additive that would prevent a food product from losing sweetness during processing. I spent considerable effort persuading the client to leave the door open for solutions that went beyond additives, for example, those considering changes to the preparation process itself. “No, we can’t change the process; it’ll be too expensive!” insisted the client. To their great surprise, a solver proposed a minor, inexpensive process modification that preserved the sweetness perfectly. No additive required.

In both cases, smart, experienced professionals were ready to invest significant time and money solving the wrong problem. Not because they were incompetent, but because they never paused to ask whether the problem they had identified was the actual problem they faced.

The Tylenol Reflex

I sometimes describe this pattern as the Tylenol reflex: reaching for a painkiller the moment you feel a headache, without asking first what’s causing the pain. If the headache is from a hangover, Tylenol is fine. If it’s a mild cold, it’ll help. But if it’s a symptom of something more serious—a chronic condition, a vascular problem, or a brain tumor—the Tylenol doesn’t just fail to help. It actively harms you by masking the signal your body is sending.

Organizations do this all the time. A product isn’t selling, so they redesign the packaging. Employee turnover spikes, so they raise salaries. A process is slow, so they automate it. Each of these responses might be correct. But none of them can be proven to be correct until someone asks: What is actually causing this? Is it the disease, or is it merely a symptom?

The pattern is so common that it barely registers as a mistake. It feels like decisiveness. It looks like action. Organizations reward people who move fast, who “are biased toward action,” who don’t get bogged down in analysis. The person who says “Wait! Are we sure we understand the problem?” is rarely the one who gets promoted.

And so, the Tylenol reflex becomes part of organizational culture: treat the symptom, move on, hope for the best. When the problem resurfaces—as it inevitably does—treat it again. The cost of this cycle is enormous, but because it’s distributed across dozens of small decisions rather than one catastrophic failure, it’s almost invisible.

The Discipline Nobody Teaches

Here is the uncomfortable truth: problem definition is a discipline, and almost nobody is trained in it.

Business schools teach strategy, finance, marketing, and operations. Engineering programs teach design, analysis, and optimization. True, medical schools teach diagnosis—and that’s perhaps the only profession that takes the problem-definition step seriously, for obvious reasons. But in most professional contexts, the emphasis falls overwhelmingly on generating solutions, not on interrogating problems.

Think about how incentives work in a typical organization. Performance reviews reward deliverables: projects completed, features shipped, campaigns launched. Nobody gets a bonus for spending three weeks redefining the problem statement. The entire apparatus of organizational life—meetings, deadlines, KPIs, quarterly goals—is built around producing outputs, not around questioning inputs.

This isn’t a new problem. Organizations have always struggled to slow down enough to define problems before solving them. But the struggle has gotten worse, not better, as the toolkit of available solutions has expanded. Consider the parade of management methodologies that have swept through organizations over the past few decades: Six Sigma, agile, design thinking, lean startup, and now artificial intelligence. Each one arrived with a powerful promise and a dazzling set of tools. And each one created a subtle gravitational pull toward the same mistake: starting with the tool and looking for problems to apply it to.

I call this the technology-centric trap, and I’ve written about it in the context of AI adoption. But the trap is far older than AI. It operates every time a new methodology becomes fashionable. The organization acquires the tool, forms a task force, identifies processes to transform—and skips the step of asking whether those processes are the right ones to focus on, or whether the problem they’re meant to address has been correctly understood in the first place.

The result is a peculiar form of organizational waste: the efficient pursuit of the wrong objective. You can run a flawless Six Sigma process on a problem that didn’t need Six Sigma. You can build a beautifully agile development pipeline for a product nobody wants. You can deploy AI to automate a workflow that should have been eliminated entirely. In each case, the execution is excellent. The diagnosis is not. In fact, it’s absent.

Puzzles and Messes

It’s worth pausing to ask why problem definition is so hard—aside from the fact that nobody teaches it and nobody rewards it.

Part of the answer lies in a distinction that most organizations never make: the difference between what I’ll call puzzles and messes.

A puzzle is a problem with a definable solution space. It may be difficult—fiendishly so—but its boundaries are knowable. An engineering challenge, a logistics optimization, a regulatory compliance question: these are puzzles. You can specify what a correct answer looks like. You can break the problem into components. Expertise, analysis, and enough computational power will eventually get you there.

A mess is different. A mess involves interdependencies that shift as you act on them. Stakeholders with conflicting interests. Emergent behaviors that can’t be predicted from the components. Feedback loops that amplify or dampen in ways that change over time. Market positioning, organizational culture change, community development, geopolitical strategy—these are messes. There is no single “correct” answer. The problem itself changes shape depending on who’s looking at it and what has been tried before.

This distinction matters because the failure modes are completely different. With a puzzle, the typical mistake is misidentifying the puzzle—like my paint company clients, who thought they had a pump design puzzle when they actually had a paint formulation puzzle. The solution space was knowable; they were just searching in the wrong one.

With a mess, the typical mistake is more fundamental: treating the mess as if it were a puzzle. Organizations crave the clarity of puzzles. They want crisp problem statements, bounded solution spaces, and measurable outcomes. So, when they encounter a mess—a complex, shifting, multi-stakeholder tangle—they instinctively reframe it as something simpler. They pick one dimension of the mess, define it as the problem, and go to work on it. The rest of the mess, unaddressed, continues to fester.

Most serious organizational challenges are messes or at least have a messy component. And most organizations approach them with puzzle-solving tools. This mismatch is one of the deepest reasons why problem solving goes wrong: not because people lack intelligence or effort, but because they apply the wrong kind of thinking to the challenge in front of them.

Structured problem solving doesn’t eliminate the difference between puzzles and messes. But it forces you to confront it. A rigorous process asks: What kind of problem is this? What do we know, what do we assume, and what are we ignoring? Are we simplifying because simplification is warranted, or because complexity makes us uncomfortable? These questions don’t guarantee the right answer. But they dramatically reduce the odds of confidently pursuing the wrong one.

The Compounding Cost of Solving the Wrong Problem

All of this has always been true. So why does it matter more now?

Because the cost of acting on a misdiagnosed problem has fundamentally changed.

In the past, the friction of execution provided a kind of accidental safety net. Solutions took time to implement. They required budgets to be approved, teams to be assembled, and vendors to be contracted. During that lag, there were natural checkpoints—moments when someone might say, “Wait, are we sure this is the right approach?” The slowness of execution, frustrating as it was, created space for course correction.

AI has removed much of that friction. Today, organizations can generate strategic analyses, produce detailed implementation plans, build prototypes, and deploy solutions at a speed that would have been unimaginable a few years ago. That speed is genuinely transformative when pointed at the right problem. But when pointed at the wrong one, it means you arrive at a dead end faster, having spent resources and organizational energy on something that was never going to work.

The democratization of AI tools makes this especially urgent. It’s no longer just large corporations with dedicated innovation teams that can act quickly on a misdiagnosis. Small businesses, nonprofits, and solo entrepreneurs now have access to powerful AI capabilities. They can move fast. The question is whether they’re moving in the right direction.

And here is the irony: the very tool that accelerates execution—AI—is often adopted with the same Tylenol reflex that plagues every other organizational decision. Organizations ask, “Which of our problems can AI solve?” when they should be asking, “Do we actually understand our problems?” Technology changes. The underlying mistake doesn’t.

Problem Solving as Competitive Advantage

There is a positive way to frame all of this. If most organizations are bad at defining problems—and they are—then the ability to define problems correctly becomes a genuine competitive advantage. Not a theoretical one. A practical, measurable, durable edge over competitors who continue to solve the wrong problems efficiently.

My first (“golden”) rule of problem solving, drawn from years of innovation consulting, is simple: know what you want, understand what you need. What you want is the surface-level request: a better pump, a food additive, a faster process. What you need is the underlying outcome: paint that flows at any temperature, sweetness that survives processing, a workflow that serves its purpose. The gap between want and need is where most problem-solving failures live.

Closing that gap requires a process—not just good instincts, not just smart people, but a structured, repeatable method for moving from symptoms to root causes, from assumptions to evidence, from a vague sense that something is wrong to a precise understanding of what and why.

What’s Missing

But here’s the problem with saying “problem solving needs a process.” Everyone nods. Nobody disagrees. And almost nobody can tell you what that process actually looks like.

We talk endlessly about the importance of structured problem solving, yet pay remarkably little attention to what this process involves in practice: what steps it includes, what the input and output of each step should be, where organizations typically cut corners, and what the consequences of those shortcuts are. The lack of a clear, explicit map of the problem-solving process is itself part of the problem. You can’t follow a discipline that hasn’t been defined.

That’s what the next post in this series will address: a step-by-step anatomy of what rigorous problem solving looks like in practice—from the moment a problem is first felt to the moment solutions are ready to be evaluated. Not a theory. A map.

Next in the series: “The Problem-Solving Manifesto: What the Problem-Solving Process Actually Looks Like.”


Don’t Bring Me Chickens or Eggs — Build Me a Farm

Innovation managers love to hate the line “Don’t bring me problems, bring me solutions.”

They’ll lecture you about root cause analysis. They’ll quote Einstein: “If I had only one hour to save the world, I would spend fifty-five minutes defining the problem, and only five minutes finding the solution.” They’ll insist that problem definition must come before solution generation.

I agree with them — mostly. You can’t cure a disease unless you diagnose its real cause. My own experience says that 80% of failed problem-solving efforts fail because the problem wasn’t properly defined. Only 20% fail because of poor execution or the wrong team.

But here’s where I part ways with the “problem-first” zealots: I refuse to replace one orthodoxy with another.

Taking sides in the “problem vs. solution” debate is like arguing whether the chicken or the egg came first. It’s the wrong question entirely.

What we actually need is a sustained problem-solving process — not a debate about which comes first.

The Real Answer: Build a System

With such a process in place, the question of what’s more important becomes irrelevant. Instead of endless philosophical arguments, you create a rhythm:

First, you constantly hunt for problems — both the ones you already know and the ones just emerging. You define them clearly, specifically, in actionable terms.

Then comes solution generation. Brainstorming. Co-creation with customers. Internal crowdsourcing. External expert networks. Whatever tool fits your context.

You select the best solutions. You implement them. You learn from what worked and what didn’t.

And here’s the crucial part: when one problem gets solved, it doesn’t create a vacuum. The next problem is already waiting — or it emerges from implementing your new solution. The cycle continues.

This isn’t about choosing eggs or chickens. It’s about building a farm that produces both, continuously.

Why a Portfolio Approach Wins

Think of it as maintaining a portfolio of problems-to-be-solved, constantly refreshed and prioritized.

This approach extracts the best from everyone on your team. Some people are gifted at spotting trends and sensing trouble before it arrives. Others excel at finding elegant fixes to messy situations. Most corporate cultures force people to choose one lane or the other.

But with a constant flow of problems and solutions moving through your system, everyone finds their sweet spot. The problem-spotters stay engaged because there’s always demand for what they see. The solution-builders stay motivated because there’s always something to fix.

More importantly, neither side gets to claim moral superiority. The system needs both. The system rewards both.

No more “bring me solutions” versus “define the problem first” turf wars. No more artificial sequencing. Just a continuous engine that converts challenges into progress.

As for managers still stuck on the old slogans, here’s my advice. Try this line instead: “Bring me problems, then solutions, then problems again…”

Or if anyone can propose a shorter version of the same, I’m all ears.

Posted in Innovation, Portfolio Management

What If Failing Fast Is Just Failing Wrong?

As the line often attributed to Lewis Carroll goes, “If you don’t know where you are going, any road will get you there.”

I think of this wisdom every time I hear the gospelers of the “fail-fast-fail-often” creed. I suspect that their easy acceptance of failure — and rush to celebrate it — often stems from an inability to define success.

Here’s the thing: if you don’t know what success looks like, every attempt registers as a failure. (Or worse, as politicians demonstrate daily, when you don’t know what you’re doing, every attempt can be spun as a success. But that’s another rant.)

Start with Strategy

As Andy Binns and Andreas Brandstetter argue in a recent book, innovation starts with a clearly articulated goal — a North Star that lays out the firm’s strategic ambitions and guides its actions.

With the North Star approach, success isn’t measured by the number of tries but by the steps that bring you closer to the established goal. As Andy likes to say: it’s not about how often you fail but about how much you learn — and those two things aren’t the same. Many people and firms fail repeatedly simply because they don’t learn from previous failures. Nothing to celebrate there, in my humble opinion.

The Science of Learning from Mistakes

A 2019 paper in Nature Communications studied the role of difficulty in learning. The findings? Learning proceeds fastest when training accuracy is about 85% — that is, when the error rate is around 15%.

In other words, to learn effectively, you should be right five or six times for every time you’re wrong.

So much for failing often.
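The back-of-the-envelope arithmetic behind that ratio is easy to check. Here is a minimal sketch (illustrative only, not code from the paper):

```python
# Back-of-the-envelope check of the "85% rule":
# learning is said to be fastest at roughly 85% training accuracy.
optimal_accuracy = 0.85
error_rate = 1 - optimal_accuracy               # ~0.15
successes_per_failure = optimal_accuracy / error_rate
print(round(successes_per_failure, 1))          # prints 5.7
```

So the sweet spot is a bit under six correct attempts per mistake — a far cry from “fail often.”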

Software Is Not Everything

We need to remember that many contemporary “rules” of innovation come from Agile software development. Sure, when designing software, you don’t have time or money for extensive customer research on every feature. You run an A/B test instead, and boom — you know what users prefer.

In this case, yes, progress can be measured by the number of tested combinations. The more, the better. And the faster you reject inferior options, the better, too. You “fail” faster? Good for you.

But not all innovation works like software development.

Take drug development. The ultimate proof that a candidate drug works (is a success, in other words) comes only in a Phase III clinical trial, which costs about $1 billion to run. With failure rates of Phase III clinical trials exceeding 50%, is there any reason to celebrate a billion-dollar failure?

Or consider creative writing. Writers can’t share early drafts with future readers. They write from the beginning to the end, publish, and then — only then — find out whether they’ve written a Pulitzer contender or warehouse filler.

Lessons from the Lab

Experimental science doesn’t measure success by failures either.

A scientist starts by formulating a hypothesis — their vision of a problem. They design an experiment to test it. If confirmed (always the preferred outcome), they formulate a new, advanced hypothesis based on the new knowledge. The process repeats.

If the hypothesis proves incorrect, they return to the drawing board and try a better one.

Failures do happen. But in science, failure is a mistake in experimental design or a screwup in implementation. It’s an embarrassment you would want to hide from your boss and your colleagues, not something to celebrate.

A good scientist develops better hypotheses, designs experiments that bring 100% clarity, and makes few mistakes when running them. Good scientists celebrate successes, not failures.

Making Innovation Repeatable

Innovation managers can learn from the science playbook. Place hypothesis-driven experimentation at the center of your innovation process.

But before experimentation, you need a few things:

An innovation strategy. Innovation processes. Metrics. Training. Incentives.

This is what makes the innovation process predictable and repeatable — at least more so than winning the lottery.

Posted in Innovation

Don’t Blame the Black Box: Why We Avoid AI Explanations

There’s a Russian proverb that cuts straight to the heart of human nature: Having an ugly face, don’t blame the mirror (На зеркало неча пенять, коли рожа крива).

We like to blame LLMs for their lack of transparency. Calls for Explainable AI grow louder every day. Politicians demand it, regulators require it, and businesses claim it’s essential for winning consumer trust. And yet, when an AI tool provides a transparent explanation of its reasoning, we tend to ignore it — especially if it makes us uncomfortable or, worse, threatens to undermine our financial gains.

The Loan Officer Experiment: Seeking Predictions, Avoiding Truth

New research from Harvard Business School highlights this unsettling truth in two experiments.

The first experiment placed participants in the role of loan officers at a private U.S. lender. Their task was straightforward: allocate a real, interest-free $10,000 loan between two unemployed borrowers. An AI system had classified one borrower as low risk and the other as high risk. Participants could access the AI’s risk predictions and also choose whether to view an explanation of how the model reached its assessment.

The results were striking. Roughly 80% of participants eagerly accepted the risk scores: they wanted the AI’s predictions to help them make profitable decisions. But only about 45% chose to view the explanations. The gap widened dramatically when participants’ financial incentives were aligned with the lender’s interests (they earned more money if the loans were repaid). These lender-aligned participants were even more likely to seek out predictions, yet about 10 percentage points more likely than neutrally compensated participants to skip the explanations — particularly when told that those explanations might involve considerations of race and gender.

The pattern was clear: when financial incentives conflict with fairness concerns, people don’t just make questionable decisions; they strategically avoid information that would force them to confront the ethical dimensions of those choices.

Critically, this wasn’t about disliking extra information in general. When race and gender information was removed from explanations and replaced with arbitrary technical details, the gap in explanation avoidance between different incentive conditions almost vanished. People weren’t shunning explanations as such; they were avoiding what the explanations might reveal about discrimination and their own profit-maximizing behavior.

The Risk Experiment: Failing to See What Helps

The second experiment removed moral conflicts entirely to focus on pure decision quality. Here, participants evaluated a loan application that an AI had labeled “high risk” because of a two-year employment gap in the borrower’s work history. The researchers first asked participants how much they would be willing to pay for an explanation that would reveal whether the employment gap was indeed the primary driver of the AI’s high-risk classification.

Then came the crucial twist. Participants received free private information: the employment gap resulted from the borrower pursuing a full-time professional certificate, a benign reason that shouldn’t increase credit risk (unlike, say, a job termination). This private information should have made the AI explanation significantly more valuable: If participants knew that the AI’s high-risk label stemmed from the employment gap, and they also knew the gap resulted from pursuing education rather than being fired, they could integrate both pieces of information to reach a more accurate risk assessment.

Logic suggests that after receiving this private information, participants should value the AI explanation more highly, not less. But that’s not what happened. When participants were asked a second time about their willingness to pay (that is, after receiving the private information about the certificate), their valuations actually dropped by 26%. People systematically failed to recognize that the explanation would help them integrate their own knowledge with the AI’s output to make a better decision.

Only when researchers explicitly walked participants through the logic, spelling out exactly how the private information and AI explanation could be combined, did the valuations increase. This revealed a novel behavioral bias: people don’t naturally see when explanations would be most useful to them, even when there’s no moral conflict involved.

The Black Box We Refuse to Open

We often complain that LLMs are black boxes and criticize the AI labs that create them. The metaphor has become ubiquitous in debates about artificial intelligence: mysterious algorithms making consequential decisions while we’re left to wonder what’s happening inside.

But this research reveals an uncomfortable irony. When an AI algorithm gives us an opportunity to open the lid of that black box—to peer inside and understand its reasoning—we hesitate. We look away. Sometimes it’s because we’re lazy and don’t want to make the cognitive effort. More often, it’s because we don’t want to see what’s there.

In high-stakes decisions spanning credit, hiring, pricing, healthcare, and safety, people may eagerly consume AI predictions while quietly avoiding the explanations that would expose uncomfortable trade-offs or discriminatory patterns. That avoidance can skew outcomes, undermine fairness, and create hidden risks for every organization. Meanwhile, even well-intentioned professionals may systematically under-invest in explanations that would improve their forecasting by helping them combine their domain expertise with AI insights.

Building transparent AI systems is necessary but not sufficient. The real challenge isn’t engineering better explanations or making black boxes more transparent; it’s ensuring that people actually use the transparency that’s already available. Organizations must design decision-making environments and incentive structures that encourage opening the lid, even when what’s inside might be uncomfortable.

Because the black box isn’t the problem. The problem is our unwillingness to look inside.

Posted in AI

The AI Paradox: Why SMEs Might Be Losing Ground Just When They Thought They’d Caught Up

Three months ago, I declared AI the great equalizer for small and medium enterprises. Today, I’m not so sure. In fact, I’m worried we might be celebrating prematurely — and that the very technology promising to level the playing field could actually widen the gap between SMEs and their larger competitors.

The September Dream: AI as the Great Democratizer

Back in September, I published “Unlocking SME Innovation: Why AI-Based Problem-Solving is the Great Equalizer,” where I celebrated what seemed like a generational opportunity for SMEs. My argument was straightforward and optimistic: advances in AI, particularly large language models, were dramatically reducing the cost of domain expertise. Tools like ChatGPT, Claude, and Gemini were giving everyone access to knowledge that just a few years earlier had been exclusive to large, resource-rich organizations.

The promise was intoxicating. SMEs could now engage in sophisticated scenario planning, competitive analysis, and innovation forecasting — capabilities previously reserved for corporations with dedicated strategy teams. Where SMEs once waited weeks for external consultants or struggled in isolation, AI provided immediate domain expertise, alternative approaches, and consequence analysis before committing resources.

I believed — and still want to believe — that agility can become more valuable than resources, and creative problem-solving can trump bureaucratic processes. But recently, I’ve encountered a paradox that fundamentally challenges this optimistic vision.

The Knowledge Dichotomy: When More Becomes Less

To understand the paradox, we need to distinguish between two types of knowledge: explicit and tacit.

Explicit knowledge is information that can be easily codified, documented, and transferred. It’s the data in your reports, the insights in your dashboards, the processes in your manuals, and the analyses in your presentations. This is precisely what AI excels at generating. LLMs can analyze market trends, produce competitive intelligence, create strategic frameworks, and synthesize information from vast datasets — all at unprecedented speed and minimal cost.

Tacit knowledge, by contrast, is deeply embedded in experience, intuition, and context. It’s the expert judgment honed over years of practice, the creative problem-solving that comes from pattern recognition across multiple situations, the ability to read a room and build relationships, and the organizational culture that shapes how decisions get made.

Here’s where AI turns the tables: by making explicit knowledge abundant, cheap, and universally accessible, AI simultaneously commoditizes it. If your AI tools can generate sophisticated market analysis, so can your competitors’. If you can produce detailed competitive intelligence reports, so can everyone else in your industry. Explicit knowledge, once a source of competitive advantage, becomes a common baseline rather than a differentiator.

And as explicit knowledge loses its strategic value, tacit knowledge becomes the critical differentiator. The paradox is complete: AI democratizes explicit knowledge while elevating the importance of the very thing that can’t be democratized — human experience, judgment, and intuition embedded within organizations.

The SME Threat: Winning the Battle, Losing the War

This paradox poses a particularly acute threat to SMEs, and it’s one I didn’t fully appreciate in September.

Yes, SMEs can now generate the same volume of explicit knowledge as their larger competitors. They can produce equally sophisticated analyses, reports, and strategic frameworks. “We have access to the same knowledge as you guys!” they might justifiably claim.

But here’s the problem: explicit knowledge is only half the equation—and increasingly, it’s the least important half.

Large organizations possess something SMEs often lack: a critical mass of accumulated tacit knowledge. They have teams of experienced professionals who’ve navigated multiple market cycles, managed countless customer relationships, and learned through trial and error what works and what doesn’t. They have established decision-making processes refined over decades, institutional memory that prevents repeated mistakes, networks of expertise that span functions and geographies, and organizational cultures that know how to translate insights into execution.

This tacit knowledge infrastructure is what turns data into decisions, and decisions into results. It’s the interpretive layer that determines which AI-generated insights matter and which don’t, the judgment that knows when to act boldly and when to proceed cautiously, and the execution capability that transforms analysis into competitive action.

So, here’s the cruel irony: by democratizing explicit knowledge, AI may widen the gap between SMEs and larger players. SMEs gain access to knowledge abundance but lack the tacit knowledge infrastructure to leverage it effectively. They’re drowning in insights but starving for wisdom.

Fighting Back: Building Tacit Knowledge at Scale

Should SMEs surrender to this paradox? Absolutely not. But they need to be strategic about how they compete in an AI-augmented world.

First, SMEs must recognize that their competitive advantage won’t come from AI-generated knowledge itself — it will come from how they apply that knowledge through their unique tacit knowledge capabilities. This requires intentional investment in building organizational wisdom, not just accessing information.

Second, SMEs should focus on what they can do better than large organizations: developing deep, contextual understanding of their specific customers and markets. Large companies have breadth; SMEs can have depth. Know your customers not just through data, but through relationships, repeated interactions, and intuitive understanding of their unstated needs.

Third, create tight-knit, high-trust teams where tacit knowledge flows naturally. In smaller organizations, this is easier to achieve than in large bureaucracies. Use this structural advantage to build learning cultures where experience is shared, mistakes are discussed openly, and collective judgment improves continuously.

Fourth, implement deliberate knowledge transfer mechanisms — mentoring programs, case study discussions, post-project reviews — that capture and disseminate tacit knowledge across your organization. Don’t let experience remain siloed in individual heads.

Finally, use AI strategically to augment your tacit knowledge, not replace it. Let AI handle the explicit knowledge generation: data analysis, report creation, and pattern identification. This frees your people to focus on interpretation, judgment, and creative application — the tacit knowledge work where you can still differentiate.

The Window Is Closing

The adoption window for AI is compressing rapidly. Following historical patterns, we likely have only 3-4 years until peak adoption in 2028-2029. SMEs that spend these precious years simply celebrating access to AI-generated explicit knowledge will find themselves competitively disadvantaged despite being technologically enabled.

The winners will be those who recognize the paradox and act on it now: embracing AI for what it does best while urgently building the tacit knowledge capabilities that AI cannot replicate.

The great equalizer might not be so equal after all. But for SMEs willing to play a different game — one focused on wisdom rather than just information — the opportunity remains extraordinary.

I’m grateful to Daniel Martinez Villegas, whose recent presentation at the Berkeley Open Innovation Seminar drew my attention to the explicit vs. tacit knowledge dichotomy.

Posted in AI, Innovation

Putting the Cart Before the Horse: What We’re Getting Wrong About AI

The debate about artificial intelligence has become exhaustingly predictable.

On one side, we have doomsayers who celebrate every misstep—a misdrawn map of Europe, a miscounted number of r’s in “blueberry”—as proof that AI is fundamentally flawed. The word “hallucination” has been weaponized to dismiss technology that, despite its imperfections, has made extraordinary strides in reliability. On the other side, we have enthusiasts, armed with an ever-expanding toolkit of specialized models and applications, who rush to integrate AI into every conceivable business process.

Both camps, I would argue, are missing the point.

The skeptics’ position barely warrants discussion. Yes, AI makes mistakes. So do humans—with alarming regularity. The relevant question isn’t whether AI is perfect, but whether it’s useful. And on that measure, the evidence is overwhelming. Major language models have dramatically reduced their error rates, and their capabilities continue to expand at a pace that would have seemed impossible just years ago. Dismissing this technology because it occasionally stumbles is like rejecting automobiles because they can’t navigate every dirt path that a horse can.

But here’s where it gets interesting: even among those who embrace AI’s potential, most are approaching its implementation backwards. I call this the technology-centric trap. The thinking goes something like this: “We have these amazing AI tools available. Which of our existing business processes can we automate with them?” It’s a natural question, especially given the dizzying array of AI applications flooding the market, each promising to revolutionize some aspect of operations.

The problem is that this approach assumes our current business processes are fundamentally sound—that they just need a technological upgrade to run faster and cheaper. But what if the processes themselves are the problem? What if they’re outdated, inefficient, or built on assumptions that no longer hold in today’s environment? Bolting AI onto broken workflows doesn’t fix them; it just automates dysfunction at machine speed.

The correct sequence is elegantly simple, though harder to execute: identify the problem first, then find the solution. Not the other way around.

This isn’t theoretical musing. My experience with crowdsourcing taught me this lesson clearly. Successful crowdsourcing doesn’t start with assembling a crowd and asking what they can solve. It starts with identifying a specific problem, tracing it to its root cause, and defining it with precision. Only then do you present it to potential solvers. Skip those preliminary steps, and you’ll get solutions to the wrong problems—or no workable solutions at all.

The same principle applies to AI integration. Before asking which AI tools you should deploy, ask: What processes are genuinely holding us back? Where are the bottlenecks that constrain growth? Which workflows were designed for a different era and have simply persisted out of habit? These questions require honest, sometimes uncomfortable introspection about how your organization operates versus how it should operate.

Only after answering these questions does it make sense to survey the AI landscape. If appropriate tools exist, deploy them. If they don’t, consider building them or adapting what’s available. But the technology choice flows from the problem definition, not the reverse.

IBM’s recent paper on AI agent architecture makes this point compellingly. Their analysis reveals that many AI agent deployments stall after the pilot phase not because the technology fails, but because organizations are trying to force-fit advanced AI onto fundamentally broken workflows. Technology works fine; the underlying processes don’t.

This isn’t about being anti-technology or advocating for needless delays. It’s about being strategic. AI offers unprecedented opportunities to reimagine how work gets done, but only if we’re willing to question the status quo first. The businesses that will truly benefit from AI aren’t those that deploy the most tools the fastest. They’re the ones that take the harder path: examining their operations critically, identifying what needs to change, and then—and only then—leveraging AI to build something better.

The future belongs not to those who automate the present, but to those who redesign it first.

Posted in AI, Crowdsourcing | 2 Comments

Unlocking SME Innovation: Why AI-Based Problem-Solving is the Great Equalizer

In mid-October 2009, I was visiting with a client, a large, Midwest-based paint and coating manufacturing company.

As part of their product development process, the company’s engineers built a powerful outdoor pump to paint industrial buildings. The pump worked beautifully in indoor testing, but when the engineers tried to use it outdoors, the pump started to clog frequently, making it essentially useless. The client wanted me to help them run a crowdsourcing campaign aimed at redesigning the pump.

When speaking with my counterparts in the client’s innovation group, I pointed out that the indoor and outdoor testing conditions weren’t identical. The indoor testing had been done during the summer, when the temperature even in the air-conditioned lab often reached the mid-seventies, while the outdoor temperature in the Midwest at that time of year rarely hit the 60°F mark. Could it be that the clogging was somehow caused by the temperature shift?

My hunch turned out to be correct. The problem was not the pump design; it was the paint. Its viscosity rose sharply with even a small drop in temperature, causing the pump to clog. The engineers fixed the problem by simply adjusting the paint formulation.

As an innovation manager, I like to remind my clients that the most important part of the problem-solving process is to correctly define the very problem they’re trying to solve.

The sad reality is that many large organizations, both corporate and non-profit, fail to identify the root cause of their problems. Instead, they immediately start looking for something—anything!—that may look like a solution.

To me, this is equivalent to taking Tylenol to relieve a headache even before knowing what caused it: hangover, mild cold, chronic migraine, or advanced glioblastoma.

The situation is even worse for small and medium-sized enterprises (SMEs). They’re under constant pressure to innovate, but often lack the dedicated innovation departments, large budgets, and internal resources that their larger competitors rely on. And while traditional consulting firms primarily cater to enterprise-level clients, SMEs are often left underserved, with internal problem-solving capabilities that are ad hoc at best.

The AI Revolution: A Generational Moment for SMEs

Advances in AI, particularly large language models (LLMs), dramatically reduce the cost of domain expertise. By using tools like ChatGPT, Claude, or Gemini, everyone can now tap into knowledge that just a few years ago was accessible only to large and resource-rich organizations.

This presents a generational opportunity to level the playing field. AI-based tools can stimulate the creative process, energize problem-solving, and support decision-making at SMEs with unprecedented speed and affordability.

What we’re witnessing isn’t simply an upgrade to existing business tools—it’s a fundamental shift in how problems get solved. Consider the cognitive cleanup that AI enables: where SMEs once struggled to sift through mountains of data, identify patterns, or generate multiple solution pathways, AI tools can now process complexity in real-time, offering structured thinking frameworks and systematic approaches to innovation challenges.

This transformation enables real-time business unblocking. When an SME faces a technical hurdle, market challenge, or operational bottleneck, AI tools can immediately provide relevant domain expertise, suggest alternative approaches, and help teams think through consequences before committing resources. The days of waiting weeks for external consultants or struggling in isolation are rapidly ending.

The emergence of problem-solving intelligence through AI represents more than efficiency gains—it’s about democratizing strategic thinking itself. SMEs can now engage in sophisticated scenario planning, competitive analysis, and innovation forecasting that were previously the exclusive domain of large corporations with dedicated strategy teams.

What makes this moment truly generational is the compound effect: as AI tools become more sophisticated and SMEs become more adept at leveraging them, the competitive advantages traditionally held by larger organizations begin to erode. Agility becomes more valuable than resources. Creative problem-solving trumps bureaucratic processes.

The Future of Innovation Services

This is also the moment to redefine traditional consulting by combining human expertise with AI tools and bringing cutting-edge innovation practices to SMEs across industries. It’s time to introduce AI-augmented innovation services for SMEs.

The new paradigm isn’t about replacing human insight with artificial intelligence—it’s about amplifying human creativity and judgment with AI’s processing power and knowledge synthesis capabilities. This hybrid approach enables SMEs to punch above their weight class, competing not just on price or niche expertise, but on the quality and speed of their innovation processes.

It’s not a transient trend. It’s a blueprint for the next generation of SME decision-making. The organizations that embrace this shift now will find themselves equipped with sustainable competitive advantages that compound over time, while those that hesitate risk being left behind in an increasingly AI-augmented business landscape.

Posted in AI, Innovation | 1 Comment

The Brainstorming Renaissance: How GenAI Tools Are Rewriting the Rules of Creativity

What if the best idea in your next big innovation meeting didn’t come from your star designer, but from a chatbot?

This isn’t a futuristic thought experiment; it’s happening now. Generative AI tools like ChatGPT, Midjourney, and Stable Diffusion are infiltrating brainstorming sessions, product design sprints, and even poetry readings. They’re not just helping — they’re outperforming human contributors on key metrics like speed, idea quality, and production cost.

As the ideation landscape is redrawn, it raises profound questions: Are AI-generated ideas better than human ones? Who benefits most from these tools — the seasoned expert or the curious novice? And more provocatively, is this the death of creativity, or its long-overdue rebirth?

Let’s unpack this creative renaissance in two acts.

Act I. GenAI vs. Human Brains: The Battle of Ideas

Quality, Novelty, and Feasibility: The Metrics That Matter

The old belief that “creativity is uniquely human” is quickly eroding. A landmark 2023 study by Girotra and colleagues compared ideas generated by ChatGPT-4 with those brainstormed by students at an elite university. 

The task? Inventing commercially viable products. The results? Staggering.

ChatGPT-4 produced ideas with higher average quality, as measured by consumer purchase intent. It also dominated the high-performance tier: 35 of the top 40 ideas came from the model, not the humans. And it did all this at roughly one-fortieth the cost of its human counterparts.

Similarly, Meincke et al. (2024) showed that when GPT-4 was fed a few high-quality examples (a technique known as few-shot prompting), its outputs significantly outpaced those from human ideators across multiple dimensions of perceived value, though humans still edged out the machine on idea novelty.

This novelty gap has consistently surfaced across domains. In innovation tasks, artistic expression, and even scientific ideation, humans tend to produce slightly more novel ideas. But here’s the twist: being novel doesn’t always mean being better.

In real-world innovation, novelty without feasibility might be just noise. That’s where GenAI shines — balancing utility with surprise. In the words of Joosten et al. (2024), AI-generated ideas often have higher customer benefit and overall value, even when they are only moderately novel.

Similar things happen in the art world. When human evaluators were asked to judge whether a poem was written by a human or ChatGPT-3.5, they failed to tell the difference, and often preferred the AI version. The reason? AI poetry was rated higher on rhythm and beauty, two key markers of aesthetic impact.

The creative playing field isn’t just leveling — it’s shifting.

Speed and Cost: The Unfair Advantage of GenAI

Creativity has always come at a cost: time, energy, expertise. Generative AI blows this equation wide open.

In a 2024 study by Boussioux et al., AI generated high-quality business ideas at a fraction of the time and cost of human crowdsourcing. The human-generated solutions cost $2,555 and consumed 2,520 hours of effort. GPT-4 produced comparable (and in many cases better) ideas in 5.5 hours for only $27.

In artistic domains, the same pattern holds. Zhou and Lee (2024) analyzed over 4 million artworks and found that artists using GenAI tools experienced a 25% increase in productivity and a 50% boost in engagement metrics like likes and shares. GenAI didn’t just amplify quantity; it elevated quality, especially when human artists actively filtered and curated the outputs.

But this productivity surge comes with a subtle risk: homogenization. Studies consistently show that GenAI outputs, particularly when used en masse, tend to be more similar to each other. The diversity of ideas — that raw, unpredictable chaos of human thought — gets smoothed out by the statistical instincts of the machine.

Prompt engineering can mitigate this to an extent. Techniques like chain-of-thought reasoning or persona-driven prompts have shown promise in boosting AI’s creative variance. But for now, GenAI is a volume weapon, not a chaos engine.
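To make the persona-driven idea concrete, here is a minimal sketch: generate one prompt per persona so that each model call starts from a different vantage point, spreading outputs across idea space. The persona list and template are illustrative assumptions, not drawn from the cited studies.

```python
# Illustrative personas; diversity of the personas drives diversity of the outputs.
PERSONAS = [
    "a frugal college student",
    "a retired aerospace engineer",
    "a street artist",
    "a pediatric nurse",
]

def persona_prompts(task, personas=PERSONAS):
    """Return one persona-framed prompt per persona for independent model calls."""
    return [f"You are {p}. {task}" for p in personas]

for p in persona_prompts("Propose one unconventional product idea."):
    print(p)
```

Each prompt would be sent to the model separately; the point is that varying the framing, not just the sampling temperature, is what pushes the outputs apart.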

Act II. Who Gains More? Novices vs. Experts in the GenAI Era

The Democratization of Ideation

In many ways, GenAI is the great equalizer.

Doshi and Hauser (2024) found that low-creativity participants improved their storytelling by 11% when given access to AI ideas. Not only did their performance increase, but the creative gap between novices and high performers virtually disappeared. AI raised the floor without lowering the ceiling.

This has profound implications for innovation. Students, junior employees, or people outside traditional innovation roles can now participate meaningfully in ideation. As Girotra and Meincke’s work suggests, with a few examples and a well-engineered prompt, anyone can contribute viable, high-quality ideas.

Art mirrors this trend. In AI-assisted haiku creation, collaborative efforts between humans and machines consistently outperformed both pure AI and pure human poems in aesthetic evaluations. GenAI helps amplify latent creativity, especially for those who lack formal training or confidence.

In short, GenAI levels the playing field.

The Expert Paradox: When Experience Gets in the Way

Ironically, experienced professionals don’t always benefit from GenAI — and in some cases, it may undermine their performance.

A striking example comes from a study by Eisenreich et al. (2024). When experts were shown AI-generated ideas for inspiration, they performed worse than either “pure” AI or “pure” human ideators. Why? The explanation seems to be anchoring — AI outputs may constrain creative thinking rather than catalyze it among seasoned minds.

This insight challenges the assumption that more expertise means better outcomes when using AI tools. Instead, it suggests a new skill is required: collaborating effectively with AI by guiding, curating, and editing its outputs without being creatively boxed in.

Artists face the same challenge. In visual domains, Zhou and Lee (2024) found that those who simply plugged ideas into AI tools produced more generic work. But artists who curated and refined AI outputs saw the biggest boosts in evaluations and audience engagement.

The future expert isn’t just a creator. They’re a creative director, orchestrating a human-machine ensemble to push boundaries rather than settle into comfortable patterns.

Conclusion: From Brainstorming to Brainhacking

We are witnessing a historic shift — not just in how ideas are generated, but in who gets to generate them and what those ideas look like.

GenAI tools have redefined the ideation process. They produce more, faster, and often better. They empower novices, disrupt experts, and challenge our deepest assumptions about creativity. Yet they also introduce risks: homogenization, bias, and the temptation to outsource too much of our thinking to machines.

The challenge isn’t to resist GenAI, but to use it wisely. To know when to prompt and when to pause. To explore widely, then filter ruthlessly. To let GenAI flood the canvas, but retain the brush.

So the next time you need a breakthrough idea, don’t just think outside the box. Ask your favorite bot what it thinks the box should be made of.


The End of the Crowd? (Why AI Won’t Fully Replace Human Crowdsourcing — Yet)

AI has already claimed its seat at the innovation table — and it didn’t even knock. It barged in, armed with large language models (LLMs) like GPT-4, reshaping how companies ideate, prototype, and solve problems. 

With astonishing speed and minimal cost, these tools are outperforming humans in tasks ranging from code generation to business model design. So, here’s the billion-dollar question: if AI can already outperform human crowds in many areas, is traditional crowdsourcing about to die?

A compelling study by Boussioux et al. (2024), titled “The Crowdless Future? Generative AI and Creative Problem Solving,” puts this debate into sharp focus. Their experiment pitted human-generated business ideas against those created using a human-AI hybrid approach. The results? AI-assisted solutions, especially when guided through strategically refined prompts, scored significantly higher in value, including financial and environmental impact, and overall quality. And they came with a price tag of just $27 compared to over $2,500 for the human-only submissions.

Translation? AI isn’t just good at creative problem-solving. It’s lean, scalable, and often better than the crowd, at least when measured by implementation potential and perceived value.

But if AI is that efficient, why aren’t we declaring the death of crowdsourcing right now?

While AI may outpace us humans in cost and consistency, there are at least four powerful reasons why traditional human crowdsourcing is far from obsolete.

Novelty: The Spark of the Unexpected

Boussioux et al. found that human-generated ideas consistently ranked higher in novelty, especially at the upper end of the scale. In other words, when you’re looking for that one-in-a-million idea — the weird, wild, breakthrough concept that no dataset can predict — humans may still have the edge.

AI models, no matter how advanced, are trained on what has been, not what could be. Their “creativity” is fundamentally synthetic — it’s a remix of the past. Human crowds, on the other hand, bring serendipity, fringe thinking, and unpredictable combinations. And in innovation, sometimes it’s one crazy idea, not a dozen “good” ones, that changes everything.

Ownership: Who Gets the Credit (and the IP)?

With AI-generated content, the question of intellectual property is still a legal and ethical minefield. If an LLM produces a groundbreaking idea based on prompts from your team, who owns the output? Your team? The model’s creators? The crowd of internet texts that the model was trained on?

Crowdsourcing sidesteps this ambiguity. A human contributor generates a breakthrough idea and signs an agreement transferring all IP rights in that idea to the crowdsourcing campaign sponsor in exchange for a reward, all in a legally transparent and unambiguous way. For organizations wary of future legal headaches, sticking with human solvers may feel like a safer bet, at least until AI governance frameworks catch up.

Marketing Value: Crowdsourcing as Innovation Theater

Let’s be honest: not all crowdsourcing is about getting the best ideas. Sometimes, it’s about signaling. When a company launches an open innovation contest — say, “Reimagine the Future of Food” — it’s making a statement: We’re listening to our customers. We’re cutting-edge. We’re engaged. Investors love this!

An AI prompt doesn’t generate press releases, Instagram buzz, or goodwill. But a vibrant campaign with real people submitting ideas does. For companies looking to boost their image as forward-thinking and innovative, the crowd still offers a potent narrative tool.

Community: It’s Not Just About the Ideas

Crowdsourcing doesn’t just produce solutions — it builds communities. When done right, it creates a network of passionate participants who care about a problem, become brand advocates, and sometimes even co-founders of spinoff ventures.

AI, by contrast, is transactional. It doesn’t care. It doesn’t get excited. It won’t show up at your hackathon or promote your brand on social media. That human energy — the sense of being part of something bigger — is still irreplaceable.

So, will AI replace crowdsourcing?

In many ways, it already has — for tasks where speed, scale, and strategic value matter most. But for organizations chasing radical novelty, craving emotional connection, or navigating uncertain legal waters, the human crowd still has a job to do.

Maybe the future isn’t crowdless — it’s crowdsmart. A hybrid world where AI augments, not replaces, the wisdom of the crowd. Where LLMs help us sift, refine, and accelerate, but humans still supply the spark.

In the end, it’s not AI vs. the crowd. It’s AI + the crowd. And when those two forces align, innovation doesn’t just scale — it soars.

Bold claim? Perhaps. But when the sparks fly from both silicon and soul, that’s when real innovation begins.
