AI as Problem-Solving Partner: Doing It Right

This is the fourth post in the series “Problem First: AI-Assisted Problem Solving for Organizations That Can’t Afford to Get It Wrong.”

I ended the previous post with a claim that deserves scrutiny: that the real value of AI for SMEs and nonprofits isn’t in generating answers but in improving the quality of their questions. That AI, used within a structured problem-solving process, becomes the thinking partner these organizations have never been able to afford.

It’s a bold statement. But what does it mean in practice?

This is the post where the series comes together. In Post 1, I argued that organizations are bad at solving problems because they skip problem definition. In Post 2, I laid out a five-stage process for doing it right. In Post 3, I argued that SMEs and nonprofits face a structural deficit that makes the problem-solving gap especially dangerous for them. Now the question is: how does AI fit into the process—not as a general concept, but stage by stage, as a specific working tool?

The answer, I’ve come to believe, is not the one most people would expect. AI’s equalizing power isn’t that it gives small organizations the same answers as large ones. It’s that it gives them the ability to ask better questions. And that ability, when embedded in a disciplined process, is a game changer.

The Interlocutor, Not the Oracle

The most common way organizations use AI today is as an answer machine. You give it a question, it gives you an answer. Need a market analysis? AI produces one. Need a competitive landscape? Done. Need a strategic framework? Here are three.

This is useful but limited—and, for organizations with a problem-solving deficit, potentially dangerous. If you haven’t defined your problem correctly, AI will efficiently generate sophisticated answers to the wrong question. It will do so fluently, confidently, and at very low cost. The output will look polished, even brilliant on occasion. And it will point you in the wrong direction.

The alternative is to use AI not as an oracle that delivers answers, but as an interlocutor that improves your thinking. An interlocutor pushes back. It asks clarifying questions. It surfaces assumptions you didn’t know you were making. It proposes alternative framings you hadn’t considered. It doesn’t replace your judgment; it sharpens it.

This is the mode that matters for structured problem solving. And it maps directly onto the five-stage process I described in Post 2. Let me walk through what AI-as-interlocutor looks like at each stage.

AI Across the Five Stages

Stage 1: Problem Intake and Framing. The first stage asks what is actually happening, where it manifests, and why it matters. AI’s contribution here is not to define the problem for you—that would be precisely the premature diagnosis the process is designed to prevent. Instead, AI can help you expand the problem description before you narrow it. You describe the situation; AI asks follow-up questions that probe dimensions you may not have considered. It can suggest alternative ways to frame the same set of symptoms. It can flag when your problem statement already contains an embedded solution—“We need an AI strategy”—and prompt you to separate the symptom from the assumed remedy. The human team brings the lived experience of the problem. AI brings the discipline of not accepting the first framing as the final one.

Stage 2: Clarification, Assumptions, and Boundaries. This is where AI may add the most value for resource-constrained organizations. The central task of Stage 2 is to distinguish facts from assumptions and surface constraints. AI is exceptionally good at this—not because it knows your organization better than you do, but because it has no stake in your assumptions. It doesn’t share the institutional belief that “we can’t change the process” or that “the problem is obviously mechanical.” It can systematically ask: what evidence supports this claim? Is this a fact or a belief? What would change if this assumption turned out to be wrong? For small teams that have been living inside a problem for months or years, this external pressure-testing is invaluable. AI becomes the colleague who wasn’t in the room when the assumption was first formed—and therefore doesn’t treat it as a settled truth.

Stage 3: Root Cause Analysis. Root cause analysis requires generating multiple hypotheses across structural, process-level, human, and strategic dimensions. This is analytically demanding work that small organizations rarely have the bandwidth to do thoroughly. AI can help by systematically exploring causal pathways that a small team might not have the time or expertise to map on its own. It can propose root-cause hypotheses the team hasn’t considered, assign preliminary confidence levels, and—critically—resist the gravitational pull toward premature solutioning that derails most diagnostic efforts. AI doesn’t replace the team’s domain knowledge; it extends the team’s analytical reach.

Stage 4: Solution Generation. Once root causes are identified, AI can generate solution options that are directly mapped to those causes—varied in ambition (incremental to bold), conscious of stated constraints, and honest about trade-offs. This is where AI’s breadth of knowledge becomes genuinely useful: it can draw on approaches from adjacent industries, analogous problems, and diverse disciplines that a small team might never encounter. The key difference from typical AI use is that solution generation happens after diagnosis, not instead of it. Solutions are tethered to root causes, not floating freely.

Stage 5: From Analysis to Action. AI can support the transition from thinking to doing by stress-testing preferred solutions: probing assumptions, modeling implementation scenarios, anticipating barriers, and identifying second-order effects. It can help sequence actions by mapping dependencies and estimating resource requirements. What AI cannot do—and this is important—is assign ownership, allocate budgets, or make the political decisions that implementation requires. Stage 5 is where human leadership is most irreplaceable. AI prepares the ground; people commit to the path.

Building Tacit Knowledge, One Decision at a Time

There is a secondary benefit of using AI this way that goes beyond any single problem-solving exercise.

In the previous post, I discussed the challenge that SMEs and nonprofits face in retaining and systematizing tacit knowledge: the accumulated judgment and institutional memory that large organizations build over decades. Small organizations possess real expertise, often hard-won over years of frontline experience. But their small staffs, high turnover, and constant operational pressure make it difficult to capture that knowledge in ways that survive individual departures or scale beyond individual memory.

AI-assisted problem solving creates a partial scaffold for this. When a small team uses AI to walk through scenario analysis, consequence mapping, or assumption testing, it is externalizing reasoning that would otherwise remain implicit. The process produces artifacts—documented assumptions, root cause maps, decision rationales—that persist beyond the exercise. Over time, these artifacts become a form of institutional memory: a record of how the organization thought through its most important challenges.

This doesn’t replace tacit knowledge. Nothing can substitute for the judgment of an experienced professional who has spent years understanding a community or a market. But it creates a structured way to accumulate, share, and preserve the thinking that informs decisions—so that when staff leave (as they inevitably do in small organizations), the reasoning doesn’t leave with them.

Problem First, Tool Second

Everything I’ve described above depends on one operating principle: AI enters the process after the human team has done the initial work of framing the challenge. Not before. Not instead.

This is the “problem first, tool second” rule that I’ve been advocating since the first post in this series, and it applies to AI with special force. The technology-centric trap—starting with the tool and looking for problems to apply it to—is more seductive with AI than with any previous technology, precisely because AI is so capable and so versatile. It can do so many things that the temptation to let it lead is almost irresistible.

Resist it. AI is most powerful when it enters a process that already has direction. It accelerates thinking that has already begun. It deepens the analysis that has already been framed. It pressure-tests conclusions that humans have already reached provisionally. Without that human foundation, AI generates output—sometimes impressive output—but not insight.

To revisit the paint pump story one last time: imagine the engineers had turned to AI before questioning their own assumptions. They would have described a clogging pump, and AI would have generated a dozen redesign concepts, each more sophisticated than the last. The output would have been technically excellent and entirely beside the point. The problem was never the pump. AI couldn’t have known that—but a structured process that began with assumption-testing would have surfaced it in minutes.

The Equalizer, Reconsidered

Early in my thinking about AI and innovation, I called AI “the great equalizer” for small organizations. I later questioned that claim when I recognized the paradox of explicit and tacit knowledge. Now, having worked through this series, I’ve arrived at a more nuanced position.

AI is an equalizer—but not because it gives SMEs and nonprofits the same capabilities as large corporations. It’s an equalizer because it gives them something more fundamental: a disciplined way to think through problems that they’ve never had before.

Large organizations have strategy departments, experienced leadership teams, and decades of institutional learning to draw on when they face a complex challenge. They don’t always use these resources well—as this series has documented—but they have them.

SMEs and nonprofits, for the most part, don’t. AI, embedded in a structured process, begins to close that gap. Not by replacing human judgment, but by giving small teams the scaffolding to exercise their judgment more rigorously.

This is not about making small organizations look like large ones. It’s about giving them the ability to think like well-resourced ones—to define problems with precision, diagnose root causes rather than treat symptoms, and deploy their scarce resources against the right targets.

The series is called “Problem First” for a reason. The tool matters. The discipline matters more. And for organizations that can’t afford to get it wrong, the combination of the two—a structured process, powered by AI, led by humans—is no longer a luxury. It’s the way forward.

This is the fourth post in the “Problem First” series. Previous posts: Post 1 — Why We’re So Bad at Solving Problems. Post 2 — The Problem-Solving Manifesto. Post 3 — The Organizations That Need Problem Solving Most Are the Ones Doing It Least.

About Eugene Ivanov

Eugene Ivanov is a business and technical writer interested in innovation and technology. He focuses on factors defining human creativity and socioeconomic conditions affecting corporate innovation.