
Integrating AI into business practice has gone from a fringe conversation to a boardroom imperative. The debate, spanning everything from productivity gains to fears of de-skilling, is polarized: some see AI as a game-changer for human potential; others worry it’s a slippery slope toward dependence and displacement.
Negative sentiments notwithstanding, the real question is not whether to use AI, but how to use it wisely. Four cutting-edge studies provide a nuanced view of this evolving frontier, shedding light on when AI helps, when it hinders, and how it may redefine not just work but workers and teams themselves.
Inside the Frontier: When AI Knows What It’s Doing
The “jagged technological frontier” isn’t just a catchy metaphor—it’s the heart of a massive field experiment run with 758 consultants at the Boston Consulting Group. Researchers introduced GPT-4 to professionals tasked with solving complex business problems.
The key insight? AI is effective only when operating within its capabilities, or “inside the frontier.” These are tasks that AI can complete reliably: structured analysis, clear communication, or ideation based on known patterns. “Outside the frontier” lies the domain of ambiguity, tacit knowledge, and judgment, and it is here that AI stumbles and sometimes misleads.
The study’s surprising twist was who benefited most from using AI tools. It wasn’t the top performers but the consultants with below-average baseline performance. For these individuals, AI acted as an accelerant, boosting quality by over 40% and productivity by 25%. In contrast, on tasks outside the AI’s comfort zone, consultants with AI were 19 percentage points less likely to deliver the correct solution. These findings don’t argue against AI; they reveal its shape. Like any tool, AI is powerful only when used in the right context. Success comes from recognizing where AI’s frontier lies and adapting accordingly.
Too Smart to Help? When Better AI Backfires
What happens when AI becomes too competent?
In a striking counterpoint to exuberant techno-optimism, an earlier study by Dell’Acqua (2022) explores a phenomenon the author dubbed “falling asleep at the wheel.” In a field experiment with 181 professional recruiters, participants evaluated resumes with AI assistance. But this time, the quality of the AI tool varied: some recruiters received high-accuracy recommendations, others low-accuracy ones.
Counterintuitively, the recruiters using lower-performing AI tools made better decisions. They were more engaged, spent more time reviewing resumes, and were more likely to challenge AI suggestions. Meanwhile, highly accurate AI caused human effort to drop: recruiters deferred too quickly to machine judgment and became less accurate in their assessments.
This wasn’t a fluke; the effect was especially pronounced for experienced professionals, whose own skills were diluted by over-reliance on the algorithm.
The takeaway is clear: high-quality AI can displace rather than augment human expertise. In such settings, algorithmic excellence may seduce users into disengagement, letting their cognitive muscle memory atrophy. Maximizing joint performance may sometimes require less powerful AI, at least when keeping humans in the loop is critical.
Smarter Isn’t Always Better—But Sometimes It Is
Otis and colleagues offer a compelling twist to this narrative. In a randomized trial involving 640 Kenyan entrepreneurs, participants received business advice either from a traditional guidebook or from a GPT-4-based AI mentor on WhatsApp. Unlike the recruiter study, this AI tool helped top performers, boosting their revenue and profits by over 20%. But it hurt low performers, whose performance dropped by about 10%.
Why the contradiction? It comes down to task selection and user discretion. Entrepreneurs had autonomy over when and how to use the AI, and high performers asked better questions about more manageable tasks. In contrast, low performers sought help with complex, ill-structured problems that lay outside the AI’s frontier, which led to bad advice and worse outcomes.
This study complicates the notion that better AI inevitably breeds disengagement. It shows that human judgment about what AI can and cannot do is the real driver of success. When users are savvy about AI’s limitations, even powerful systems can be transformative. When they’re not, AI becomes a mirage: confidence without clarity.
Teaming Up with the Machine: A New Era of Collaboration
If the first three studies examined AI as a co-pilot for individuals, a just-published experiment by a Harvard/Wharton team reimagines AI as a collaborator for entire teams. Run with 776 professionals at Procter & Gamble, the study asked: can AI fill the collaborative roles typically occupied by humans?
Participants were randomly assigned to four groups: individuals working solo, human teams of two, individuals with AI, and human teams with AI. All tackled real product development challenges. The results were eye-opening: individuals with AI matched the output of human teams. Even more striking, teams with AI outperformed all others, including human-only teams.
AI’s impact went beyond better performance: it also flattened functional silos. Without AI, participants generated ideas aligned with their functional backgrounds: R&D workers produced more technical proposals, and commercial workers more business-oriented ones. With AI, everyone produced more balanced solutions, regardless of background. The emotional benefits were evident too: users reported more positive feelings and less frustration when working with AI.
The implication of this study is profound: AI isn’t just a tool; it’s evolving into a cybernetic teammate, one that enhances creativity, bridges knowledge gaps, and even mimics the social glue of teamwork. (Who could have predicted this even a couple of years ago?)
This shift could redefine how we structure teams, allocate expertise, and manage work across the enterprise. The age of the solitary “AI-enhanced worker” is giving way to something richer—and potentially more disruptive.
The AI Edge Depends on the Human Hand
Across four major field studies, a clear pattern emerges. AI can supercharge performance, but only when we understand how, when, and by whom it should be used. It’s not the intelligence of the algorithm that matters most; it’s the alignment between task, user, and tool.
GPT-4 boosted underperformers, but only on tasks inside the AI’s frontier. High-quality AI backfired when users relied on it blindly. Entrepreneurial outcomes varied with users’ understanding of AI’s strengths and limits. And now AI isn’t just augmenting individuals; it’s enhancing teams.
As businesses race to adopt generative AI, the lesson is both simple and sobering: AI is only as good as the people who know how to use it. And isn’t that true of every other tool as well?
So, are we ready to treat AI not just as a tool, but as a teammate? As a manager? Feel free to scratch out the last question.