The GenAI Paradox, Part 2: From Boardroom Hope to Operational Reality
Mar 30, 2026 - Ethan Seow

The C-suite is betting on Agentic AI as the next silver bullet. On the ground, agents enter infinite loops and burn through API credits, while Klarna's automation-first strategy has already reversed course.


This is Part 2 of a five-part series. Read Part 1: The Great Divide | Part 3: The Invisible Workforce Crisis | Part 4: Shadow AI and the Singapore Model | Part 5: Trust, Governance, and What Comes Next


Leadership Sentiment: The View from the C-Suite

The FOMO-Defensive Strategy

Despite elusive ROI, investment continues to surge. This behaviour is driven largely by a “Fear Of Missing Out” and a defensive strategic posture. Deloitte’s 2025 survey confirms that investment is growing, driven by the fear of falling behind competitors rather than clear, calculated returns — 85% of organisations increased AI investment in the past year, and 91% plan to increase further. IBM’s 2025 CEO Study found that 61% of CEOs believe competitive advantage depends on having the most advanced Generative AI, creating a self-reinforcing cycle of investment despite the lack of immediate payoff.

The Shift to Agentic AI as the New Hope

As the limitations of chat-based GenAI interfaces become apparent, leadership attention has pivoted aggressively toward “Agentic AI.”

The Promise. Agents promise to move beyond content generation to task execution — planning, reasoning, and acting autonomously to achieve business goals. This represents a shift from “chatting with data” to “acting on data.”

Adoption Rates. McKinsey reports that 62% of organisations are experimenting with AI agents, though only 23% have begun scaling them.

The “Silver Bullet” Trap. There is a growing danger that leaders view Agentic AI as a silver bullet to solve the productivity stagnation of GenAI. Experts warn that chasing agentic capabilities without fixing underlying data fragmentation and process issues will lead to “sterile, less differentiated experiences” and further pilot failures.

The Trust and Governance Dilemma

Trust remains a critical bottleneck. Fewer people believe AI companies will safeguard their data, and concerns regarding fairness and bias persist. For executives, the challenge is balancing the pressure to innovate with the rising tide of AI-related incidents and regulatory scrutiny. The “New Gavel” effect predicts that executives will increasingly be held personally responsible for rogue AI actions, shifting AI risk from an IT problem to a board-level liability.


The Operational Reality: Agentic AI and Technical Failure Modes

While the C-suite envisions a future of autonomous agents, the operational reality on the ground in 2025 is fraught with technical fragility and integration nightmares.

The Reality of Agent Performance

Agentic AI pilots in 2025 have revealed significant limitations in the technology’s readiness for complex, high-stakes enterprise environments.

Infinite Loops and Cost Spirals. In technical communities, developers report agents entering “infinite loops,” where an agent attempts to fix an error, fails, retries, and burns through significant API credits without human intervention.
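The guardrail against this failure mode is mundane but essential: a hard cap on retries and spend, with escalation to a human when the cap is hit. The sketch below is illustrative only — the names (`run_with_budget`, `BudgetExceeded`) and the cost figures are hypothetical, not any vendor's API.

```python
# Hypothetical sketch: a hard budget on retries and spend stops an agent's
# fix-fail-retry cycle from running unsupervised.

class BudgetExceeded(Exception):
    """Raised when the agent exhausts its retry or spend budget."""

def run_with_budget(step, max_attempts=3, max_cost=1.00, cost_per_call=0.25):
    """Run `step` (a callable returning (success, output)) under hard caps."""
    spent = 0.0
    for attempt in range(1, max_attempts + 1):
        spent += cost_per_call
        if spent > max_cost:
            raise BudgetExceeded(f"spend {spent:.2f} exceeds cap {max_cost:.2f}")
        success, output = step()
        if success:
            return output, spent
    # All attempts failed: escalate to a human instead of looping forever.
    raise BudgetExceeded(f"{max_attempts} attempts failed; escalating")

# Demo: a step that always fails, like an agent stuck repairing the same error.
attempts = []
def failing_step():
    attempts.append(1)
    return False, None

try:
    run_with_budget(failing_step)
except BudgetExceeded:
    pass

print(len(attempts))  # prints 3: the loop stopped at the retry cap
```

Without the cap, the same loop would retry indefinitely, which is exactly the cost spiral developers report.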

Memory and Context Deficits. Agents often lack robust long-term memory. Without the ability to retain context over long execution horizons, they struggle to learn from mistakes or adapt to specific user preferences, rendering them “stateless” and inefficient for continuous workflows.
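What "stateless" means in practice: without a persistence layer, every invocation starts from zero, so the agent re-asks for information it was already given. A minimal sketch, using a hypothetical `AgentMemory` key-value interface, shows the difference:

```python
# Minimal sketch (hypothetical interface): a key-value memory lets an agent
# carry a learned user preference across otherwise stateless invocations.

class AgentMemory:
    def __init__(self):
        self._store = {}

    def remember(self, key, value):
        self._store[key] = value

    def recall(self, key, default=None):
        return self._store.get(key, default)

def handle_request(request, memory):
    # A stateless agent would need the locale supplied on every call;
    # with memory, it adapts after learning the preference once.
    locale = request.get("locale")
    if locale is not None:
        memory.remember("locale", locale)
    else:
        locale = memory.recall("locale", "en")
    return f"[{locale}] {request['text']}"

mem = AgentMemory()
first = handle_request({"text": "hello", "locale": "sv"}, mem)
second = handle_request({"text": "hello again"}, mem)
print(first)   # prints [sv] hello
print(second)  # prints [sv] hello again — preference retained across calls
```

Production systems need far more than a dictionary — summarisation, retrieval, expiry — but the absence of even this layer is what makes many 2025 agents inefficient for continuous workflows.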

The “95% Recall” Trap. Chaining multiple stochastic components — where each step has a less than 100% success rate — leads to compounding errors. If an agentic workflow requires five steps, each with 95% accuracy, the total system reliability drops to roughly 77%. In enterprise processes requiring near-perfect execution (financial reconciliation, compliance reporting), this unreliability is unacceptable. This compounding error problem is what we identified in Sovereign Command as a core architectural challenge — and why governance cannot be an afterthought bolted onto autonomous systems.
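The 77% figure above is simple compounding: the reliability of a chain of independent steps is the product of the per-step success rates.

```python
# Compounding error: chaining stochastic steps multiplies their success rates.
per_step_accuracy = 0.95
steps = 5

system_reliability = per_step_accuracy ** steps
print(round(system_reliability, 4))  # prints 0.7738 — roughly 77%

# At ten steps the same per-step accuracy drops below 60%.
print(round(per_step_accuracy ** 10, 4))  # prints 0.5987
```

This assumes step failures are independent; correlated failures can make the chain either better or worse, but the qualitative point stands — reliability decays geometrically with chain length.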

Klarna: A Cautionary Tale

The experience of Swedish fintech Klarna serves as a primary case study for the risks of aggressive AI substitution.

In February 2024, Klarna announced that its AI customer service assistant was handling the equivalent workload of 700 outsourced agents. Separately, the company reported saving $10 million in marketing costs by using AI for image generation instead of agencies. The customer service AI was projected to drive a $40 million annual profit improvement.

Then came the reversal.

Quality Degradation. CEO Sebastian Siemiatkowski admitted that “cost unfortunately seems to have been a too-predominant evaluation factor,” resulting in “lower quality” outcomes. Customers complained about interactions with what they described as “slop-spinning algorithms.”

The Pivot Back to Humans. By May 2025, Klarna initiated a recruitment drive to bring humans back into the loop, ending a year-long hiring freeze. Siemiatkowski described the new model as an “Uber-type setup” targeting students and rural workers for flexible, remote customer service roles — a human fallback to ensure quality and empathy.

The Klarna trajectory — from triumphant AI-first announcement to public quality admission to rehiring drive in barely twelve months — is not an indictment of AI in customer service. It is an indictment of automation strategies that optimise for cost without governance architecture to maintain quality.

Vendor Fragmentation and Data Moats

Implementation is further hampered by a fragmented vendor ecosystem. Major players like Salesforce, Microsoft, and Workday are building “walled gardens,” creating interoperability challenges. CIOs report that agents built in one ecosystem cannot effectively interact with workflows in another, preventing the vision of a unified, autonomous enterprise.

About the Author

Ethan Seow is a Co-Founder of the Centre for AI Leadership and a cybersecurity expert. He is ISACA Singapore’s 2023 Infosec Leader, an ISC2 2023 APAC Rising Star in Cybersecurity, a TEDx and Black Hat Asia speaker, and an educator, culture hacker, and entrepreneur with over 13 years in entrepreneurship, training, and education.

This is Part 2 of a five-part series. Continue to Part 3: The Invisible Workforce Crisis.