The GenAI Paradox, Part 3: The Invisible Workforce Crisis
Mar 30, 2026 - Ethan Seow

Junior roles frozen, middle managers becoming therapists, and a generation of CS graduates competing against tools that are faster and cheaper. The workforce impact no one in the C-suite wants to quantify.


This is Part 3 of a five-part series. Read Part 1: The Great Divide | Part 2: Boardroom Hope vs Operational Reality | Part 4: Shadow AI and the Singapore Model | Part 5: Trust, Governance, and What Comes Next


The impact of AI on the workforce is defined by a widening gulf between the “official” narrative of empowerment and the lived experience of anxiety, displacement, and hidden labour.

The Junior Freeze and the Skills Gap

One of the most alarming trends in 2025 is the collapse of the entry-level job market in technology and knowledge sectors.

The “Codified Knowledge” Trap. AI tools have become proficient at tasks typically assigned to junior employees — coding boilerplate, drafting emails, summarising data. Consequently, companies have frozen hiring for junior roles, preferring to hire seniors who can orchestrate AI agents.

The Data. A Stanford Digital Economy Lab study (Brynjolfsson, Chandar & Chen, August 2025) reveals a 13% relative decline in employment for workers aged 22–25 in AI-exposed occupations, compared to workers in low-exposure occupations. For software developers specifically in that age bracket, the decline was closer to 20%. This creates a “missing rung” in the career ladder — and raises long-term questions about how the next generation of experts will be trained if they cannot enter the workforce.

The Sentiment. Practitioner communities reflect widespread despair among recent CS graduates, who feel they are competing against tools that are faster and cheaper. Many describe feeling “scammed” by an educational system that trained them for roles that are now contracting.

This is the pipeline problem we identified in our analysis of AI and software engineering: if AI substitution compresses the junior tier, the senior engineer shortage does not show up for five to seven years — long after the executives who made the hiring decisions have moved on.

Middle Management: The Ecosystem Managers

Middle managers are absorbing the shock of this transition, their roles shifting from task oversight to “ecosystem management.”

The “Therapist” Role. Managers report spending increasing time managing the emotions of their teams — fear of layoffs, burnout, cynicism — rather than the work itself. They are the buffer zone between executive AI mandates and workforce reality.

Managing Invisible Work. AI-driven dashboards often present a sanitised version of reality (“everything is fine”), failing to capture the “shadow work” required to fix AI errors or the burnout of employees juggling unrealistic targets. Managers must navigate “invisible negotiations” to keep the system running.

Shadow Automation. Managers also face the challenge of employees quietly automating their work but hiding the productivity gains to avoid increased quotas. This creates a “cat-and-mouse” game that erodes trust and obscures true capacity.

The middle management crisis is one of the least discussed and most consequential dynamics of enterprise AI adoption. These are the people responsible for translating executive AI strategy into operational reality — and they are doing it without support, recognition, or adequate tooling.

Practitioner Sentiment: AI Fatigue

Analysis of practitioner communities reveals deep-seated “AI fatigue” across the technology workforce.

“Forced Integration.” Developers and knowledge workers describe “forced usage” of AI tools that are perceived as surveillance mechanisms or “feature factories” rather than genuine productivity aids.

“AI Slop.” There is growing resentment toward the “echo chamber” effect of AI-generated content, with practitioners describing models degrading into “digital echo chambers of nonsense.”

Scepticism of ROI. Discussions in IT management communities consistently question the ROI of GenAI, describing it as “vaporware” or “hype” that executives have bought into without understanding the technical limitations. This bottom-up scepticism contrasts sharply with the top-down optimism documented in the C-suite sentiment data — and the gap between the two is itself a risk indicator.

This is a pattern the Sovereign Command research calls the “Eloquence Trap” at the organisational level: polished executive dashboards and vendor demos suppress the verification instinct, while the people closest to the operational reality are systematically excluded from the investment decisions.

About the Author

Ethan Seow is a co-founder of the Centre for AI Leadership and a cybersecurity expert. He is ISACA Singapore’s 2023 Infosec Leader, an ISC2 2023 APAC Rising Star Professional in Cybersecurity, a TEDx and Black Hat Asia speaker, and an educator, culture hacker and entrepreneur with over 13 years of experience in entrepreneurship, training and education.

This is Part 3 of a five-part series. Continue to Part 4: Shadow AI and the Singapore Model.