The narrative is everywhere now. AI will displace millions of white-collar workers within the next twelve to eighteen months. Entire categories of professional work — legal analysis, financial modeling, marketing strategy, software development — will be compressed, automated, or eliminated. The office as we know it is ending.
It is a compelling story. And it contains enough truth to feel inevitable.
Recent projections suggest that 20 to 50 percent of the roughly 70 million white-collar workers in the United States alone could be affected in the coming years. Stock markets reward companies that announce headcount reductions alongside AI adoption. CEOs describe a future in which a handful of key employees, augmented by intelligent systems, replace departments.
But before accepting this as settled cause and effect, it is worth pausing. Not to deny that disruption is happening — it clearly is. But to ask a more precise question: Is AI actually the causal driver of mass layoffs? Or is it functioning as something else entirely — a catalyst, a justification, or even a convenient shield for decisions that have other origins?
The distinction matters: if the cause is misidentified, the response will be misdirected too.
The Assumed Causal Chain — and Where It Breaks
The dominant logic runs as follows:
AI adoption → productivity gains → reduced headcount needs → mass layoffs
Each arrow feels intuitive. Taken together, the chain appears airtight. But under scrutiny, at least two of those links are weaker than they appear.
The first gap: AI adoption does not reliably produce measurable productivity gains.
A July 2025 study from MIT's Media Lab — "The GenAI Divide: State of AI in Business 2025" — analyzed over 300 publicly disclosed AI deployments and found that 95 percent of organizations reported zero measurable impact on their profit and loss statements. Despite $30–40 billion in enterprise investment in generative AI, only 5 percent of integrated pilots were extracting meaningful value. Adoption was broad but shallow: over 80 percent of organizations had explored tools like ChatGPT or Copilot, yet these primarily enhanced individual productivity rather than organizational performance.
This does not mean AI is incapable of transforming work. It means that as of now, the premise that AI deployment directly translates into the kind of productivity surplus that renders large numbers of workers redundant is not well supported by evidence.
The second gap: layoffs correlate more strongly with macro conditions than with AI.
J.P. Morgan Asset Management reported that of the more than 1.1 million job cuts announced by U.S. employers in 2025, approximately 55,000 cited AI as a contributing factor — less than 5 percent of total layoffs, and roughly 0.03 percent of overall employment. As their global market strategist Stephanie Aliaga noted, AI may be fueling market narratives and active debate, but current evidence does not support the claim that it is having a material impact on aggregate employment.
So what is driving the layoffs?
The Real Drivers: Overcorrection, Capital Pressure, and Narrative
The structural forces behind the current wave of workforce reductions are more familiar than the AI narrative suggests.
Post-pandemic overcorrection. Between 2020 and 2022, technology companies hired at unprecedented scale. Meta nearly doubled its headcount from approximately 48,000 to over 80,000. Amazon's total workforce — including warehouse operations — expanded from roughly 650,000 to 1.6 million. When pandemic-era demand normalized, headcounts did not. The layoffs that began in late 2022 were, in large part, a correction of that overshoot. As Fabian Stephany, a researcher in AI and work at the Oxford Internet Institute, characterized it: the current wave is best understood as "late-cycle cost discipline and post-pandemic normalization."
Capital market pressure. The end of the low-interest-rate era shifted investor expectations from growth to efficiency. Markets began rewarding companies that demonstrated operational discipline — including headcount reduction — and penalizing those that did not. This dynamic predates generative AI. It is a feature of the current financial cycle, not a consequence of any particular technology.
AI as narrative device. When a company announces a restructuring, framing it as "AI-driven efficiency" carries strategic value. It signals forward-thinking leadership. It aligns with market expectations. It positions the decision as technologically inevitable rather than managerially discretionary. A December 2025 analysis noted that AI functioned less as a primary cause of layoffs and more as a "useful scapegoat" — a way for companies to frame cost-cutting as strategic transformation. Saying "we are leveraging AI to optimize our workforce" sounds considerably better than "we overhired during the pandemic."
None of this is to say that AI will not reshape labor markets. It almost certainly will. But the current layoff wave is substantially driven by financial conditions, cyclical corrections, and capital market dynamics — not by a sudden, AI-induced surplus of productivity.
The Deeper Problem: Decision Attribution
Here is where the conversation needs to shift altitude — from macroeconomics to governance.
When a company eliminates 1,000 positions and attributes the decision to AI-enabled productivity gains, a specific rhetorical move is taking place. Technology becomes the subject of the sentence. The decision appears to emerge from capability rather than from choice.
But AI does not make staffing decisions. AI does not authorize layoffs. AI does not weigh the trade-offs between short-term cost reduction and long-term organizational capability. People do — executives, boards, investors, and the incentive structures that connect them.
The problem is not that these decisions are being made. Restructuring is a legitimate function of management. The problem is that the decision architecture behind them is rarely visible, rarely designed, and rarely accountable.
When Jamie Dimon, CEO of JPMorgan Chase, spoke at the World Economic Forum in January 2026, he warned that rushing into AI-driven layoffs without safeguards could trigger serious social consequences. He described plans for retraining, redeployment, and income assistance. He even indicated openness to government regulation of AI-related mass layoffs.
What Dimon was describing — whether or not he used the term — was a question of decision architecture. Not what AI can do, but how organizations structure the human judgments that surround its deployment.
This is the question that most organizations have not yet addressed.
Decision Design: Designing the Act of Judgment Itself
The gap is not technological. It is structural. Most organizations have invested heavily in AI capability without designing the judgment structures that govern how that capability is used.
This is the problem that Decision Design addresses.
What it designs: Decision Design takes the act of organizational judgment itself as a design object. It asks: who decides, on what basis, with what authority, and with what accountability? It treats these not as informal norms but as elements of organizational architecture that require explicit, intentional design.
What it is not: Decision Design is not a decision-making methodology. It does not aim to make decisions faster or more accurate. It is not an AI strategy, a digital transformation framework, or a governance checklist. It is concerned with the structure within which decisions occur — not with the content of any particular decision.
What problem it addresses: In most organizations, the structure of judgment is implicit. Decisions are made, but the architecture governing who holds decision rights, what inputs are considered authoritative, and where accountability resides is rarely articulated. AI amplifies this problem. As intelligent systems begin to generate recommendations, draft analyses, and propose actions, the boundary between human judgment and machine output becomes blurred — often without anyone noticing.
At the center of Decision Design is the concept of Decision Boundary.
What it delineates: A Decision Boundary marks the line between what is delegated to automated or AI-driven processes and what remains within human judgment. Critically, this is not a task allocation exercise. It is a question of where decision authority and accountability reside.
What it is not: A Decision Boundary is not an AI implementation roadmap. It does not answer the question "which tasks should we automate?" It answers a different question: "For this judgment, who is responsible — and is that responsibility explicitly assigned?"
Why AI makes it urgent: Previous generations of automation replaced defined tasks. The human judgment layer remained largely intact. Generative AI is different. It operates in domains — summarization, analysis, recommendation, prediction — that are adjacent to judgment itself. As a result, decision authority can migrate from human to machine without any deliberate transfer. What appears to be efficiency may, in fact, be the unintentional abdication of judgment.
From Concept to Practice
Decision Design must translate into operational reality. Below are two implementation frameworks directly applicable to the intersection of AI deployment and workforce decisions.
1. Decision Boundary Mapping
Purpose: To make visible, across the organization, which judgments are made by humans, which are delegated to AI, and which exist in an ambiguous middle layer.
Structure: For each major decision domain (hiring, resource allocation, risk assessment, performance evaluation, workforce planning), map decisions across three layers:
- Layer 1 — Automated: AI processes and completes without human involvement. Examples: standard report generation, data aggregation, scheduling.
- Layer 2 — AI-assisted, human-approved: AI generates options, recommendations, or analyses; a human makes the final call. Examples: candidate screening, budget proposals, risk scoring.
- Layer 3 — Human-exclusive: No AI involvement in the judgment process. Examples: final workforce planning decisions, ethical determinations, high-stakes negotiations.
Operational discipline: Review the map quarterly. The critical audit question is whether Layer 2 decisions have silently migrated to Layer 1 — that is, whether human approval has become pro forma rather than substantive.
Any decision to reduce headcount should be explicitly situated in Layer 3.
2. AI-Involved Decision Log
Purpose: To create a verifiable record of how workforce decisions were made, what role AI played in the process, and who held final accountability.
Required fields for each logged decision:
- Decision date and accountable decision-maker (name and role)
- Basis for the decision (financial data, AI-generated analysis, market projections, or other inputs)
- Scope of AI involvement (which elements of the decision process were AI-informed or AI-generated)
- Alternatives considered (whether non-reduction options — retraining, redeployment, hiring freezes — were evaluated, and if not, why)
- Impact scope (number of affected positions, departments, seniority levels)
- Follow-through plan (reskilling programs, transition support, timeline)
Function: The log prevents post-hoc attribution — the tendency to ascribe decisions to "AI" or "market conditions" without specifying who judged, on what basis, and with what authority. It restores the human subject to the sentence.
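The required fields above translate naturally into a record schema with validation at the point of entry. The following is a minimal sketch under stated assumptions: the class name, field names, and the two validation rules (an accountable human must be named; alternatives must be documented) are hypothetical illustrations of the logging discipline, not a prescribed implementation.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class DecisionLogEntry:
    decision_date: date
    accountable_person: str           # name and role of the final decision-maker
    basis: list[str]                  # financial data, AI-generated analysis, projections
    ai_involvement: list[str]         # which elements were AI-informed or AI-generated
    alternatives_considered: list[str]  # retraining, redeployment, hiring freezes, or
                                        # a stated reason none were evaluated
    impact_scope: str                 # affected positions, departments, seniority levels
    follow_through: str               # reskilling, transition support, timeline

def record(entry: DecisionLogEntry, log: list[dict]) -> None:
    """Append an entry, refusing records with no accountable human subject."""
    if not entry.accountable_person.strip():
        raise ValueError("decision must name an accountable decision-maker")
    if not entry.alternatives_considered:
        raise ValueError("log the alternatives evaluated, or why none were")
    log.append(asdict(entry))
```

The design choice worth noting is that validation runs when the record is written, not after the fact: a decision that cannot name its decision-maker cannot enter the log at all, which is precisely the "restored subject" the log exists to enforce.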
Closing
The opening narrative — that AI is about to eliminate millions of white-collar jobs — is not entirely wrong. But it is structurally incomplete. It treats technology as the subject of a sentence in which technology is, at most, an instrument.
AI does not eliminate jobs. Organizations eliminate jobs. The relevant question is not whether AI is capable of displacing human work. In many domains, it increasingly is. The relevant question is whether the organizations making displacement decisions have designed the structures that govern those decisions — who holds authority, what counts as sufficient basis, and where accountability resides.
Where Decision Boundaries have not been drawn, AI adoption does not transfer judgment. It dissolves it. Productivity becomes a rationale without a rationale-maker. Efficiency becomes a destination without a navigator. And "AI did it" becomes the most convenient sentence in the executive vocabulary — precisely because it has no subject.
Organizations that design their decision architecture will navigate this transition with clarity. Those that do not will find that they have not been disrupted by AI. They will have disrupted themselves.
Decision Design / Decision Boundary™ — Insynergy Inc.
Sources
- Andrew Yang, "The End of the Office," Yang's Substack, February 16, 2026. https://blog.andrewyang.com/p/the-end-of-the-office
- MIT NANDA Initiative (MIT Media Lab), "The GenAI Divide: State of AI in Business 2025," July 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
- Stephanie Aliaga, "Is AI really driving an increase in layoffs?" J.P. Morgan Asset Management, 2025. https://am.jpmorgan.com/us/en/asset-management/adv/insights/market-insights/market-updates/on-the-minds-of-investors/is-ai-really-driving-an-increase-in-layoffs/
- CNN Business, "How Big Tech's pandemic bubble burst," January 2023. https://www.cnn.com/2023/01/22/tech/big-tech-pandemic-hiring-layoffs
- Fabian Stephany (Oxford Internet Institute, University of Oxford), quoted in Newsweek, "US Hits Highest Layoffs Since COVID," August 2025. https://www.newsweek.com/us-hits-highest-layoffs-since-covid-2111794
- Gizmodo, "AI Gets the Blame for 55,000 Layoffs in 2025," December 23, 2025. https://gizmodo.com/ai-gets-the-blame-for-55000-layoffs-in-2025-2000703011
- Fortune, "JPMorgan CEO Jamie Dimon says he welcomes government ban on mass-firing people for AI," January 22, 2026. https://fortune.com/2026/01/22/jpmorgan-chase-ceo-jamie-dimon-ai-layoff-income-assist-workers-elon-musk-sam-altman-universal-basic-income/