The question AI acceleration forces us to ask
In February 2026, OpenAI CEO Sam Altman told an audience at the India AI Impact Summit in New Delhi that superintelligent AI could, at some point on its development curve, outperform any corporate executive — "certainly me," as he put it. He predicted that by the end of 2028, more of the world's intellectual capacity could reside inside data centers than outside of them. (Fortune, February 19, 2026; CIOL, February 20, 2026)
These are striking claims. But the most important question they raise is not whether he is right. It is what we mean when we say AI will "outperform" a CEO.
If we mean that AI will process information faster, generate more accurate forecasts, and optimize complex variables more efficiently than any human executive, this is already partially true and will become more so. No serious observer disputes this trajectory.
But if "outperform" means that AI will govern an organization, bear accountability for strategic failure, or decide what a company should refuse to do even when the numbers say otherwise, then we are talking about something fundamentally different. We are no longer comparing computational ability. We are comparing two different categories of action: computation and judgment.
The conflation of these two categories is the central confusion in most discourse about AI and executive leadership. This article is an attempt to separate them and to propose what must be designed once that separation is made clear.
The computational acceleration argument
The factual basis of Altman's claim is straightforward. AI systems are scaling rapidly. Models that struggled with high school mathematics little more than a year ago now operate at research-science level. The computational power concentrated in global data centers is growing at a pace that makes human cognitive throughput look static by comparison.
Altman himself put it plainly at the summit: "It is very hard to outwork a GPU in many ways." (CIOL, February 20, 2026)
This is accurate. In the domain of information processing, pattern recognition, simulation, and predictive modeling, humans cannot compete with silicon at scale. The gap will widen. The cost of computation will continue to fall. The quality of AI output in analytical domains will continue to rise.
None of this is controversial. What is controversial, or at least under-examined, is the next inferential step: that because AI can outperform humans in computation, it can therefore outperform humans in executive judgment.
This step contains a hidden premise. It assumes that executive judgment is, at its core, a computational task.
Why computation is not judgment
Consider what happens when a CEO decides to enter a new market. AI can analyze the data: market size, competitive landscape, financial projections, risk profiles. It can do this faster and with greater precision than any human team. In many cases, it can identify opportunities that humans would miss.
But the decision to proceed is not the output of that analysis. It incorporates the analysis, but it also incorporates elements that resist quantification: the internal political dynamics of the organization, the tolerance of the board for short-term loss, the cultural readiness of the workforce for a strategic pivot, the CEO's assessment of whether this is the moment to stake personal credibility on a direction that cannot be validated in advance.
These are not data gaps that better models will eventually close. They are features of a different kind of activity. Analysis asks: what does the data suggest? Judgment asks: given irreducible uncertainty, what do we commit to, and who bears the consequences?
AI can generate options, rank them, assign probabilities, and recommend. What it cannot do is declare: "This is my decision, and I accept responsibility for its outcome." That declaration is not a computational event. It is an institutional act that creates accountability, assigns ownership, and binds a human agent to a future they cannot predict.
The distinction matters because organizations do not run on optimal outputs alone. They run on committed decisions made by identifiable agents who can be held to account. Remove the agent, and you do not get better management. You get a process with no one at the center.
The hidden assumption in "you can't outwork a GPU"
Altman's statement about GPUs carries an implicit frame: that intellectual work is, fundamentally, information processing. Within that frame, the conclusion follows naturally. GPUs process information faster, therefore GPUs outperform humans in intellectual work.
But this frame is not self-evident. It is a choice of definition, and it excludes a significant category of executive action: the capacity to refuse.
An executive who says no to a numerically optimal proposal, because they judge the organizational cost to be unacceptable, or because they believe the ethical implications outweigh the financial return, is not failing at computation. They are exercising a faculty that has no computational equivalent: the authority to reject an optimal output and accept the consequences of that rejection.
This is not a romantic claim about human superiority. Executives make poor decisions frequently, often because of bias, ego, or incomplete information. AI will increasingly correct for these failures, and that is valuable. But correction is not replacement. Correction improves the quality of judgment by providing better inputs. Replacement eliminates the judging subject entirely.
When AI outputs are executed without a human agent choosing to adopt them, what occurs is not superintelligent management. It is the hollowing out of judgment itself.
Optimization is not governance
The claim that AI can outperform a CEO implicitly treats management as an optimization problem: maximize shareholder value, minimize risk, allocate capital efficiently. Against these metrics, AI may indeed outperform.
But governance is not optimization. Governance involves deciding which metrics matter, how to weigh competing obligations, when to sacrifice short-term performance for long-term positioning, and what the organization will not do regardless of what the data recommends.
A pharmaceutical company that continues manufacturing a low-margin drug because it serves a critical public health need is not optimizing. It is governing. An executive who accepts a near-term stock price decline to fund research with a ten-year horizon is making a governance decision that no optimization function would produce, because the objective function itself is in question.
AI is exceptionally good at solving problems once the objective is defined. It is not equipped to define the objective in the first place, nor to take ownership of the consequences when the chosen objective produces harm.
This is not a limitation that scaling will resolve. It is a structural feature of the difference between processing and governing.
Responsibility: the missing dimension
Current legal and corporate governance frameworks are designed around human decision-making. Fiduciary duty, the duty of care, shareholder derivative actions, board liability: these mechanisms presuppose that identifiable human agents make decisions and bear responsibility for their outcomes.
When an AI system generates a recommendation that a board adopts, and that recommendation leads to significant loss, the existing accountability structure becomes strained. Did the board exercise judgment, or did it merely ratify an output? If the latter, who is accountable? The AI vendor? The CTO who selected the system? The board member who voted to approve?
These are not hypothetical questions. They are emerging realities in organizations that are integrating AI into strategic decision-making without redesigning their accountability structures.
Altman himself has called for a global AI governance body modeled on the International Atomic Energy Agency (TIME, February 21, 2026; Euronews, February 19, 2026). This is a meaningful proposal for managing the macro risks of AI development. But international oversight addresses the safety of the technology. It does not address the decision structure within individual organizations.
The question of who decides and who bears responsibility is not a regulatory question. It is an organizational design question. And in most organizations, it remains undesigned.
The risk of hollowed decision-making
There is a pattern already visible in organizations adopting AI at the executive level. AI proposes; a human approves. On paper, the human retains decision authority. In practice, as AI outputs become more consistently accurate, the approval becomes perfunctory. The human signs off. The record shows a human decision. But the substantive judgment has migrated to the machine.
This creates what can be called judgment hollowing: a state in which the formal structure of human decision-making remains intact, but the actual exercise of judgment has been vacated. Responsibility nominally resides with a human agent, but that agent is no longer engaging in the deliberative act that responsibility presupposes.
Judgment hollowing is dangerous not because AI is wrong, but because no one is deciding. The organization operates on AI outputs without a designated agent who has evaluated, committed to, and accepted ownership of the decision. When something goes wrong, there is no one who can explain why that course of action was chosen, because no one chose it. It was generated.
This problem does not require superintelligence to manifest. It is happening now, in boardrooms and executive committees where AI-generated analyses are adopted without scrutiny, without modification, and without a recorded rationale for acceptance.
The need for intentional design
In traditional organizations, decision boundaries existed implicitly. Authority was distributed through hierarchy: department heads decided within their scope, executives within theirs, the board within its charter. The system was imperfect, but the boundaries were legible.
AI disrupts this implicit structure. It introduces a new actor into the decision process, one that operates outside the authority hierarchy but increasingly shapes its outputs. The question is no longer just "who decides?" but "at what point does AI involvement shift from support to de facto decision-making, and has anyone explicitly sanctioned that shift?"
In most organizations, the answer is no. The boundary between AI assistance and AI-driven decision-making has moved without anyone designing that movement. The shift is gradual, invisible, and goes unremarked until something breaks.
What is needed is not less AI, or slower adoption, or principled objection to automation. What is needed is deliberate design of the boundary between AI contribution and human judgment. This is an architectural problem, not a philosophical one.
Decision Design: a structural response
Decision Design is the practice of intentionally designing the boundary between AI-generated output and human judgment within an organization.
At its center is a concept called the Decision Boundary: the explicit demarcation of who decides, how much AI involvement is sanctioned at each level of decision-making, and who bears accountability for the outcome.
Decision Design addresses three questions and embeds the answers into organizational structure before decisions are made:
Who makes this decision? Is it AI or a human agent? If human, which role, and with what authority?
What is AI's sanctioned role in this decision? Is AI providing data, presenting options, making a recommendation, or executing autonomously?
Who is accountable for the outcome? Is it the person who approved the AI's recommendation, the executive who authorized the AI system, or another designated agent?
These questions may seem elementary. In practice, most organizations cannot answer them for their most consequential decisions.
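As a minimal sketch, the three answers can be captured in a single structured record per decision type, established before any individual decision arrives. The field names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DecisionBoundary:
    """One answer to the three Decision Design questions, for one decision type.

    All field names are illustrative; no standard schema exists.
    """
    decision_type: str      # e.g. "market entry" (hypothetical example)
    decider_role: str       # which human role holds decision authority
    ai_role: str            # sanctioned AI involvement: "data", "options",
                            # "recommendation", or "autonomous execution"
    accountable_role: str   # who bears responsibility for the outcome
```

The point of the record is not the code. It is that the answers exist in writing before the decision arrives, and that changing them is a visible act.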
What Decision Design is not
Decision Design is not an AI adoption methodology. It does not advise organizations on how to use AI more effectively.
It is not an AI ethics framework. It does not prescribe principles of fairness, transparency, or bias mitigation.
It is not a claim of human superiority over AI. There are domains in which AI produces better outcomes than human judgment, and those domains will expand.
Decision Design asks a narrower and more precise question: who decides, and who bears the consequences? It designs the organizational answer to that question. Nothing more, and nothing less.
What problem it addresses
Decision Design addresses the problem of absent decision subjects: the condition in which organizational decisions are driven by AI outputs without any designated agent having deliberately chosen, evaluated, and accepted responsibility for the course of action.
This condition looks efficient on the surface. AI recommendations are often accurate. Outcomes may improve in the short term. But when failure occurs, no one can explain the reasoning behind the decision, because no reasoning occurred. An output was generated, and it was executed.
This is not a technology problem. It is a governance failure. Decision Design prevents it by requiring that decision boundaries be established in advance, that the level of AI involvement be explicitly classified, and that accountability be assigned before the decision is made, not after it fails.
Implementing Decision Boundaries
Concepts without mechanisms remain inert. The following are four governance mechanisms that embed Decision Boundaries into organizational practice.
AI Decision Acceptance Log
For significant decisions informed by AI, organizations should maintain a structured record that captures the AI-generated recommendation, the identity of the human who accepted or modified it, the rationale for acceptance or modification, the risk parameters considered, and the scheduled review date.
This log serves an audit function, but its primary purpose is cognitive. It forces the decision-maker to articulate why they are adopting an AI output as their own decision, rather than passively approving it. The log makes judgment visible and reviewable.
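As a sketch of what one log entry might contain, assuming an append-only record and using illustrative field names:

```python
from dataclasses import dataclass, field
from datetime import date, datetime


@dataclass(frozen=True)
class AcceptanceLogEntry:
    """One entry in an AI Decision Acceptance Log (illustrative schema)."""
    decision_id: str
    ai_recommendation: str            # the AI-generated recommendation, verbatim
    accepted_by: str                  # identity of the human decision-maker
    modified: bool                    # was the output altered before adoption?
    rationale: str                    # why the output was adopted as a decision
    risk_parameters: tuple[str, ...]  # risk parameters that were considered
    review_date: date                 # scheduled date to revisit the decision
    recorded_at: datetime = field(default_factory=datetime.now)


# Hypothetical usage: every field must be supplied to create the record,
# which is what forces the articulation the log exists for.
entry = AcceptanceLogEntry(
    decision_id="2026-044",
    ai_recommendation="Exit market X within two quarters",
    accepted_by="CFO",
    modified=True,
    rationale="Adopted with a slower exit to protect key accounts",
    risk_parameters=("customer churn", "severance cost"),
    review_date=date(2026, 12, 1),
)
```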
AI Involvement Classification
Organizations should classify their decisions by the level of AI involvement sanctioned for each type.
Level 1: AI provides data and analysis. The human frames the options and makes the decision. This applies to strategic pivots, market entry, and organizational restructuring.
Level 2: AI presents structured options. The human selects among them. This applies to investment prioritization, vendor selection, and product roadmap decisions.
Level 3: AI presents a recommendation with supporting rationale. The human approves or rejects. This applies to credit decisions, performance evaluations, and compliance determinations.
Level 4: AI executes autonomously. Humans conduct periodic audit. This applies to routine procurement, anomaly detection responses, and standardized operational processes.
The critical discipline is not the classification itself but the governance of boundary migration. As AI capability grows, decisions will naturally migrate from Level 2 to Level 3, or from Level 3 to Level 4. This migration must be deliberate, documented, and periodically reviewed. Unexamined boundary drift is the primary mechanism through which judgment hollowing occurs.
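A sketch of the four levels and the migration discipline follows. The requirement that every migration carry an approver, a rationale, and an effective date is the substance; the API itself is a hypothetical implementation:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import IntEnum


class AIInvolvement(IntEnum):
    """The four sanctioned levels of AI involvement."""
    DATA_AND_ANALYSIS = 1     # Level 1: AI informs; the human decides
    STRUCTURED_OPTIONS = 2    # Level 2: AI presents options; the human selects
    RECOMMENDATION = 3        # Level 3: AI recommends; the human approves/rejects
    AUTONOMOUS_EXECUTION = 4  # Level 4: AI executes; humans audit periodically


@dataclass
class ClassifiedDecision:
    decision_type: str                     # e.g. "vendor selection"
    level: AIInvolvement
    migrations: list[dict] = field(default_factory=list)

    def migrate(self, to: AIInvolvement, approved_by: str,
                rationale: str, effective: date) -> None:
        """A boundary moves only through an explicit, recorded act."""
        self.migrations.append({
            "from": self.level, "to": to, "approved_by": approved_by,
            "rationale": rationale, "effective": effective,
        })
        self.level = to


# Hypothetical migration from Level 3 to Level 4: deliberate and documented,
# never silent drift.
credit = ClassifiedDecision("credit decisions", AIInvolvement.RECOMMENDATION)
credit.migrate(AIInvolvement.AUTONOMOUS_EXECUTION,
               approved_by="Chief Risk Officer",
               rationale="Twelve months of audited accuracy above threshold",
               effective=date(2027, 1, 1))
```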
Responsibility Declaration Protocol
For material decisions based on AI output, the decision-maker formally declares three things: that they understand the content of the AI recommendation, that they adopt it as their own decision, and that they accept accountability for the outcome.
This is not a signature formality. It is a structural mechanism that prevents the diffusion of responsibility. It makes it impossible for a decision-maker to later claim that they merely followed the AI's suggestion. The declaration binds the human agent to the decision in the same way that a board resolution binds directors to a strategic commitment.
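A sketch of how the three declarations could be made structurally binding rather than a checkbox; the validation-at-construction pattern is an implementation assumption, not part of the protocol itself:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class ResponsibilityDeclaration:
    """The three declarations required before an AI-informed decision binds."""
    decision_id: str
    declarant: str                    # the human agent making the declaration
    understands_recommendation: bool  # "I understand the AI's recommendation"
    adopts_as_own: bool               # "I adopt it as my own decision"
    accepts_accountability: bool      # "I accept accountability for the outcome"
    declared_at: datetime

    def __post_init__(self) -> None:
        # An incomplete declaration is rejected outright rather than stored:
        # responsibility cannot be partially assumed.
        if not (self.understands_recommendation
                and self.adopts_as_own
                and self.accepts_accountability):
            raise ValueError(
                f"{self.declarant} has not made all three declarations for "
                f"decision {self.decision_id}; the decision does not bind."
            )
```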
Decision Boundary Map
Organizations should maintain a visual mapping of their major decision categories against sanctioned AI involvement levels. The vertical axis lists decision types (strategic, financial, operational, human capital). The horizontal axis indicates the sanctioned AI involvement level (data provision, option generation, recommendation, autonomous execution).
This map is not static. It should be reviewed quarterly or whenever AI systems are materially updated. The purpose is not to freeze boundaries but to ensure that when they move, someone has decided that they should move.
The danger is not that boundaries shift. The danger is that no one notices they have shifted.
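As a sketch, the map can be held as data and checked for drift against the last reviewed snapshot. Both the representation and the check are assumptions about one possible implementation:

```python
# The map: decision categories against sanctioned AI involvement levels
# (1 = data provision, 2 = option generation, 3 = recommendation,
# 4 = autonomous execution). Entries are illustrative.
reviewed_map = {
    ("strategic", "market entry"): 1,
    ("financial", "investment prioritization"): 2,
    ("human capital", "performance evaluations"): 3,
    ("operational", "routine procurement"): 4,
}


def boundary_drift(reviewed: dict, current: dict) -> list[str]:
    """Report every boundary that moved since the last sanctioned review."""
    findings = []
    for key in sorted(set(reviewed) | set(current)):
        before, after = reviewed.get(key), current.get(key)
        if before != after:
            findings.append(f"{key}: level {before} -> {after}")
    return findings


# Run at each quarterly review, or whenever an AI system is materially updated.
# A non-empty result means a boundary moved without anyone deciding it should.
```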
Closing
Whether superintelligence arrives by 2028 or not is a question no one can answer today. But a different question can be answered now: has your organization designed the boundary between AI output and human judgment?
In most cases, the answer is no. That boundary is moving, unmarked and unmanaged, in organizations of every size and sector. The risk it creates is not speculative. It is structural, and it compounds with every increase in AI capability.
What needs to be designed is not AI performance. It is the structure of judgment itself: who decides, who refuses, who bears the weight.
AI may accelerate intelligence. Governance remains human.
Sources
Statements attributed to Sam Altman in this article are drawn from his keynote address at the India AI Impact Summit 2026, delivered on February 19, 2026 at Bharat Mandapam, New Delhi.
Fortune (February 19, 2026). "Sam Altman says not even the CEO's job is safe from AI as it will soon perform the work better than 'certainly me.'" https://fortune.com/2026/02/19/sam-altman-openai-ceo-ai-white-collar-jobs-ceo-executives/

CIOL (February 20, 2026). "Altman says early superintelligence could arrive by 2028." https://www.ciol.com/news/sam-altman-says-early-superintelligence-could-arrive-by-2028-11135224

TIME (February 21, 2026). "World Leaders Near Declaration on AI, Indian Government Says." https://time.com/7379949/india-ai-impact-summit-us-china-middle-powers/

Euronews (February 19, 2026). "From Modi and Macron to the US: What are world and tech leaders saying at the India AI summit." https://www.euronews.com/next/2026/02/19/from-macron-to-altman-what-are-world-and-tech-leaders-saying-at-the-india-ai-summit