A definition that never arrives
In a recent interview, Sam Altman said something remarkable. Midway through a wide-ranging conversation about OpenAI's ambitions, he offered what sounded like a declaration of triumph: "We basically have built AGI, or very close to it." A few days later, he walked it back. "I meant that as a spiritual statement, not a literal one."
This sequence — assertion, retraction, reframing — would be unusual in most fields. If a pharmaceutical CEO announced that a cure for cancer had "basically" been achieved, then clarified it was a "spiritual statement," the response would be swift and unforgiving. But in AI, this pattern has become so familiar that it barely registers.
AGI — artificial general intelligence — is the gravitational center of the AI industry's narrative. Hundreds of billions of dollars in investment, infrastructure commitments measured in trillions, corporate strategy pivots across every sector: all of these orbit a concept that no one has agreed on how to define. Not Altman, who concedes that achieving AGI will require "a lot of medium-sized breakthroughs." Not Microsoft CEO Satya Nadella, who responded to Altman's claim with a chuckle and a polite correction: "I don't think we are anywhere close to it." Not the researchers, the regulators, or the executives making decisions today on the assumption that AGI is either imminent or inevitable.
The strangeness of this situation is easy to overlook, precisely because AGI discourse is so voluminous. The sheer quantity of discussion creates an illusion of substance. But volume is not the same as clarity. And in the gap between the two, something consequential is happening — not to technology, but to judgment.
The machinery of deferral
Consider the practical effects of AGI as an organizing concept for corporate strategy.
When a CEO tells a board that the company must invest aggressively in AI "because AGI is coming," the statement functions as both a justification and a shield. It justifies expenditure because the stakes are framed as existential. And it shields the decision-maker from specificity, because no one can define exactly what is coming or when. The investment is defended not by a measurable thesis but by an unfalsifiable horizon.
Altman himself is candid about operating in this mode. He describes his $1.4 trillion infrastructure commitment as "obvious," then acknowledges that "the rest of the world is like, 'financial reality.' And I don't think I'm the strongest at keeping those dueling perspectives in mind." This is not evasion. It is a genuine articulation of how AGI functions as a planning concept: it makes the scale of commitment feel self-evident while rendering the criteria for success permanently elastic.
This elasticity propagates outward. When OpenAI announces initiatives spanning custom chips, social media, healthcare tools, humanoid robots, and a hardware venture with Jony Ive, the connective tissue is not a product strategy. It is the premise that AGI will eventually make all of these bets coherent. The coherence is located in the future, which means it does not need to be demonstrated in the present.
For organizations watching from the outside — enterprises evaluating AI adoption, boards reviewing transformation roadmaps, public institutions designing policy — this creates a peculiar decision environment. The most consequential technology narrative of the era is anchored to a term that its own proponents cannot stabilize. And yet decisions are being made, budgets allocated, organizational structures redesigned, all in reference to this unstable anchor.
Where judgment goes to wait
The structural issue is not that AGI is poorly defined. Many important concepts begin without precise definitions and gain clarity through use. The issue is that AGI's indefiniteness has become functionally useful — not for advancing understanding, but for suspending judgment.
When an organization says "we need to be ready for AGI," it is rarely making a technical claim. It is making a governance claim — or, more precisely, avoiding one. "Ready for AGI" defers the question of which specific capabilities the organization is preparing for, which decisions will be automated, which will remain human, and where accountability will reside when autonomous systems act in ways that no one explicitly authorized.
These are questions of Decision Design — the deliberate structuring of where judgment lives within an organization, who exercises it, and under what conditions it transfers. They are questions that become more urgent, not less, as AI capabilities expand. And they are precisely the questions that AGI, as an undefined future state, allows organizations to postpone.
The postponement is not always conscious. It operates through a subtle substitution: instead of designing the Decision Boundaries that will govern how AI systems interact with human judgment today, organizations orient toward a future in which those boundaries will presumably be redrawn anyway. Why design the structure of judgment now, the reasoning goes, when AGI might render that structure obsolete?
This logic has a comforting symmetry. But it contains a dangerous inversion. It treats the absence of design as a reasonable interim state, when in fact it is the condition under which the most consequential defaults are set. Every month that an organization operates AI systems without explicitly designed decision boundaries is a month in which those boundaries are being established implicitly — by the capabilities of the tools, by the habits of the operators, by the path of least resistance. The boundaries exist. They are simply not governed.
The convenience of the undefined
Altman's own trajectory illustrates how productive ambiguity can be. His mentor, Paul Graham, observes that "he's good at convincing people of things. He's good at getting people to do what he wants." His colleague Jony Ive notes that he "is comfortable with the unknown." These are not criticisms. They are descriptions of a leadership style that thrives in environments where definitions remain fluid and commitments can be reinterpreted as conditions change.
This is not a character flaw. It may, in fact, be the optimal strategy for navigating a technological frontier where the destination is genuinely uncertain. But what works as a leadership posture for a technology company becomes something more problematic when it is adopted, wholesale, as an organizational planning framework by enterprises, governments, and institutions whose obligations are not to innovation but to accountability.
When Altman describes his succession plan — handing OpenAI to an AI model — he is extending the same logic to its natural conclusion. If AGI is imminent, then even the question of who leads the organization can be deferred to the technology itself. The judgment about leadership becomes, in this framing, a judgment that leadership will eventually be unnecessary. It is a decision not to design the future of decision-making, offered as a vision of the future.
The audience for this vision is not primarily technologists. It is the executives, board members, and institutional leaders who must decide, today, how their organizations will relate to AI systems that are growing more capable by the quarter. For these leaders, the question is not whether AGI will arrive. It is whether the word "AGI" is helping them make better decisions — or giving them permission to make none at all.
What remains undesigned
There is a pattern in how organizations respond to technological uncertainty. They invest in capability and defer governance. They acquire tools and postpone the question of how those tools interact with existing structures of judgment and responsibility. This pattern is not new. But AGI has given it an unusually durable justification.
The result is a widening gap between execution capacity and judgment architecture. AI systems can now draft analyses, evaluate risks, generate recommendations, and initiate actions across domains that once required deliberate human decision-making at every step. The systems are not waiting for AGI. They are operating now, within organizations that have not yet designed the boundaries between what these systems should decide and what humans must.
The irony is that AGI — the concept invoked to justify the urgency of AI investment — is the same concept that permits the indefinite deferral of the design work that investment demands. The word creates momentum. It also creates a vacuum where judgment architecture should be.
Altman says he has "mostly accomplished" what he set out to do and is now "playing for bonus points." Whether or not that is true for him personally, the organizations shaped by his vision are not playing for bonus points. They are making structural commitments — to technology, to vendors, to operational models — that will define how judgment is exercised, distributed, and accounted for across their enterprises for years to come.
Those commitments deserve a design discipline that does not depend on a word no one can define. The most urgent design problem in the age of AI is not building artificial general intelligence. It is deciding — specifically, deliberately, and accountably — what decisions will be made by whom, under what conditions, with what oversight, starting now. Not when AGI arrives. Not when the definition stabilizes. Now.
That AGI remains undefined is not a temporary gap in the discourse. It is, at this point, a structural feature of it — one that quietly shapes every judgment deferred in its name. Designing around that feature, rather than waiting for it to resolve, may be the most consequential act of organizational leadership available today. And it is, so far, almost entirely unbuilt.