
The Handshake That Didn't Happen: What the India AI Summit Revealed

When OpenAI and Anthropic CEOs declined to hold hands at the India AI Summit, the moment was widely framed as rivalry. But the signal runs deeper. As AI firms increasingly operate at governance scale, the issue is not competition itself, but the absence of competitive architecture. This article examines the structural divergence between scaling-first and boundary-first governance models—and argues that what AI lacks is a designed boundary of judgment.


Insynergy Insights — February 20, 2026


The Moment

On the morning of February 19, 2026, at Bharat Mandapam in New Delhi, a group photo at the India AI Impact Summit produced an image that traveled faster than any policy announcement made that day.

Prime Minister Narendra Modi lifted the hands of OpenAI CEO Sam Altman and Google CEO Sundar Pichai before an applauding crowd. Other leaders on stage followed suit, clasping hands in a display of collective purpose. But Altman and Anthropic CEO Dario Amodei, standing side by side, did not join hands. Instead, both raised their fists. They did not make eye contact.

The moment was immediately consumed as gossip: a personal feud between former colleagues, a corporate cold war made visible. Social media amplified the drama. Amodei's name surged in search trends across India within hours.

But reading this as a personality conflict misses the point entirely. What happened on that stage was not an expression of personal animosity. It was a visible symptom of something more consequential: the absence of any designed architecture governing AI competition at scale.

Why India Matters

The setting was not incidental.

India is the fourth host in the global AI summit series, following Bletchley Park (2023), Seoul (2024), and Paris (2025). It is the first Global South nation to hold this position — a deliberate signal of the country's ambition to shape, not merely adopt, AI governance norms. The summit drew delegations from over 100 countries, more than 20 heads of state, and an estimated 250,000 visitors.

The commercial context is equally significant. Sam Altman disclosed that India now has 100 million weekly active ChatGPT users, making it OpenAI's second-largest market after the United States. Anthropic confirmed India as its second-largest market for Claude, with revenue doubling since October 2025. OpenAI announced new offices in Mumbai and Bengaluru. Anthropic opened its second Asian office in Bengaluru and signed an enterprise partnership with Infosys. Google committed to major infrastructure expansion.

India's position is shaped by three structural advantages: a massive developer ecosystem, a population-scale consumer base for AI products, and a regulatory environment that remains relatively open — inviting for firms seeking to establish precedent before formal rules crystallize.

For Modi, the summit served a dual function: attracting investment and projecting India as a governance interlocutor capable of convening global AI leadership. The raised-hands photo was meant to embody that convening power.

It did not fully succeed. And that failure is instructive.

The Real Divide

The tension between OpenAI and Anthropic has been escalating for months. In early February, Anthropic ran a four-part Super Bowl advertising campaign titled "A Time and a Place," satirizing OpenAI's introduction of advertising into ChatGPT. Each spot depicted an AI assistant interrupting helpful responses with unsolicited product pitches — a pointed critique of what Anthropic framed as a corruption of the user-AI relationship.

Altman publicly called the campaign "clearly dishonest." Anthropic's leadership countered with measured remarks about focusing on product quality rather than generating headlines. The exchange was sharp, public, and widely covered.

But beneath the advertising dispute lies a more substantive divergence — one that concerns how each firm defines the boundaries of AI's role in society.

OpenAI's posture centers on expanding AI access as broadly and rapidly as possible. The introduction of advertising for free-tier users is a mechanism to sustain that expansion. At the summit, Altman introduced the concept of "societal resilience" as a complement to AI safety — suggesting that society itself should build the capacity to absorb and adapt to AI's growing capabilities. The implication: AI's reach will continue to expand, and the burden of adjustment falls substantially on social systems.

Anthropic's posture centers on defining safety boundaries before expanding capability. Amodei's summit remarks focused on what he described as "serious risks" — the autonomous behavior of AI systems, their potential misuse by individuals and governments, and economic displacement. Anthropic's refusal to introduce advertising is consistent with this stance: it reflects a choice not to introduce additional decision-making agents (advertisers) into the space where users interact with AI.

Neither posture is inherently superior. Both represent legitimate responses to genuine uncertainty. The critical observation is this: these two governance postures are developing in parallel, without shared reference points, without mutual constraints, and without any architecture connecting them. Each firm is designing its own boundary conditions for AI's role in the world. No one is designing the relationship between those boundaries.

AI Firms as Quasi-Sovereign Actors

To understand why this matters, consider what these firms have become.

OpenAI and Anthropic each command billions of dollars in capital. They deploy products that hundreds of millions of people use for cognitive tasks — writing, reasoning, analysis, decision support. They set their own safety standards. They define acceptable use. They negotiate directly with heads of state. At the India summit, their CEOs shared a stage with a prime minister and a president, not as invited guests of a regulatory process, but as principals in a strategic conversation.

This is not to claim that AI firms have replaced states. They have not. But they have acquired characteristics that overlap with governance functions: they build infrastructure that mediates cognition at scale, set the rules of engagement for that infrastructure, and define the scope of their own accountability. The term "quasi-sovereign" is not hyperbole — it is a structural description of organizations whose decisions carry governance-level consequences without governance-level accountability structures.

When quasi-sovereign actors compete, the nature of that competition has consequences beyond market share. It shapes what safety means, how risk is distributed, and where responsibility resides. These are governance questions. And at present, no governance architecture exists to address them.

The Missing Layer: Competitive Architecture

Competition itself is not the problem. Markets function through competition. Innovation depends on it. The rivalry between OpenAI and Anthropic has produced rapid advances in model capability, safety research, and product development. Competition is healthy.

What is absent is competitive architecture — the designed set of shared constraints, boundaries, and rules that allow competition to function without destabilizing the systems it operates within.

Consider analogies from other domains. Financial markets have clearing houses, capital requirements, and coordinated stress testing — not because competition is harmful, but because unstructured competition in systemically important sectors creates unacceptable risk. Nuclear powers developed arms control frameworks not to eliminate rivalry but to prevent rivalry from producing catastrophic outcomes. International trade operates within the WTO and bilateral agreements that define what can be competed on and what cannot.

AI has none of this. What exists today is a collection of self-defined safety standards (different at each firm), nascent regulatory proposals (different in each jurisdiction), and a growing set of international summits that produce declarations but not binding architecture.

The result is what the New Delhi stage made visible: two firms competing intensely, standing side by side, with no designed framework governing the relationship between their competing approaches to safety, access, and societal impact. The absence of competitive architecture means that questions about who decides, what falls within whose responsibility, and where one actor's authority ends and another's begins are resolved ad hoc — or not resolved at all.

Designing the Boundaries of Judgment

There is a way to think about this gap more precisely. The underlying issue is not a shortage of regulation or a lack of good intentions. It is the absence of deliberate design around judgment itself — around who holds the authority to make consequential decisions, within what boundaries, and with what accountability.

This is the domain of what we call Decision Design: a framework concerned with designing the boundaries of judgment.

Decision Design does not prescribe what decisions should be made. It addresses the prior question: who decides, within what scope, and where does one party's judgment authority end and another's begin? It is not AI alignment tooling. It is not a compliance checklist. It is a structural approach to the problem of unassigned or ambiguously assigned judgment authority — the condition that produces governance gaps, accountability failures, and the kind of visible friction that appeared on stage in New Delhi.

The concept at its core is the Decision Boundary: the explicitly designed line that defines who judges what, where delegation stops, and where responsibility is retained. When these boundaries are left implicit — when they emerge from competitive dynamics rather than intentional design — the result is structural ambiguity. And structural ambiguity, in systems that affect hundreds of millions of people, is not a tolerable condition.

From Concept to Architecture

Frameworks are useful only if they can be implemented. Here is what Decision Boundary design looks like when applied to the current AI competitive landscape.

Layered responsibility assignment. AI governance involves at least three decision-making layers: states, AI firms, and developers who build on AI platforms via APIs. Each layer currently defines its own judgment scope independently. A Decision Boundary approach would make each layer's scope explicit and — critically — would define in advance who holds judgment authority in the gray zones between layers. The state defines minimum standards with legal force. The AI firm defines model-level safety constraints and takes technical responsibility for outputs. The developer takes responsibility for application-level implementation and end-user transparency. Where these layers overlap or conflict, the authority to resolve ambiguity must be pre-assigned, not left to emerge under pressure.
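The layering described above can be sketched as a registry of pre-assigned boundaries. This is an illustrative sketch only — the names (`Layer`, `DecisionBoundary`, `arbiter_for`) are hypothetical, not an existing standard or API:

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    STATE = "state"          # minimum standards with legal force
    AI_FIRM = "ai_firm"      # model-level safety constraints
    DEVELOPER = "developer"  # application-level implementation

@dataclass
class DecisionBoundary:
    """One explicitly assigned scope of judgment."""
    domain: str               # e.g. "end_user_transparency"
    owner: Layer              # who holds primary judgment authority
    gray_zone_arbiter: Layer  # pre-assigned resolver for overlaps

# Illustrative registry: gray-zone authority is assigned in advance.
REGISTRY = [
    DecisionBoundary("minimum_safety_standards", Layer.STATE, Layer.STATE),
    DecisionBoundary("model_output_constraints", Layer.AI_FIRM, Layer.STATE),
    DecisionBoundary("end_user_transparency", Layer.DEVELOPER, Layer.AI_FIRM),
]

def arbiter_for(domain: str) -> Layer:
    """Look up who resolves ambiguity in a domain.

    An unregistered domain is exactly the failure mode the article
    describes: judgment authority left to emerge under pressure.
    """
    for b in REGISTRY:
        if b.domain == domain:
            return b.gray_zone_arbiter
    raise KeyError(f"unassigned judgment authority: {domain}")
```

The design point the sketch captures: lookup failure is loud. A domain with no registered arbiter raises immediately rather than defaulting silently to whichever actor moves first.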

API-level governance metadata. Judgment boundaries should not remain abstract policy statements. They can be embedded in technical infrastructure. Specifically: API responses could include standardized metadata indicating which safety policies were applied, which governance framework version was active, and what the boundary conditions of the model's judgment scope are for that interaction. This is not speculative — it is technically feasible and would make governance auditable at the interface level.
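A minimal sketch of what such an envelope could look like. The field names and values here are assumptions for illustration; no AI provider currently ships this metadata:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GovernanceMetadata:
    policy_ids: list[str]    # safety policies applied to this response
    framework_version: str   # governance framework version active at call time
    judgment_scope: str      # boundary of the model's judgment for this interaction
    responsible_party: str   # layer accountable for the output

def wrap_response(payload: str, meta: GovernanceMetadata) -> str:
    """Attach auditable governance metadata to an API response envelope."""
    return json.dumps({"output": payload, "governance": asdict(meta)})

envelope = wrap_response(
    "Model answer here.",
    GovernanceMetadata(
        policy_ids=["safety-policy-7"],
        framework_version="2026-02",
        judgment_scope="advisory_only",
        responsible_party="ai_firm",
    ),
)
```

Because the metadata travels with every response, an auditor (or a downstream developer) can verify at the interface which rules were in force, without access to the provider's internal systems.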

Escalation protocols for conflicting safety standards. In a multi-model environment where outputs from different AI systems are combined in a single workflow, whose safety standard prevails? A Decision Boundary framework would require that model interconnection include machine-readable governance protocols, pre-agreed escalation rules for policy conflicts, and clear responsibility attribution for combined outputs. Without this, multi-model architectures create accountability voids that no single firm can resolve.
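One possible pre-agreed rule — strictest policy prevails, with responsibility attributed to the firm that supplied it — can be sketched in a few lines. The strictness levels and the rule itself are assumptions chosen for illustration, not an agreed industry protocol:

```python
# Hypothetical strictness ranking: higher value = more restrictive policy.
STRICTNESS = {"permissive": 0, "standard": 1, "strict": 2}

def resolve_conflict(policies: dict[str, str]) -> tuple[str, str]:
    """Pre-agreed escalation rule for a multi-model workflow.

    The strictest safety policy prevails, and the firm supplying it
    is attributed responsibility for the combined output — decided
    before the conflict occurs, not negotiated during it.
    """
    firm, level = max(policies.items(), key=lambda kv: STRICTNESS[kv[1]])
    return level, firm

# Two models with conflicting standards feed one workflow:
prevailing, responsible = resolve_conflict(
    {"model_a": "standard", "model_b": "strict"}
)
```

The specific rule matters less than the fact that it is machine-readable and agreed in advance; "most permissive wins" or "escalate to a named human" would be equally valid designs, as long as they are designed.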

Competitive and cooperative boundary separation. Perhaps the most consequential implementation: the explicit design of which domains are competitive and which are cooperative. Model performance, user experience, pricing, and business model choices belong to competition. Safety evaluation standards, incident disclosure protocols, and fundamental rights protections belong to cooperation. Between these sits a boundary zone — advertising policy, data usage scope, government contracting terms — that requires deliberate design rather than ad hoc market resolution. The Super Bowl ad dispute and the New Delhi stage moment both occurred in this undesigned boundary zone. They were consumed as drama precisely because no architecture existed to frame them as structural questions.
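The three-zone separation above can be expressed as an explicit mapping. The domain names follow the paragraph; the key design choice in this sketch — an assumption, not a standard — is that any unmapped domain defaults to the boundary zone, i.e. is treated as undesigned until someone assigns it:

```python
from enum import Enum

class Zone(Enum):
    COMPETITIVE = "competitive"  # market resolution is appropriate
    COOPERATIVE = "cooperative"  # shared standards required
    BOUNDARY = "boundary"        # requires deliberate design

# Illustrative mapping, following the article's examples.
DOMAIN_MAP = {
    "model_performance": Zone.COMPETITIVE,
    "user_experience": Zone.COMPETITIVE,
    "pricing": Zone.COMPETITIVE,
    "safety_evaluation_standards": Zone.COOPERATIVE,
    "incident_disclosure": Zone.COOPERATIVE,
    "fundamental_rights_protections": Zone.COOPERATIVE,
    "advertising_policy": Zone.BOUNDARY,
    "data_usage_scope": Zone.BOUNDARY,
    "government_contracting_terms": Zone.BOUNDARY,
}

def classify(domain: str) -> Zone:
    """Classify a domain; unknown domains default to the boundary zone
    rather than silently falling into competition."""
    return DOMAIN_MAP.get(domain, Zone.BOUNDARY)
```

Under this default, the Super Bowl advertising dispute (`advertising_policy`) lands in the boundary zone by construction — flagged as a structural question rather than left to ad hoc market resolution.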

Return to the Stage

Two men stood side by side in New Delhi. One had built his company on the premise that AI should be made as widely accessible as possible, with society adapting to absorb its impact. The other had built his on the premise that safety boundaries must be established before capability expands further. Both had raised billions. Both served hundreds of millions of users. Both stood alongside a head of state who had sought to present them as part of a unified vision.

They raised their fists instead of joining hands. The image went viral. Commentary focused on the personal, the dramatic, the adversarial.

But the structural reading is different, and more consequential. What that moment revealed was not animosity between two individuals. It was the absence of any designed framework within which their competition could coexist with shared accountability. The question of whether to clasp hands or raise fists was left to personal improvisation — because no architecture existed to define what cooperation and competition should look like between actors of this scale and consequence.

That absence will not remain confined to photo opportunities. As AI systems become more deeply integrated into economic, civic, and cognitive infrastructure, the lack of competitive architecture will produce increasingly consequential failures — not of technology, but of judgment allocation.

The fists raised in New Delhi were not a scandal. They were a signal. The architecture of AI competition remains undesigned. The question is whether the actors involved — firms, states, and the emerging ecosystem of developers and users — will treat that signal as a prompt for structural design, or allow it to fade into the next news cycle.

Designing competition is not an act of restraint. It is a precondition for competition that does not erode the governance systems it depends on.

The boundary must be drawn. And it must be drawn by design.


Sources

  1. CNBC, "OpenAI and Anthropic's rivalry on display as CEOs avoid holding hands at AI summit," February 19, 2026.

  2. Fortune, "Sam Altman and Dario Amodei refused to hold hands at an AI summit," February 19, 2026.

  3. The Quint, "OpenAI and Anthropic CEOs' Awkward Moment at India AI Summit," February 19, 2026.

  4. Marketing Brew, "About that AI Bowl: Reception of AI ads 'sharply negative' as brand beefs, misinformation brewed," February 11, 2026.

  5. Android Headlines, "Anthropic Softens 'Anti-ChatGPT' Super Bowl Ad After OpenAI Callout," February 2026.

  6. Anadolu Agency, "OpenAI, Anthropic CEOs avoid handshake at India AI summit," February 19, 2026.

  7. Business Standard, "AI Impact Summit Day 4: Modi redraws AI roadmap," February 19, 2026.

  8. TechCrunch, "All the important news from the ongoing India AI Impact Summit," February 16, 2026.

  9. CNBC, "India AI Impact Summit: What to expect as tech CEOs head to New Delhi," February 16, 2026.

  10. Wikipedia, "AI Impact Summit," accessed February 20, 2026.


RYOJI — Representative Director, Insynergy Inc. Decision Design × Decision Boundary — Designing where judgment begins and ends.

Japanese version is available on note.
