A Governance Framework That Goes on the Offensive
A recent article published by NRI (Nomura Research Institute) titled "AI Risks and Responsibilities You Cannot Afford to Ignore: What AI Governance That Builds Competitive Advantage Looks Like" (Daisuke Takagi and Misuzu Takahashi, NRI JOURNAL, March 9, 2026) stands out from the typical "AI is dangerous" genre of governance writing.
Its central reframe is worth acknowledging: AI governance is not a set of rules to prevent use, but a set of conditions that make responsible use possible. The authors explicitly raise the cost of not adopting AI — asking how abstaining from AI affects cost structures, decision speed, customer experience, and competitive positioning. That is a question most risk management frameworks are poorly equipped to answer. The shift from defensive to offensive governance is clearly intentional.
That said, when the article is examined for what it places at the center of its argument, four themes emerge:
- Risk classification and tolerance-setting
- Establishing an AI Center of Excellence (CoE) as a cross-functional governance hub
- Delegating operational governance to department-level hubs
- Building AI literacy and capability across the workforce
None of these is wrong. But if a reader finishes the article with a sense that something important remains unsaid, it is probably because the article addresses how to organize governance without arriving at a deeper question: how to design judgment itself as an organizational object.
Risk Analysis Is Now Table Stakes
The risk inventory the NRI article presents — hallucination, copyright infringement, prompt injection, model extraction, data poisoning — represents exactly what any organization deploying AI should understand. That is accurate.
But understanding these risks is increasingly the baseline expectation, not a source of competitive advantage.
By 2026, few serious organizations can claim ignorance of hallucination risks. The EU AI Act is in effect. Domestic AI regulations in Japan and across major economies are moving from voluntary guidelines toward enforceable frameworks. In this environment, the process of naming risks, classifying them, and setting tolerance thresholds is converging on a minimum standard for operational continuity — not a differentiator.
Consider the analogy: a hospital's infection control protocols are not a competitive strength. They are the minimum obligation to patients. The same dynamic is now shaping AI risk management. Competence in risk identification is becoming the floor, not the ceiling.
Where does competitive advantage actually emerge? Not in the ability to find risks, but in the ability to sustain sound judgment at scale across the moments when those risks become real. Identifying risk and architecting stable judgment under risk are not the same problem.
Shadow AI and Over-Restriction Share the Same Root
The NRI article accurately names what it calls two extremes observed in organizations: on one side, shadow AI — uncontrolled use driven by unclear rules and informal workarounds; on the other, governance so restrictive that it paralyzes adoption entirely.
This tension is real and widely observed. But the explanation that usually follows — that the problem is unclear messaging from leadership, or insufficiently communicated policies — is only partially right.
The more fundamental cause is that the boundary of judgment has not been designed. No one has explicitly defined who may decide what, under which conditions, and through which process.
When shadow AI emerges, individual employees are making judgment calls in the absence of structure. "I assume this level of risk falls within my discretion." "I don't know who to consult." "Consulting will result in a blanket refusal, so I won't ask." These are not failures of individual judgment. They are improvised responses to an organizational design gap.
When governance becomes over-restrictive, the same gap is present in inverted form. The conditions under which judgment is permitted have not been defined, so the default becomes prohibition. Without a designed path for judgment, everything stops.
The symptoms differ. The root is the same.
AI CoE Is Necessary — But It Defers the Core Question
An AI Center of Excellence is a well-founded governance structure. Bringing together legal, security, and business stakeholders to integrate offensive and defensive knowledge — to break down the silos that make governance ineffective — is the right direction.
But here is a question worth pausing on.
If the AI CoE's function is to translate executive-level risk tolerances into operational rules and tools that frontline teams can use without confusion (as the NRI article describes), then the quality of that translation depends on whoever is performing it inside the CoE. When that person moves to a different role, the translation quality changes. The governance maturity of the organization becomes contingent on the judgment and experience of a specific individual.
This is not a criticism of the AI CoE model. It is an observation that standing up a CoE organization and designing the structure of judgment are not the same act.
The NRI article itself notes that "as AI adoption cases multiply, it becomes unrealistic to expect the AI CoE to oversee everything." That recognition is precise. But the solution it points toward — building the capacity for autonomous judgment within each business unit — raises the same question again, one layer down.
Capability Building Is Necessary but Structurally Insufficient
"Enable frontline employees to make sound judgments." "Raise AI literacy across the organization." "Create accessible channels for consultation." These initiatives matter. They should be pursued.
But they share a structural characteristic that deserves scrutiny: they depend, ultimately, on the capability of individual people.
Governance built on individual capability functions when the right people are present. It degrades when those people transfer. It stops when they leave. Organizational knowledge accumulates in individual minds and does not transfer reliably to the institution. Training costs grow. Judgment quality continues to vary.
This is not a failure of training programs. It is the limit of treating capability development as a substitute for judgment design.
Training raises the average quality of individual judgment. It cannot replace the structure of judgment itself.
Scalable governance — governance that maintains consistent judgment quality without requiring exceptional individuals in every critical role — is a different design problem. It requires an answer that capability development alone cannot provide.
What Governments Sense, and What They Have Not Yet Designed
It is worth stepping back to observe a broader signal.
Governments are increasingly moving to require that autonomous AI agent systems include built-in mechanisms keeping human judgment involved, particularly given the risks of malfunction and privacy violations. Japan's Ministry of Internal Affairs and Communications and Ministry of Economy, Trade and Industry have signaled that updated AI business guidelines will mandate mechanisms to keep human judgment in the loop for autonomous AI agents and physical AI systems (Nikkei Shimbun, February 15, 2026). Similar orientations are visible in international governance discussions.
What this policy trajectory reveals is that society already has a strong intuition: humans must remain in the loop. The anxiety about fully delegating consequential decisions to AI systems has surfaced at the policy level.
But the harder design question remains largely unresolved in publicly available governance discourse: where should humans remain, and in what form?
Is it sufficient to retain a person who clicks an approval checkbox? Or does retention of human judgment require that the person hold genuine authority, relevant information, and real accountability for the outcome? When an exception arises, is there a person positioned to assume actual responsibility — not nominal responsibility?
The intuition to keep humans present is correct. The design language for how to do that with intention and structure is still underdeveloped, both in policy documents and in most enterprise governance frameworks.
The NRI article touches on this directly when it observes that organizations must design the judgment process itself for handling exceptions — not just the rules. But the specifics of how to design that process fall outside its scope.
Why Fixed Rules Break in the Age of AI Agents
The NRI article raises the issue of AI agents — systems that act autonomously without waiting for human instruction between steps. This is an important frontier.
Fixed rules function when the set of possible scenarios is finite and anticipatable. When every relevant situation can be enumerated in advance, rules are adequate governance instruments.
Autonomous AI agents do not operate in finite scenario spaces. They traverse multiple systems, execute parallel tasks, and encounter configurations that rule designers did not anticipate. The failure mode is not that the rules are too few — it is that the rule format is structurally unsuited to the situation.
What is needed is not a more exhaustive rulebook. It is a designed boundary: when something unexpected occurs, who assumes responsibility, at which organizational layer, and under what conditions?
This is not an exception-handling manual. It is a structural question about how the act of judgment is allocated across the organization — which layer assumes it, when, and with what authority.
The Design Gap That Governance Has Not Named
Risk has been analyzed. Organizational structures have been created. Capability development programs are underway.
But a question remains that existing governance language has not fully answered.
How is judgment itself designed as an organizational object?
Not AI performance. Not rule comprehensiveness. Not the talent level of the individuals in critical roles.
The core issue is whether the act of judgment — who assumes it, where, and under what conditions — has been intentionally designed as a stable organizational structure.
Governance has been discussed in the language of risk management, in the language of organizational design, in the language of capability development. But it has rarely been discussed in the language of judgment design. That gap is where the problem lives.
The framework that addresses this gap is Decision Design.
Decision Design is the discipline of designing judgment itself as an organizational object. Its central concept is the Decision Boundary (organizational governance) — the intentionally designed threshold that defines where accountability is allocated between AI-enabled systems and accountable human authority.
Decision Design asks: who decides, where in the process, and under what conditions. And it insists that this question must be answered by design, not left to circumstance.
What Decision Design Is — and What It Is Not
Decision Design asks: how is the act of judgment — who assumes it, where in the workflow, and under what conditions — structured within an organization?
Specifically, it treats three things as design objects:
- Who — which person or role, at which layer of the organization, assumes this judgment.
- Where — at which point in the operational workflow does this judgment arise.
- With what — what set of information, authority, and accountability does the responsible party hold when assuming this judgment.
This is not workflow design. It is not prompt engineering. It is not policy documentation. It is the design of judgment as an accountable organizational act.
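To make these design objects tangible, here is a minimal sketch in Python of how a judgment point might be captured as an explicit record. Every name in it (JudgmentPoint, OrgLayer, the field names) is hypothetical and introduced purely for illustration; this is not a published Decision Design schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class OrgLayer(Enum):
    """Illustrative organizational layers at which a judgment can sit."""
    FRONTLINE = "frontline"
    DEPARTMENT = "department"
    AI_COE = "ai_coe"
    EXECUTIVE = "executive"

@dataclass
class JudgmentPoint:
    """One judgment, captured as a designed object rather than a convention."""
    # Who: the role (not the individual) and layer that assume this judgment.
    who_role: str                 # e.g. "credit officer", never a person's name
    who_layer: OrgLayer
    # Where: the point in the operational workflow where the judgment arises.
    where_step: str               # e.g. "post-scoring, pre-disbursement"
    # With what: the information, authority, and accountability held.
    information_required: list[str] = field(default_factory=list)
    authority: str = ""           # what the role may approve or reject
    accountability: str = ""      # what the role answers for if it goes wrong
```

The value of such a record is not the code itself but the discipline it imposes: each field must be filled by an explicit design decision, so an empty field becomes a visible governance gap rather than a silent one.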
Decision Design is not:
- Not an AI adoption strategy. The question is not how to use AI more broadly, but who assumes responsibility for what AI generates.
- Not an AI risk management framework. Risk classification is the starting point, not the destination. Decision Design addresses what remains after risk is understood: in the moments where judgment is still required, who holds it?
- Not a capability development program. Decision Design does not rely on the expectation that skilled individuals will make good judgments. It builds the structure within which judgment is exercised — regardless of who fills the role.
- Not a checklist or policy library. Checklists are tools for abbreviating judgment. Decision Design is concerned not with abbreviating judgment but with correctly allocating it.
The problem Decision Design addresses is created when AI outputs — recommendations, analytical inputs, operational guidance, or de facto decisions — enter an organization while the location of responsibility, approval authority, exception handling, and accountability remain unclear.
An AI-generated credit assessment. An AI-recommended M&A target. An AI agent that autonomously adjusts supply chain allocations. Is responsibility for these outcomes suspended in the air because "AI did it," or does the organization formally assume it because a human approved it?
In the age of AI agents, pre-specifying every scenario is not possible. When an agent acts in an unanticipated way, if no boundary has been designed for who assumes final responsibility, the organization is left in an accountability vacuum.
Decision Boundary — Designing the Threshold
The Decision Boundary (organizational governance) is the conceptual tool that makes judgment design operational.
It is composed of two distinct thresholds:
The Human Judgment Decision Boundary is the threshold at which human review, interpretation, approval, or assumption of responsibility becomes necessary — where AI delegation ends and human accountability begins.
The Governance Decision Boundary is the threshold at which escalation, exception handling, cross-functional authority, or institutional governance must take over — where ordinary operational judgment is insufficient and the organization's governance structure must engage.
These are not the same threshold, and conflating them creates design failures. Requiring every AI output to cross the Governance Decision Boundary produces paralysis. Leaving the Human Judgment Decision Boundary undefined produces shadow AI and accountability gaps.
Both boundaries must be designed explicitly — not inferred from policy intent, not delegated to individual discretion.
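A minimal sketch, under assumed names and threshold values, of what keeping the two boundaries separate might look like. The predicates, fields, and constants below are hypothetical; real boundary conditions would be derived from the organization's risk tolerance and regulatory context.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    """Illustrative shape of an AI output arriving at a boundary check."""
    confidence: float               # model-reported confidence, 0.0 to 1.0
    monetary_impact: float          # estimated impact of acting on the output
    in_anticipated_scenarios: bool  # did the rule designers foresee this case?

# Hypothetical thresholds, standing in for designed boundary conditions.
HUMAN_REVIEW_CONFIDENCE = 0.90
GOVERNANCE_IMPACT_LIMIT = 1_000_000

def crosses_human_judgment_boundary(out: AIOutput) -> bool:
    """AI delegation ends here: a human must review, interpret, or approve."""
    return out.confidence < HUMAN_REVIEW_CONFIDENCE

def crosses_governance_boundary(out: AIOutput) -> bool:
    """Operational judgment ends here: institutional governance must engage."""
    return (not out.in_anticipated_scenarios
            or out.monetary_impact > GOVERNANCE_IMPACT_LIMIT)
```

The design point is that these are two predicates, not one. Collapsing them reproduces the failure modes above: either every output is routed to institutional governance, or routine human review is never triggered at all.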
When these boundaries are left undefined, judgment drifts. Capable individuals compensate informally, and governance depends on their continued presence. When they move on, the structure collapses. Or, more quietly, AI outputs become operational decisions without anyone having formally assumed responsibility for them.
Designing the Decision Boundary is not about assigning blame in advance. It is about ensuring the sustainability of judgment — that an organization can make the same category of decision repeatedly, at consistent quality, at scale, without requiring exceptional individuals in every role.
What This Looks Like in Practice
Decision Design is not abstract philosophy. The following represents the practical architecture it produces within an organization.
The first step is to map judgment points — not AI touchpoints, but the moments in any workflow where someone makes a decision, approves an outcome, or handles an exception. When a workflow is decomposed into information-processing steps and judgment steps, most organizations discover that accountability at several judgment points is genuinely ambiguous.
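As a sketch of the mapping exercise, assuming hypothetical names throughout: workflow steps are tagged as information processing or judgment, and the judgment steps with no assigned accountable role are surfaced explicitly.

```python
from dataclasses import dataclass
from enum import Enum

class StepKind(Enum):
    INFORMATION_PROCESSING = "information_processing"  # transform, retrieve, summarize
    JUDGMENT = "judgment"                              # decide, approve, handle exceptions

@dataclass
class WorkflowStep:
    name: str
    kind: StepKind
    accountable_role: str | None = None  # None means accountability is ambiguous

def find_ambiguous_judgment_points(steps: list[WorkflowStep]) -> list[str]:
    """Return judgment steps whose accountability has not been assigned."""
    return [s.name for s in steps
            if s.kind is StepKind.JUDGMENT and s.accountable_role is None]

# A non-empty result is the expected, useful outcome of the mapping exercise:
# it names the design gaps that were previously invisible.
gaps = find_ambiguous_judgment_points([
    WorkflowStep("summarize application file", StepKind.INFORMATION_PROCESSING),
    WorkflowStep("approve credit limit", StepKind.JUDGMENT),  # no role assigned
])
```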
The second step is to assign a delegation classification to each judgment point: AI-delegable (no human review required), human review required (AI processes, human verifies before execution), human final approval required (AI informs, human decides explicitly), or AI-excluded (AI not used in this judgment process). These classifications should reflect the organization's risk tolerance and regulatory context — not informal convention.
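The four classifications lend themselves to an explicit, versioned artifact rather than informal convention. A sketch, with hypothetical judgment-point names:

```python
from enum import Enum

class DelegationClass(Enum):
    """The four delegation classifications described above."""
    AI_DELEGABLE = "ai_delegable"            # no human review required
    HUMAN_REVIEW = "human_review"            # AI processes, human verifies first
    HUMAN_FINAL_APPROVAL = "human_approval"  # AI informs, human decides explicitly
    AI_EXCLUDED = "ai_excluded"              # AI not used in this judgment

# Hypothetical assignment reflecting one risk tolerance and regulatory context;
# in practice this map is reviewed, versioned, and owned, not improvised.
DELEGATION_MAP = {
    "invoice data extraction": DelegationClass.AI_DELEGABLE,
    "credit assessment": DelegationClass.HUMAN_REVIEW,
    "refund above policy limit": DelegationClass.HUMAN_FINAL_APPROVAL,
    "disciplinary decisions": DelegationClass.AI_EXCLUDED,
}
```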
The third step is to design escalation routes triggered by boundary conditions, not by individual discretion. When an AI output crosses a threshold — low confidence score, anomalous value, novel pattern — the routing to the appropriate Human Judgment Decision Boundary or Governance Decision Boundary should be automatic and pre-defined. The AI CoE's role here is not to be the escalation destination for everything. Its role is to standardize the boundary conditions themselves.
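A sketch of such pre-defined routing, with the two boundary predicates from the earlier sketch reduced to boolean inputs; the route names are hypothetical:

```python
def route(crosses_governance: bool, crosses_human_judgment: bool) -> str:
    """Routing decided by boundary conditions, not individual discretion.

    Order matters: the Governance Decision Boundary takes precedence over
    routine human review, and auto-execution is the explicit default only
    for outputs inside the AI-delegable classification.
    """
    if crosses_governance:
        return "governance_escalation"  # institutional governance engages
    if crosses_human_judgment:
        return "human_review_queue"     # a named accountable role reviews
    return "auto_execute"
```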
The fourth step is to design exception handling as an organizational learning process, not an individual improvisation. When something falls outside the anticipated scenario space, there must be a defined path: who detects it, who is notified, within what timeframe a decision is made, how it is recorded, and how accumulated exceptions trigger policy review.
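One way to make the defined path concrete is a structured record per exception category. The field names and example values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExceptionPath:
    """A pre-defined path for out-of-scenario events."""
    detected_by: str               # role responsible for noticing the exception
    notify: str                    # role or body that must be informed
    decision_deadline_hours: int   # timeframe within which a decision is made
    recorded_in: str               # where the event and its handling are logged
    review_after_n_events: int     # accumulated exceptions triggering policy review

# Illustrative instance for an autonomous supply chain agent.
SUPPLY_CHAIN_AGENT_EXCEPTIONS = ExceptionPath(
    detected_by="operations lead",
    notify="department head and AI CoE",
    decision_deadline_hours=24,
    recorded_in="judgment log",
    review_after_n_events=5,
)
```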
The fifth step is to design judgment logs — records not of what AI processed, but of who assumed which judgment, on what basis, and with what outcome. This is not primarily an audit trail. It is the data infrastructure for improving organizational judgment quality over time, and for maintaining accountability when AI agent behavior is later questioned.
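A sketch of what one entry in such a log might record, assuming hypothetical field names. Note the subject of the record: the human act of assuming a judgment, not the AI's processing.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class JudgmentLogEntry:
    """One assumed judgment, recorded for accountability and later review."""
    timestamp: datetime
    judgment_point: str         # which designed judgment point this was
    assumed_by_role: str        # the accountable role, not just a user id
    basis: str                  # the information the judgment rested on
    decision: str               # what was decided
    outcome: str | None = None  # filled in later, enabling quality review

entry = JudgmentLogEntry(
    timestamp=datetime.now(timezone.utc),
    judgment_point="credit assessment sign-off",
    assumed_by_role="credit officer",
    basis="AI score plus manual review of two flagged items",
    decision="approved with reduced limit",
)
```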
The sixth step is to redefine the AI CoE's function. Rather than serving as the central decision-maker for individual AI use cases, the CoE becomes the designer and maintainer of boundary condition standards — the conditions under which AI delegation is appropriate, the thresholds that trigger human review, and the escalation paths that activate institutional governance. This shift resolves the scale problem, eliminates key-person dependency, and prevents governance from becoming a formality.
Structure First, Then Education
The logic of Decision Design inverts the conventional sequence.
The conventional approach is: train people, then have trained people make good judgments. The Decision Design approach is: design the judgment structure first, then make education meaningful within that structure.
Without a prior design of judgment architecture, training programs tend to produce awareness rather than competence — general caution about AI risks rather than specific behaviors tied to specific judgment points. With judgment architecture in place, training has a target: at this point in the workflow, verify these conditions; when this boundary is crossed, route to this escalation path.
The goal is not to produce people who are generally thoughtful about AI. The goal is to produce an organization that makes consistent, accountable, auditable judgments at scale — with or without exceptional individuals in every seat.
The Question Every Organization Must Answer
Who assumes responsibility for what AI generates?
Whether that question has a designed answer — or whether it is left to circumstance, individual judgment, and informal convention — is the fundamental governance distinction of the AI era.
Knowing the risks is the entry point. Organizing the governance structure is the foundation. But without designing judgment as an organizational object, no enterprise has a structure capable of bearing the weight of AI-era accountability.
Decision Design does not ask how to use AI. It asks whether judgment — who assumes it, where, and under what conditions — has been designed.
That question is waiting for an answer in most organizations today.
Ryoji Morii
Founder & Representative Director, Insynergy Inc.
AI Governance / Decision Design / Organizational Judgment Architecture
insynergy.io/en/insights
References
- Daisuke Takagi and Misuzu Takahashi, "AI Risks and Responsibilities You Cannot Afford to Ignore: What AI Governance That Builds Competitive Advantage Looks Like," NRI JOURNAL, Nomura Research Institute, March 9, 2026.
- "Government to Mandate 'Human Judgment' Mechanisms for AI Agents and Physical AI," Nikkei Shimbun, February 15, 2026. https://www.nikkei.com/article/DGXZQOUA136YP0T10C26A2000000/
- Ministry of Internal Affairs and Communications / Ministry of Economy, Trade and Industry, AI Business Guidelines (Version 1.1), March 28, 2025.
- Cabinet Office, AI Basic Plan: Japan's Revival Through Trustworthy AI, approved by Cabinet, December 23, 2025. https://www8.cao.go.jp/cstp/ai/ai_plan/aiplan_20251223.pdf