
Generative AI and the Limits of Rights Doctrine: Why Japan's Ministry of Justice Is Really Asking a Question About Judgment Design

Japan's Ministry of Justice may appear to be clarifying generative AI rights doctrine. The deeper issue, however, is judgment design: who decides, where authority shifts back from AI to humans, and how accountability is preserved across distributed review processes.

A quiet signal from Tokyo

In 2026, Japan's Ministry of Justice convened an expert panel to begin the legal organization of rights-infringement issues arising from generative AI. According to reporting by the Asahi Shimbun, the panel's scope covers publicity rights, voice rights, deepfake pornography, and civil tort liability, with a consolidated set of findings expected in July.

Read narrowly, this looks like a familiar exercise: a legal system adapting its existing doctrinal categories to a new technology. How far does voice protection extend? At what point does a generated face that resembles a real one become an infringement? What civil remedies are available to victims of deepfakes? These are unavoidable questions, and a domestic legal framework will have to answer them.

But reading this development purely as a rights doctrine story captures less than half of what is actually happening.

The real question is not what the rights are, but who decides

Generative AI rights infringement differs from prior forms of infringement in three structural ways.

First, the speed of infringement production now far exceeds the speed of human moderation. Second, attribution is unstable. Is the infringing actor the user who wrote the prompt, the company that provided the model, the party that supplied the training data, or some combination? Third, the boundary of infringement is continuous rather than discrete. Between "resembles," "looks like," and "is identical," the line has to be drawn somewhere, and no one arrives at that line by pure deduction.

Legal organization offers reference points for drawing lines in this continuous space. By itself, however, it does not tell an operating company what to do on a Tuesday afternoon.

Suppose a synthetic voice sounds similar to a specific person's voice. Who determines the degree of similarity? An automated classifier, a first-line moderator, in-house counsel, or ultimately a court? At what stage, by whom, and on what grounds is the verdict of "this is not acceptable" actually reached? Unless these questions are answered operationally, legal clarification remains suspended above the ground where decisions are made.

The rights-infringement problem is therefore, at the same time, a problem of who holds decision authority, who bears responsibility, and how remedies are designed.

Three vantage points, each insufficient on its own

Several vantage points have been offered to address this.

The first is technology. Improve detection models, scrub problematic content from training data, apply filters at the output stage. All necessary, none sufficient. Technology enforces lines that have already been drawn; it cannot, by itself, decide where the line should be.

The second is AI ethics. Statements of principle — transparency, fairness, accountability, human-centeredness — orient organizations toward values worth respecting. But principles, by their nature, do not resolve individual cases. They point toward a direction; they do not adjudicate specific outputs.

The third is governance. Policies, committees, audit mechanisms — these establish the container within which decisions are made. They are indispensable. Yet governance organizes the vessel of decision-making; it does not design the substance of the decisions themselves. A well-governed organization can still produce incoherent judgments if the decisions inside the vessel are left unstructured.

Technology, ethics, and governance are each necessary. Summed together, they still leave a layer unaddressed.

"Keep a human in the loop" is not a design

Japan's AI Business Guidelines version 1.2, published jointly by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry, calls on developers of autonomous AI agents to build mechanisms that require human judgment, in view of risks such as malfunction and privacy intrusion. The direction is right. AI should not be entrusted with everything; human judgment must remain structurally present.

But "insert a human" is a requirement, not a design.

What tends to happen in practice is this. An AI produces an initial classification. Only the ambiguous cases reach the human reviewer. The reviewer becomes fatigued, is measured on throughput, and applies criteria inconsistently. Similar cases yield different outcomes. Over time, the fact that a human was involved remains, but the quality of judgment is not preserved.

Putting humans in the loop and designing their judgment are different things.

Who reviews, at what stage, with what information, against what criteria, within what time budget? Where is the decision recorded? How is it audited? How does it feed back into subsequent decisions and model behavior? Without engagement at this level, "a human was involved" becomes an alibi rather than an accountability structure.

Returning to the opening questions

A synthetic voice circulates, indistinguishable from a real person's. Whose voice is it? An image is generated that closely resembles a known character. Where does homage end and infringement begin? A deepfake is reported. Who issues the takedown decision — the classifier, the moderator, legal, the executive team?

These are questions of legal interpretation. They are also, and more immediately, questions about where judgment is located and how its boundaries are drawn.

When the Ministry of Justice publishes its consolidated findings, operating companies will still face these decisions every day, thousands of times a day. Legal organization offers reference points. The judgments themselves remain on the ground.

The missing layer

What emerges from this is that rights infringement in the generative AI era is not solvable from any single layer.

Even when all four are present, technology, ethics, governance, and human involvement still leave something unaddressed: the design of judgment itself. This layer is not DX (digital transformation), not governance, not automation, not AI ethics. It is none of these, and it cuts across all of them. After the Ministry of Justice completes its legal organization in July, the layer each company will confront is this one.

This layer already has a name.

Decision Design: what it is, what it is not, what problem it addresses

Decision Design treats the act of judgment within an organization as an object of design.

What Decision Design designs. It designs who holds decision authority, the scope of that authority, the criteria applied, the sequence of escalation, the record of judgment, and the verifiability of that record. It does not design individual decisions; it designs the structure within which decisions are produced.

What Decision Design is not. It is not business process automation. It is not the drafting of authority matrices. It is not the proclamation of AI ethics principles. These may be instruments of Decision Design, but they are not Decision Design itself.

What problem Decision Design addresses. In an era where humans and AI share judgment, the ambiguity of where authority sits and where boundaries lie produces three predictable failures: vacuums of responsibility, drift in criteria, and delay in remedy. Decision Design is the concept that directly addresses this layer.

Decision Design is not about improving decisions alone; it is about designing the authority structure within which decisions become institutionally legitimate.

Why governance, DX, automation, and AI ethics are not substitutes

Decision Design is not a rebranding of existing concepts.

Governance designs the container. Policies, committees, audits, rules of procedure. These establish the conditions under which decisions occur, but do not prescribe the substance of decisions.

DX designs the operation. Processes, data flows, customer experience. It changes how work moves, but does not ask how judgment is structured.

Automation designs the task. Repetitive and routine operations. It expands the domain where judgment is not required, rather than designing judgment itself.

AI ethics designs the principles. The values to be honored, the harms to be avoided. It points in a direction, but does not draw specific lines.

Decision Design is not a sub-concept beneath these. It cuts across them. Inside the container of governance, within the operations of DX, at the edge of what automation has absorbed, and wherever AI ethics must be translated into a specific case, the question "who decides, where, on what basis" inevitably arises. Decision Design is the discipline that gives this question an explicit architecture.

Decision Boundaries: the institutional demarcation of authority

At the center of Decision Design is the concept of Decision Boundaries.

A Decision Boundary articulates two things. First, who decides: the AI system, a first-line moderator, a specialist reviewer, legal counsel, executive leadership, or an external committee. Second, how far authority is delegated and where it must be taken back: the range in which the AI is permitted to act, and the point at which human authority explicitly resumes.

Crucially, a Decision Boundary is not a single line. It is a series of demarcations, each calibrated to a different stage and a different level of consequence. A continuous problem requires continuous boundaries.

Decision Boundaries are not operational thresholds; they are institutional demarcations of legitimate authority.

This distinction matters. An operational threshold can be tuned by an engineer based on throughput. An institutional demarcation reflects who, within the organization, is legitimately empowered to decide. The two look similar from a distance, and confusing them is how accountability quietly dissolves.
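
To make the distinction legible, here is a minimal sketch in TypeScript of what one demarcation in the series might look like as an explicit record. Every name here (Authority, DecisionBoundary, the individual fields) is a hypothetical illustration, not a prescribed schema; the point is that the holder of authority and the conditions of hand-back are readable fields, not thresholds buried in configuration.

```typescript
// A minimal sketch: one demarcation in the series, expressed as a record
// that names who decides and where authority resumes. All identifiers
// here are hypothetical illustrations, not a prescribed schema.

type Authority =
  | "ai_system"
  | "first_line_moderator"
  | "specialist_reviewer"
  | "legal_counsel"
  | "executive"
  | "external_committee";

interface DecisionBoundary {
  id: string;                   // e.g. "post-generation-output-scoring"
  holder: Authority;            // who legitimately decides inside this zone
  scope: string;                // the kinds of cases the holder may decide alone
  escalatesTo: Authority;       // where authority explicitly resumes
  escalationCriteria: string[]; // conditions under which authority is handed back
  criteriaVersion: string;      // demarcations change; the changes must be traceable
  reviewCadenceDays: number;    // the boundary itself is periodically re-examined
}
```

Changing `holder` or `escalationCriteria` in a record like this is visibly a policy decision with an owner, which is exactly what separates it from a tunable threshold.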

An implementation view: rights-infringement review across five boundaries

Consider how this plays out in a concrete setting: a generative AI service handling voice, likeness, and character-related rights infringement.

First boundary — pre-generation prompt layer. When a prompt explicitly references a real person or a well-known protected character, the system performs an initial classification and refuses or warns. This is an AI-held zone. The criteria are relatively discrete, and the cost of a misclassification is relatively low. Even so, the boundary is not static. Refusal rates and false-refusal complaints are reviewed on a fixed cadence, and when thresholds are exceeded, the criteria themselves are revised by a human reviewer.
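
A sketch of how this first boundary might behave, assuming a hypothetical classifier signal and purely illustrative threshold values. The gate decides alone inside its discrete zone; drift in its own metrics is what sends the criteria back to a human.

```typescript
// Sketch of the first boundary, assuming a hypothetical classifier API.
// The gate acts alone in the discrete, low-consequence zone; the criteria
// are what get reviewed on a cadence, by a human, when metrics drift.

type GateVerdict = "allow" | "warn" | "refuse";

interface GateMetrics {
  refusalRate: number;            // share of prompts refused in the review window
  falseRefusalComplaints: number; // user appeals upheld in the review window
}

function promptGate(referencesProtectedSubject: boolean, confidence: number): GateVerdict {
  if (!referencesProtectedSubject) return "allow";
  // Discrete criterion: explicit reference to a real person or protected character.
  return confidence > 0.9 ? "refuse" : "warn";
}

// The boundary is not static: when either metric crosses its limit,
// the criteria themselves go back to a human reviewer for revision.
function criteriaNeedHumanRevision(m: GateMetrics): boolean {
  const REFUSAL_RATE_LIMIT = 0.05; // illustrative values only
  const COMPLAINT_LIMIT = 20;
  return m.refusalRate > REFUSAL_RATE_LIMIT || m.falseRefusalComplaints > COMPLAINT_LIMIT;
}
```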

Second boundary — post-generation output scoring. Outputs are scored for similarity: likeness match, voiceprint match, and other relevant metrics. The AI handles this. But outputs that fall into a defined ambiguity range are routed to the next boundary. How wide that ambiguity range should be is itself a design choice, calibrated against the service's profile, the sensitivity of the rights-holders involved, and the historical distribution of complaints. This is a variable to be maintained, not a constant to be set.
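
A sketch of that routing, with an explicitly versioned ambiguity band; the band edges and field names are illustrative assumptions, not recommended values.

```typescript
// Sketch of the second boundary: similarity scoring with an explicit
// ambiguity band. Scores below the band pass, above it are blocked,
// and inside it authority hands over to the first-line moderator.
// The band's edges are maintained variables, not constants.

interface SimilarityScores {
  likeness: number;   // 0..1, e.g. a face-similarity metric
  voiceprint: number; // 0..1
}

interface AmbiguityBand {
  lower: number;   // below: the AI may allow on its own authority
  upper: number;   // above: the AI may block on its own authority
  version: string; // the band is recalibrated over time; versions are traceable
}

type Route = "auto_allow" | "auto_block" | "human_review";

function routeOutput(s: SimilarityScores, band: AmbiguityBand): Route {
  const score = Math.max(s.likeness, s.voiceprint); // most conservative signal wins
  if (score < band.lower) return "auto_allow";
  if (score > band.upper) return "auto_block";
  return "human_review"; // inside the band, the AI's authority ends
}

// Illustrative calibration for a service with sensitive rights-holders:
const currentBand: AmbiguityBand = { lower: 0.35, upper: 0.85, version: "2026-02.3" };
```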

Third boundary — first-line human review. Ambiguous outputs and external complaints reach a first-line moderator. The moderator is not handed a case and asked to "decide." They receive the flagged output, the similarity scores, reference material, prior comparable decisions, a standard handling time, and clearly defined escalation criteria. Every decision is logged: not only the verdict (remove, hold, allow), but also the reasoning that grounds it. This is the point where the record becomes structurally important.
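
A sketch of the structure handed to the moderator and the record captured from them, with hypothetical field names. The design point is that the reviewer receives structure rather than a bare case, and that reasoning is a required field, not an optional comment.

```typescript
// Sketch of the third boundary: what the first-line moderator receives
// and what gets logged. Field names are hypothetical illustrations.

interface ReviewPacket {
  outputId: string;
  similarityScores: { likeness: number; voiceprint: number };
  referenceMaterial: string[];        // e.g. the claimed rights-holder's assets
  priorComparableDecisions: string[]; // log IDs of similar past cases
  handlingTimeBudgetMinutes: number;  // the time budget is explicit, not implied
  escalationCriteria: string[];       // defined in advance, not improvised
}

type Verdict = "remove" | "hold" | "allow" | "escalate";

interface ModeratorDecision {
  packetId: string;
  reviewerId: string;
  decidedAt: string;       // ISO timestamp
  verdict: Verdict;
  reasoning: string;       // required: the grounds, not only the outcome
  criteriaVersion: string; // which version of the criteria was applied
}
```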

Fourth boundary — specialist and legal review. Cases the moderator classifies as borderline escalate to a specialist team. These typically involve publicity rights, voice rights, deepfake edge cases, monetization decisions, and scope-of-distribution questions: cases with material legal implications. Here, AI scores and similar-case references do not determine the outcome. Context, intent, the nature of the harm, and the necessity of remedy are weighed together. This is the zone where humans explicitly take authority back, and from a Decision Design perspective, what matters is that this reclamation is defined in advance rather than left implicit.

Fifth boundary — executive and external committee. Policy-level decisions, changes to prior standards, and cases with significant social implications reach executive leadership and an external committee. This is the designated point of final accountability.
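
Taken together, the hand-off points can be enumerated as an explicit escalation route rather than discovered case by case. A minimal sketch, with hypothetical authority names; boundaries one and two are collapsed here into the AI-held zone.

```typescript
// Sketch of the five boundaries chained together as an explicit
// escalation route. Authority names are hypothetical; what matters is
// that the hand-off points are enumerated, not discovered ad hoc.

const escalationRoute = [
  "ai_system",            // boundaries 1-2: prompt gate and output scoring
  "first_line_moderator", // boundary 3: ambiguous cases and complaints
  "specialist_legal",     // boundary 4: material legal implications
  "executive_committee",  // boundary 5: policy changes, final accountability
] as const;

type Holder = (typeof escalationRoute)[number];

// Escalation moves one step at a time and is always recorded, so any
// decision is traceable back through every preceding boundary.
function escalate(from: Holder): Holder | null {
  const i = escalationRoute.indexOf(from);
  return i >= 0 && i < escalationRoute.length - 1 ? escalationRoute[i + 1] : null;
}
```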

What makes these five boundaries function as a system is that they are connected: any decision at any level is traceable through the records produced at each preceding stage.

Decision Logs: the continuity of accountability

The most frequently underbuilt component of Decision Design is the log.

At each boundary, every decision is captured in structured form: who decided, when, on what basis, with reference to what material. These records serve four purposes simultaneously — post-hoc verification, criteria revision, response to legal inquiries, and feedback into model and process improvement.

Decision Logs do not merely record outputs; they preserve accountability continuity across distributed judgment processes.

This is adjacent to, but distinct from, the audit trails familiar to governance. Governance auditability asks whether the process was followed. Decision Design verifiability asks whether the judgment was sound. The former confirms procedural compliance; the latter permits substantive review. In a hybrid human-AI decision chain, where authority shifts across boundaries within minutes, only structured logs preserve continuity of accountability across that distribution.
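
A sketch of a log entry built for that continuity, with hypothetical field names. Each entry names the boundary, the holder, the basis, and the grounds, and links back to the entry that preceded it, so a final decision can be walked back through every stage.

```typescript
// Sketch of a Decision Log entry serving the four purposes at once:
// post-hoc verification, criteria revision, legal inquiry response,
// and feedback into improvement. All field names are hypothetical.

interface DecisionLogEntry {
  entryId: string;
  boundaryId: string;           // which demarcation this decision occurred at
  decidedBy: string;            // system ID or human reviewer ID
  holder: "ai" | "human";       // who held authority at this moment
  decidedAt: string;            // ISO timestamp
  basis: string;                // the criterion invoked, by name and version
  referencedMaterial: string[]; // what the decider actually looked at
  verdict: string;
  reasoning: string;            // substantive review needs grounds, not outcomes
  previousEntryId?: string;     // links the chain across boundaries
}

// Traceability check: can a final decision be walked back through every
// preceding boundary? If the chain breaks, accountability has a gap.
function traceChain(entries: Map<string, DecisionLogEntry>, finalId: string): DecisionLogEntry[] {
  const chain: DecisionLogEntry[] = [];
  let cur = entries.get(finalId);
  while (cur) {
    chain.unshift(cur);
    cur = cur.previousEntryId ? entries.get(cur.previousEntryId) : undefined;
  }
  return chain;
}
```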

What this means for AI services

The implication for any service confronting rights-infringement decisions at scale is concrete.

Boundaries must be made explicit, not inherited from ad hoc practice. The AI-held zone, the first-line moderation zone, the specialist zone, the legal zone, and the executive zone should each be named, bounded, and recorded. Escalation criteria should be defined in advance and applied consistently. Log fields should capture rationale, not only outcome. Criteria should be versioned, so that changes over time are themselves traceable. Gray-zone cases should be treated as expected inputs, not as exceptions to be handled informally.

Where authority is delegated to AI, the delegation must be documented as such. Where authority is taken back by humans, the reclamation must be documented as such. Where authority is suspended pending further review, the suspension must be documented as such. None of this is procedural window-dressing. It is the infrastructure on which the legitimacy of the overall decision system rests.
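
A sketch of those three movements of authority recorded as first-class events, with hypothetical names and an invented example. Delegation, reclamation, and suspension each carry their own owner and grounds, rather than being inferred later from side effects.

```typescript
// Sketch: authority movements documented as first-class events.
// All names and the example values are hypothetical illustrations.

type Transition = "delegated_to_ai" | "reclaimed_by_human" | "suspended_pending_review";

interface AuthorityTransition {
  boundaryId: string;
  transition: Transition;
  effectiveAt: string;  // ISO timestamp
  authorizedBy: string; // the human owner of the transition itself
  grounds: string;      // why the movement of authority was legitimate
}

const example: AuthorityTransition = {
  boundaryId: "post-generation-output-scoring",
  transition: "suspended_pending_review",
  effectiveAt: "2026-07-02T09:00:00+09:00",
  authorizedBy: "head_of_trust_and_safety",
  grounds: "complaint spike from a named rights-holder; band recalibration pending",
};
```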

Returning to the opening questions, one last time

Whose voice is that. Where does resemblance become infringement. Who ultimately stops the content.

Legal interpretation will answer parts of these questions. The Ministry of Justice's July findings will provide one such reference point. But the thousands of decisions that occur every day on an operating platform are not legal interpretations. They are decisions made in reference to legal interpretation, applied to specific cases. Who makes them, on what basis, with what record — this is what determines whether rights-infringement response actually works.

Conclusion

Over the coming years, the debate around generative AI rights infringement will shift from legal organization to operational implementation. Legislation will supply reference points. Guidelines will indicate direction. Companies will build. What will determine the quality of what gets built is not technology, not ethics, and not governance alone. It is the design of judgment itself, running across all three.

The era in which "we have a human in the loop" was sufficient is ending. Once the human is present, the question becomes how that human's judgment is designed, where the boundary with AI is drawn, and how the decision is recorded and verified. Organizations that do not engage at this level will find themselves holding responsibility inside an institutional vacuum.

Decision Design is the concept that gives that vacuum a shape.

A Japanese version of this article is available on note.
