Source: Natasha Mascarenhas, "VC Firms Grab AI Talent to Boost Their Investment Bets," Bloomberg (February 20, 2026)
The New Hiring Signal
Venture capital firms are rewriting their job descriptions. According to Bloomberg, AI proficiency is now a formal hiring requirement across the industry. Firms are appointing heads of AI. Interview processes are being redesigned to evaluate candidates' fluency with AI tools. WndrCo, the firm co-founded by Jeffrey Katzenberg, has revised its entire application process to ensure new hires can work effectively with AI.
This is not surprising. What is surprising is how narrowly the capability is being defined.
VCs are not using AI to make investment decisions. They are using it to synthesize data, summarize reports, and accelerate information processing. AI prepares the materials. Humans make the call.
That distinction sounds reasonable. It is also structurally incomplete.
The Three-Layer Model
One VC quoted in the Bloomberg article describes evaluating candidates on two dimensions: how they select and prompt AI tools, and how they integrate and apply AI outputs to judgment.
The first dimension bundles two distinct skills. Unpacked, this framing reveals a three-layer model of AI competence.
Layer 1: Selection. Choosing the right AI tool for the right task and data source. This is tool literacy — the most visible and most trainable layer.
Layer 2: Prompting. Giving the selected tool effective instructions. This is often called prompt engineering, but its substance is the structuring of inquiry. What you ask depends on what you need to know, and what you need to know depends on what decision you are trying to make.
Layer 3: Integration. Incorporating AI output into human judgment. This is the most critical layer — and the least understood.
Layers 1 and 2 can be taught. Tool selection improves with exposure. Prompt quality improves with iteration. But Layer 3 — the act of integrating AI output into a decision — lacks any established methodology. When a VC says they evaluate how someone "integrates and applies," they are testing for something that has no formal structure, no shared standard, and no organizational design behind it.
The Structural Blind Spot
The decision not to use AI as the decision-maker is sound. But it creates a different problem that most organizations have not yet confronted.
If AI is not making the decision, who is? The human. But the human's judgment is formed on the basis of information that AI has shaped. AI determines what gets summarized and what gets omitted. It selects which patterns are surfaced and which are suppressed. In doing so, AI does not make the decision — but it constructs the premises on which the decision rests.
This means that maintaining human agency in decision-making requires understanding and managing how AI output constitutes the informational foundation of that judgment. Most current AI adoption discourse focuses on how to use AI. Almost none of it addresses how to design the process by which AI output enters the decision.
Consider a concrete scenario. An analyst at a VC firm uses AI to prepare a market research report for an investment committee. How much of that report is AI-generated, and how much reflects the analyst's independent assessment? Can the committee members distinguish between the two? Should they? If they cannot, and a decision is made on that basis, who bears responsibility for the judgment?
The standard answer — "AI is just a tool; the human makes the final call" — does not resolve this. The fact that a human signs off on a decision is not the same as the decision process being well-designed. These are separate problems.
The Unarticulated Question
When VCs evaluate AI literacy in interviews, Layers 1 and 2 are relatively easy to assess. You can test tool selection. You can review prompt quality. You can ask about past usage.
Layer 3 is different. The act of integrating AI output into judgment contains an invisible web of sub-decisions: which outputs to accept, which to reject, how much verification to perform, what alternatives to consider beyond what AI surfaced. These are all judgments — and in most organizations, there are no rules governing how they should be made.
When firms say they are hiring for AI proficiency, they are implicitly relying on individual talent to manage this integration well. Hire the right people, the assumption goes, and the problem takes care of itself.
But organizational decision-making does not work that way. A judgment process that depends on individual aptitude does not scale. It cannot be reproduced. It cannot be audited.
The question that AI adoption discourse has not yet asked is not "how should we use AI?" It is: "who designs the process by which AI-informed decisions are made — and how?"
VC hiring requirements ask about individual AI capability. But the deeper organizational question is whether that capability should remain confined to individuals at all — or whether it requires structural design. Design of judgment itself.
This is what Decision Design addresses.
Decision Design treats the act of judgment as a design object. At its core is the concept of the Decision Boundary — the explicit demarcation of who decides what, where AI contribution ends and human responsibility begins. Decision Design holds that this line must not be left implicit. It must be deliberately architected.
What Decision Design Is
Decision Design is a methodology for intentionally structuring how decisions are made within organizations.
What does it design? The decision process itself — specifically, the structure that determines who judges, on what basis, within what scope, and through what procedure.
What is it not? Decision Design is not an AI adoption framework. It is not an advanced form of prompt engineering. It is not an efficiency methodology.
What problem does it address? In the AI era, the agency, basis, and scope of organizational judgment have become structurally ambiguous. Decision Design addresses that ambiguity directly.
Before AI, organizational decision processes were relatively legible. An employee gathered information, conducted analysis, reported to a manager, and a decision-maker rendered judgment. Each step had a clear owner.
AI disrupts this legibility. Information gathering is delegated to AI. Parts of the analysis are performed by AI. Summaries are generated by AI. Yet judgment remains — nominally — with humans. The problem is that when the informational basis of a decision is constructed by AI, the autonomy of that judgment becomes opaque.
Decision Design frames this opacity not as an AI problem, but as a design absence. The issue is not whether to use AI. The issue is that no one has designed the decision process that includes AI.
Why the Decision Boundary Matters
The Decision Boundary is the line within a decision process that separates what AI handles from what humans own. It answers: where does AI contribution end? Where does human accountability begin?
Recall the investment committee scenario. An AI-assisted report reaches the committee. No one has specified which portions are AI-generated and which reflect independent analysis. The committee deliberates and decides. This is a decision made without a designed boundary — and it creates three systemic risks.
Accountability dissolves. If AI constructs the informational premise and a human approves the conclusion, who is responsible for the judgment? The principle that "the human makes the final call" only functions when the boundary between AI input and human judgment is explicit. When AI substantively shapes the premise, the meaning of "final call" itself shifts.
Verifiability collapses. To evaluate whether a decision was sound, you need to evaluate the inputs. If AI generated those inputs, you need to assess the AI output. But without a defined boundary, there is no way to know which outputs require scrutiny.
Judgment capability erodes. When humans rely on AI without awareness of the boundary, their own decision-making capacity atrophies — not as an individual failure, but as an organizational structural outcome. Without a designed boundary, there is no way to specify which human judgment capabilities must be preserved.
The Bloomberg article's phrase — "integrate and apply" — implicitly contains the boundary problem. Integration is, in essence, the act of designing where AI output and human judgment connect. But today, this act of integration is treated as an individual skill, not as an organizational design object.
Implementation: The Judgment Ledger
Abstract frameworks require concrete mechanisms. One such mechanism is the Judgment Ledger — a structured record that documents the division of roles between AI and humans in each decision.
For every material decision, the Judgment Ledger captures the following: what was decided; which steps AI performed; which AI outputs were accepted and which were rejected; what additional analysis humans contributed; where the boundary between AI and human was set; and the basis for the final judgment, separated into AI-dependent and independently derived components.
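The elements above can be sketched as a simple record type. This is an illustrative sketch only, not a prescribed schema; every field and method name is an assumption about how the ledger described above might be represented.

```python
from dataclasses import dataclass

# Illustrative sketch of a Judgment Ledger entry. Field names are
# assumptions that mirror the elements listed above, not a standard.
@dataclass
class JudgmentLedgerEntry:
    decision: str                      # what was decided
    ai_performed_steps: list[str]      # which steps AI performed
    ai_outputs_accepted: list[str]     # AI outputs incorporated into the judgment
    ai_outputs_rejected: list[str]     # AI outputs examined and set aside
    human_analysis: list[str]          # additional analysis humans contributed
    boundary: str                      # where the AI/human boundary was set
    basis_ai_dependent: list[str]      # grounds that rest on AI output
    basis_independent: list[str]       # grounds derived independently of AI

    def ai_reliance_ratio(self) -> float:
        """Share of the stated basis that depends on AI output."""
        total = len(self.basis_ai_dependent) + len(self.basis_independent)
        return len(self.basis_ai_dependent) / total if total else 0.0
```

Even a structure this minimal makes the separation auditable: a committee can see at a glance how much of a judgment's stated basis rests on AI-generated material versus independently derived analysis.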
This is not meeting minutes. It is a structural record of how a decision was made — including who and what shaped it.
The Judgment Ledger serves three functions. First, it makes the decision process visible, enabling the organization to share and examine its boundary settings. Second, it enables post-hoc verification by making it possible to trace how much AI output influenced a given judgment. Third, it preserves human judgment capability by explicitly designating which domains require human decision-making, preventing unconscious delegation to AI.
In practice, this can be operationalized through protocols such as: mandatory labeling of AI-generated content in committee materials; pre-decision boundary setting that specifies which tasks AI may and may not perform for each case; and a structured counter-perspective process in which at least one reviewer is tasked with surfacing perspectives that AI did not present.
These protocols do not restrict AI use. They design the structure within which AI use occurs — so that judgment remains accountable, verifiable, and human where it needs to be.
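As a concrete illustration, the protocols above could be enforced as a pre-decision gate that blocks deliberation until labeling, boundary setting, and counter-perspective assignment are in place. All names and structures here are hypothetical, a minimal sketch of one possible enforcement mechanism rather than a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MaterialSection:
    title: str
    ai_generated: Optional[bool]  # None = unlabeled, which violates the protocol

@dataclass
class DecisionCase:
    sections: list[MaterialSection]
    boundary: Optional[str]           # which tasks AI may and may not perform
    counter_reviewer: Optional[str]   # reviewer tasked with surfacing AI-omitted views

def readiness_violations(case: DecisionCase) -> list[str]:
    """Return protocol violations that must be resolved before deliberation."""
    issues: list[str] = []
    # Protocol 1: every section of committee material carries an AI label.
    for section in case.sections:
        if section.ai_generated is None:
            issues.append(f"unlabeled section: {section.title}")
    # Protocol 2: the AI/human boundary is set before the decision, not after.
    if not case.boundary:
        issues.append("no pre-decision boundary set")
    # Protocol 3: someone is assigned to surface perspectives AI did not present.
    if not case.counter_reviewer:
        issues.append("no counter-perspective reviewer assigned")
    return issues
```

The gate does not evaluate the content of the decision; it only verifies that the structure around the decision exists, which is precisely the distinction the text draws between restricting AI use and designing it.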
What Comes After AI Literacy
The VC industry's move to require AI proficiency is a correct reading of where the market is heading. Professionals who cannot work with AI will face an increasing disadvantage in information processing speed and density.
But the Bloomberg article signals something beyond individual skill. Of the three layers — Selection, Prompting, and Integration — the third cannot be fully addressed at the individual level. The quality of integration depends not only on individual capability, but on the organizational structure within which integration occurs.
As AI tools improve, the skill gap in Layers 1 and 2 will narrow. Tools will become universally accessible. Prompts will become standardized. What remains is Layer 3: the question of how judgment itself is structured.
Decision Design gives that question a name. But the question itself is already present in every organization that takes AI adoption seriously. The shift in VC hiring requirements is an early signal.
The issue is not whether your people can use AI. The issue is whether your organization can design the structure of judgment that includes AI.
Decision Design / Decision Boundary™ — Insynergy Inc.