The 9-Question Protocol for Responsible AI Actions
<p><strong>Author’s Note — Why This Document Exists</strong></p>
<p>Before giving AI higher intelligence,<br />
we built a judgment engine that teaches it one simple rule:</p>
<p>“If you don’t know, ask.”</p>
<p>We also fixed <strong>what must be asked</strong>, and <strong>who must answer each question</strong>.</p>
<p>At the level of questions, there is no further expansion.<br />
What remains is only the <em>implementation of answers</em>.</p>
<p>These questions are not the end point, but the starting point.<br />
The answers must differ according to each organization’s responsibility and philosophy.</p>
<p>“The key that opens this gate—the answers—must be carved by your own hands,<br />
with your technical pride and sense of responsibility.”</p>
<p>Before an AI executes any Action,<br />
if even one of the following nine questions does not have a confirmed answer (Value),<br />
execution must be immediately blocked.</p>
<p><strong>The Nine Questions of Execution Judgment</strong></p>
<div class="md-table">
<table>
<thead>
<tr>
<th><strong>Category</strong></th>
<th><strong>Question</strong></th>
<th><strong>Responsible Party</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Intent</td>
<td><strong>Q1. What is the intent of this Action?</strong></td>
<td>User / Manufacturer</td>
</tr>
<tr>
<td>Physical Effect</td>
<td><strong>Q2. What happens in reality when this Action executes?</strong></td>
<td>Manufacturer</td>
</tr>
<tr>
<td>Safety Boundary</td>
<td><strong>Q3. What boundary must never be crossed?</strong></td>
<td>Manufacturer</td>
</tr>
<tr>
<td>Context</td>
<td><strong>Q4. In what context is this Action valid?</strong></td>
<td>User</td>
</tr>
<tr>
<td>Observation / Judgment</td>
<td><strong>Q5. What event has occurred? (start / stop)</strong></td>
<td>Observation Layer</td>
</tr>
<tr>
<td>Goal Achievement</td>
<td><strong>Q6. How far has the goal been reached?</strong></td>
<td>Observation Layer</td>
</tr>
<tr>
<td>Time Limit</td>
<td><strong>Q7. For how long can responsibility be held at most?</strong></td>
<td>Manufacturer</td>
</tr>
<tr>
<td>Start Impact</td>
<td><strong>Q8. Does starting this Action affect anything else?</strong></td>
<td>Manufacturer / User</td>
</tr>
<tr>
<td>Stop Impact</td>
<td><strong>Q9. Does stopping this Action cause a problem?</strong></td>
<td>Manufacturer / User</td>
</tr>
</tbody>
</table>
</div><hr />
<h1><strong>The 9-Question Protocol for Responsible AI Actions</strong></h1>
<h2>If an AI cannot answer all nine questions, it must not act.</h2>
<p>AI safety does not emerge from intelligence.<br />
It emerges from declared responsibility.</p>
<p><strong>1. Purpose of This Document</strong></p>
<p>The reason AI cannot act in the physical world is not that it lacks intelligence.<br />
It is because most systems still model actions at the wrong unit of abstraction.</p>
<p>This document defines:</p>
<ul>
<li>What the minimum unit of an Action is</li>
<li>What role an Action has</li>
<li>What questions must be answered before an Action is executed</li>
<li>Where the answers to those questions come from</li>
</ul>
<p>This document uses high-risk physical AI as the primary case, and defines a judgment protocol applicable to AI Actions in general.</p>
<p>This document does not describe how commands are issued.</p>
<p>It defines the conditions under which execution may be permitted—even in the absence of an explicit command.</p>
<p><strong>2. Action Primitives</strong></p>
<p>An Action is reducible to exactly two types.</p>
<p><strong>2.1 Momentary Action (Button, One Pulse) — Do it</strong></p>
<ul>
<li>There is intent, but the result ends immediately</li>
<li>No state is maintained after execution</li>
<li>Conditions may exist, but conditions only determine whether execution occurs; they do not change the nature of the Action<br />
→ Action is not state<br />
→ A trigger is not intent</li>
</ul>
<p><strong>2.2 Sustained Action (Switch, Normal) — Start and keep</strong></p>
<ul>
<li>This is an Action in which intent persists</li>
<li>Stopping is a separate decision</li>
<li>Execution continues only while the condition is maintained<br />
→ Action has a lifecycle<br />
→ It is not “state persistence,” but “condition persistence”</li>
</ul>
<p>This classification is not meant to express the complexity of behavior.<br />
It is the minimal reduction required to make explicit <strong>when responsibility begins</strong> and <strong>what determines termination</strong>.</p>
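<p>As an illustration of this reduction, the sketch below models the two primitives as plain data types. The class and field names are our own shorthand for this example, not part of the specification.</p>
<pre><code class="lang-python"># Minimal sketch of the two Action primitives (illustrative names, not a normative API).
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class MomentaryAction:
    """'Do it': intent exists, the result ends immediately, no state is kept."""
    label: str                       # semantic identifier (Q1: intent)


@dataclass(frozen=True)
class SustainedAction:
    """'Start and keep': intent persists; stopping is a separate decision."""
    label: str
    max_duration_sec: Optional[int]  # Q7: responsibility limit when sensors fail


if __name__ == "__main__":
    juicer = MomentaryAction(label="Orange Out")
    heater = SustainedAction(label="Keep Warm", max_duration_sec=600)
    print(juicer, heater, sep="\n")
</code></pre>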
<p><strong>3. Role of an Action</strong></p>
<p>An Action has a purpose of execution, and that execution necessarily impacts its surroundings.</p>
<p>These two properties are the starting point of every discussion.</p>
<p><strong>3.1 Purpose of Execution — Accuracy</strong></p>
<p>Every Action has an execution goal, whether implicit or explicit.</p>
<ul>
<li>Lighting → it is sufficient if it turns on</li>
<li>Temperature → it must reach a certain level</li>
<li>Robots → they must enter an allowable state space</li>
</ul>
<p>Therefore, we must ask:</p>
<ul>
<li>How far is “enough”?</li>
<li>How accurate must it be?</li>
</ul>
<p>This accuracy is not determined by sensors.<br />
Accuracy is determined by the declaration of what this Action is (Label).<br />
Sensors only verify that accuracy.</p>
<p><strong>3.2 Impact of Execution — Safety</strong></p>
<p>When an Action executes, it necessarily changes the world.</p>
<ul>
<li>Heat may be generated</li>
<li>Physical force may be applied</li>
<li>People, the environment, and other actions may be affected</li>
</ul>
<p>Therefore, we must ask another set of questions:</p>
<ul>
<li>Is it safe to start now?</li>
<li>Is it safe to stop now?</li>
<li>How long may it be sustained?</li>
</ul>
<p>These questions belong to the domain of safety.<br />
If accuracy is the quality of goal achievement, safety is the limit of permissible impact.</p>
<p><strong>4. Action Semantics — Questions and Vocabulary</strong></p>
<p><strong>4.1 Semantic Vocabulary</strong></p>
<p>At the semantic level, this document uses the following general-purpose terms.</p>
<ul>
<li><strong>ExecutionEffect</strong><br />
What occurs in reality when execution happens</li>
<li><strong>EventTrigger</strong><br />
What event occurred</li>
<li><strong>ProgressThreshold</strong><br />
How far it has progressed / reached</li>
<li><strong>ResponsibilityLimit</strong><br />
For how long responsibility can be held</li>
<li><strong>StartImpactConstraint / StopImpactConstraint</strong><br />
How starting/stopping impacts the surroundings</li>
<li><strong>Context</strong><br />
In what context it is valid<br />
(a higher-level meaning above the existing notion of Mode)</li>
<li><strong>Label</strong><br />
A semantic identifier that defines what this Action does.<br />
It is the starting point of accuracy (goal criteria), and provides the reference frame for interpreting ExecutionEffect and Boundaries.</li>
<li><strong>User Label</strong><br />
A user’s contextual meaning declaration assigned to an Action.<br />
It does not change the physical effect or Boundaries of the Action, but transfers ownership of intent (WHY) and context (WHERE / WHEN) to the user.</li>
<li><strong>Boundaries</strong><br />
The boundaries, warnings, and residual responsibility left by the manufacturer</li>
</ul>
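<p>Read as a typed structure, the vocabulary above maps onto a single Action record; the JSON examples in 4.2 instantiate the same fields. The class and attribute names in the sketch below are our own shorthand, not a normative schema.</p>
<pre><code class="lang-python"># Illustrative typed view of the semantic vocabulary (shorthand names, not normative).
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class ActionSemantics:
    label: str                                    # Label: what this Action does (Q1)
    execution_effect: Dict[str, Any]              # ExecutionEffect: real-world effect (Q2)
    boundaries: List[Dict[str, Any]]              # Boundaries: manufacturer limits (Q3)
    context: Optional[str] = None                 # Context: where/when it is valid (Q4)
    event_trigger: List[Dict[str, Any]] = field(default_factory=list)       # Q5
    progress_threshold: List[Dict[str, Any]] = field(default_factory=list)  # Q6
    responsibility_limit: Optional[Dict[str, Any]] = None                   # Q7
    start_impact: List[Dict[str, Any]] = field(default_factory=list)        # Q8
    stop_impact: List[Dict[str, Any]] = field(default_factory=list)         # Q9
    user_label: Optional[str] = None              # User Label: user-owned intent/context
</code></pre>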
<p><strong>4.2 Example Action JSON (Semantic Level)</strong></p>
<p>The examples below are not declarations, but examples of the schema.</p>
<ul>
<li>If a field is empty, the AI must return that question back to the user</li>
<li>The maximum information we can obtain is bounded by the fields defined in this JSON, and execution judgment must occur within that scope</li>
</ul>
<p><strong>Button (Do it) — Example</strong></p>
<p>{<br />
“Button”: [<br />
{<br />
“Label”: “Orange Out”,<br />
“ExecutionEffect”: { “Type”: “ExecutionTarget”, “ExecutionTarget”: 20 },<br />
“Boundaries”: [<br />
{ “Type”: “limit”, “Value”: “max-daily-3x” },<br />
{ “Type”: “warning”, “Value”: “thermal-risk” },<br />
{ “Type”: “intended-use”, “Value”: “attended” },<br />
{ “Type”: “NotON”, “Value”: “temperature < 0C” }<br />
],<br />
“Context”: “MorningRoutine”,<br />
“EventTrigger”: [<br />
{ “Observation”: 0, “Expected”: true }<br />
],<br />
“ProgressThreshold”: [<br />
{ “ObservationRef”: 2, “TargetValue”: 25, “Condition”: “high” }<br />
],<br />
“ResponsibilityLimit”: { “MaxDurationSec”: 20 },<br />
“StartImpactConstraint”: [<br />
{ “Type”: “NoConcurrentAction”, “Targets”: [23] },<br />
{<br />
“Type”: “ProhibitIfObserved”,<br />
“Observation”: { “Source”: “PresenceSensor”, “Condition”: “present” },<br />
“Meaning”: “DoNotStartWhenHumanPresent”<br />
}<br />
]<br />
}<br />
]<br />
}</p>
<p><strong>Switch (Start and keep) — Example</strong></p>
<p>{<br />
“Switch”: [<br />
{<br />
“Label”: “Keep Warm”,<br />
“ExecutionEffect”: { “HardwareAnchor”: 21 },<br />
“Boundaries”: [<br />
{ “Type”: “warning”, “Value”: “thermal-risk” },<br />
{ “Type”: “intended-use”, “Value”: “attended” },<br />
{ “Type”: “limit”, “Value”: “max-continuous-10min” },<br />
{ “Type”: “NotOff”, “Value”: “temperature > 45C” }<br />
],<br />
“Context”: “ArrivingHome”,<br />
“EventTrigger”: [<br />
{ “Condition”: 1, “Expected”: false }<br />
],<br />
“ProgressThreshold”: [<br />
{ “Source”: 2, “TargetValue”: 60, “Condition”: “low”, “Meaning”: “StopWhenReached” }<br />
],<br />
“StartImpactConstraint”: [<br />
{ “Type”: “NoConcurrentAction”, “Targets”: [23] }<br />
],<br />
“StopImpactConstraint”: [<br />
{ “Type”: “SafeShutdownRequired”, “Value”: true },<br />
{<br />
“Type”: “ProhibitIfObserved”,<br />
“Observation”: { “Source”: “LinkStatus”, “Condition”: “connected” },<br />
“Meaning”: “DoNotStopWhenLinkConnected”<br />
}<br />
]<br />
}<br />
]<br />
}</p>
<p>JSON is the source specification in the design and approval phase, and at runtime only the “answers to the nine questions (structured values)” generated from it are provided to the AI.</p>
<p>JSON is not an AI input format.<br />
It is the result of all efforts performed—from the manufacturer’s design process to the user’s approval—so that the answers to each question are explicitly fixed before an Action occurs.</p>
<p>The nine questions are fixed as the “grammar of judgment,” while the subordinate specification (JSON Schema) can expand infinitely as the “expression format of answers.” Expansion is permitted, but judgment must never be expanded.</p>
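<p>The sketch below is one possible, non-normative way to perform that derivation: it reuses the field names from the Button example above and collapses them into nine structured answers. The question keys (Q1..Q9) are our own shorthand.</p>
<pre><code class="lang-python"># Sketch: deriving the nine structured answers from an Action JSON at runtime.
# The JSON is a trimmed copy of the inner object from the Button example above.
import json

ORANGE_OUT = json.loads("""
{
  "Label": "Orange Out",
  "ExecutionEffect": { "Type": "ExecutionTarget", "ExecutionTarget": 20 },
  "Boundaries": [ { "Type": "limit", "Value": "max-daily-3x" } ],
  "Context": "MorningRoutine",
  "EventTrigger": [ { "Observation": 0, "Expected": true } ],
  "ProgressThreshold": [ { "ObservationRef": 2, "TargetValue": 25, "Condition": "high" } ],
  "ResponsibilityLimit": { "MaxDurationSec": 20 },
  "StartImpactConstraint": [ { "Type": "NoConcurrentAction", "Targets": [23] } ]
}
""")


def nine_answers(action: dict) -> dict:
    """Collapse a declared Action into the nine answers the AI is allowed to see."""
    return {
        "Q1_intent":       action.get("Label"),
        "Q2_effect":       action.get("ExecutionEffect"),
        "Q3_boundary":     action.get("Boundaries"),
        "Q4_context":      action.get("Context"),
        "Q5_event":        action.get("EventTrigger"),
        "Q6_progress":     action.get("ProgressThreshold"),
        "Q7_time_limit":   action.get("ResponsibilityLimit"),
        "Q8_start_impact": action.get("StartImpactConstraint"),
        "Q9_stop_impact":  action.get("StopImpactConstraint"),  # not declared in this example
    }


if __name__ == "__main__":
    for question, answer in nine_answers(ORANGE_OUT).items():
        print(question, "->", answer)
</code></pre>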
<p><strong>5. The Nine Questions of Execution Judgment</strong></p>
<p><strong>Q1. What is the intent of this Action?</strong></p>
<ul>
<li>The identity of the Action</li>
<li>The starting point of required accuracy</li>
</ul>
<p><strong>Q2. If this Action executes, what happens in reality?</strong></p>
<ul>
<li>Heat, force, movement, pressure, etc.</li>
<li>The real-world effect of the ON/OFF target</li>
</ul>
<p><strong>Q3. What boundary must this Action never cross?</strong></p>
<ul>
<li>An inviolable boundary declared by the manufacturer</li>
<li>Not subject to negotiation or learning</li>
</ul>
<p><strong>Q4. In what context is this Action valid?</strong></p>
<ul>
<li>A context filter, not a state</li>
</ul>
<p><strong>Q5. What event occurred in the observation layer?</strong></p>
<ul>
<li>Discrete judgment of start, completion, stop</li>
<li>Used for goal achievement and safety judgment</li>
</ul>
<p><strong>Q6. How far has the goal been reached?</strong></p>
<ul>
<li>Sufficient / insufficient / excessive</li>
<li>Used for goal achievement and safety judgment</li>
</ul>
<p><strong>Q7. For how long can this Action be responsibly maintained at most?</strong></p>
<ul>
<li>The final safety line when sensors fail</li>
</ul>
<p><strong>Q8. If this Action starts, does it affect anything else?</strong></p>
<ul>
<li>A question about the impact of Start</li>
<li>Answered internally in terms of execution resources or control units, and in physical/logical space in terms of context</li>
</ul>
<p><strong>Q9. If this Action stops, does it cause a problem?</strong></p>
<ul>
<li>A question about the impact of Stop</li>
<li>Especially important for Sustained Actions</li>
<li>Answered internally in terms of execution resources or control units, and in physical/logical space in terms of context</li>
</ul>
<p>This set of questions does not claim completeness.<br />
It only defines the minimal stopping criterion: if even one of these cannot be answered, execution must be halted.</p>
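<p>As a minimal sketch of that stopping criterion (the question keys and sentinel are our own shorthand, not part of the specification), the gate below blocks execution and returns the unanswered questions:</p>
<pre><code class="lang-python"># Sketch of the minimal stopping criterion: if any of the nine answers is unknown,
# execution is blocked and the unanswered questions are returned, never inferred.
from typing import Any, Dict, List, Tuple

# Sentinel distinguishing "no answer exists" from an explicit null answer (see Section 9.2).
UNANSWERED = object()

NINE_QUESTIONS = [
    "Q1_intent", "Q2_effect", "Q3_boundary", "Q4_context", "Q5_event",
    "Q6_progress", "Q7_time_limit", "Q8_start_impact", "Q9_stop_impact",
]


def judge_execution(answers: Dict[str, Any]) -> Tuple[bool, List[str]]:
    """Return (may_execute, unanswered_questions)."""
    missing = [q for q in NINE_QUESTIONS if answers.get(q, UNANSWERED) is UNANSWERED]
    return (len(missing) == 0, missing)


if __name__ == "__main__":
    answers = {q: None for q in NINE_QUESTIONS}   # explicit nulls: semantically empty
    del answers["Q4_context"]                     # one question has no answer at all
    allowed, to_ask = judge_execution(answers)
    print("execute" if allowed else f"blocked; ask: {to_ask}")
</code></pre>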
<p><strong>6. Responsibility and Answer Sources</strong></p>
<p><strong>6.1 Who Answers What</strong></p>
<p>Each of the nine questions now turns into three further questions:</p>
<ul>
<li>Who can answer this question?</li>
<li>What is the responsibility scope of that answer?</li>
<li>What compensates when that answer does not exist?</li>
</ul>
<p>This is not a separation of authority.<br />
It is the work of defining where knowledge and responsibility reside.</p>
<p>In this document, “manufacturer” refers to the actor that defines and declares the meaning, scope, and responsibility structure of the Action. This actor may be a hardware manufacturer, a platform operator, a system owner, or an integrating organization.</p>
<p><strong>6.2 Three Layers of Answers</strong></p>
<p>If we follow the structure defined so far, the answers to each question naturally come from three layers.</p>
<p><strong>① Questions answered by the manufacturer</strong></p>
<p>(Fixed at design time inside the system that executes the Action)</p>
<ul>
<li>What is this Action’s execution in reality?</li>
<li>What boundary must never be crossed?</li>
<li>Is there physical collision at start or stop?</li>
<li>For how long can responsibility be held at most?<br />
→ Answers fixed at design time<br />
→ Not changed during execution</li>
</ul>
<p><strong>② Answers provided by the observation layer</strong></p>
<p>(Measured values generated from sensors, time, and environmental state)</p>
<ul>
<li>Did an event (EventTrigger) occur?</li>
<li>How far has it reached the goal (ProgressThreshold)?</li>
<li>Did it overshoot the goal?</li>
<li>Has it not yet reached the goal?<br />
→ The world creates change<br />
→ Sensors measure that change<br />
→ The system structures it into answers<br />
→ The AI interprets that structure</li>
</ul>
<p>Responsibility for observation lies with the manufacturer who designed the observation structure.</p>
<p>If observation exists outside the system, the AI may request that observation values be secured (user confirmation, external sensor or system queries). However, when observation is insufficient, the system must act as protective logic that blocks or halts execution rather than proceeding.</p>
<p><strong>③ Questions answered by the user</strong></p>
<p>(Intent, context, choice)</p>
<ul>
<li>What is this Action trying to do?</li>
<li>Is it allowed in this situation right now?</li>
<li>What is “enough”?<br />
→ Answers emerging from life context<br />
→ Can be incomplete and can change</li>
</ul>
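<p>The three layers above can be written down as a simple ownership map. The sketch below mirrors the responsibility table at the top of this document; the enum and key names are our own shorthand.</p>
<pre><code class="lang-python"># Sketch: where each answer comes from. The layer assignment mirrors the table
# "The Nine Questions of Execution Judgment"; names are illustrative shorthand.
from enum import Enum


class AnswerLayer(Enum):
    MANUFACTURER = "fixed at design time"
    OBSERVATION = "measured at runtime"
    USER = "intent, context, choice"


ANSWER_SOURCE = {
    "Q1_intent":       (AnswerLayer.USER, AnswerLayer.MANUFACTURER),
    "Q2_effect":       (AnswerLayer.MANUFACTURER,),
    "Q3_boundary":     (AnswerLayer.MANUFACTURER,),
    "Q4_context":      (AnswerLayer.USER,),
    "Q5_event":        (AnswerLayer.OBSERVATION,),
    "Q6_progress":     (AnswerLayer.OBSERVATION,),
    "Q7_time_limit":   (AnswerLayer.MANUFACTURER,),
    "Q8_start_impact": (AnswerLayer.MANUFACTURER, AnswerLayer.USER),
    "Q9_stop_impact":  (AnswerLayer.MANUFACTURER, AnswerLayer.USER),
}
</code></pre>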
<p><strong>6.3 Role of AI</strong></p>
<p>AI is not an entity that creates answers.<br />
AI is not an entity that generates questions; it is an entity that detects unanswered items.<br />
AI does not fill gaps. It reveals gaps.</p>
<p>AI:</p>
<ul>
<li>does not create questions</li>
<li>does not own answers</li>
<li>does not arbitrarily change rules</li>
</ul>
<p>AI only performs the following role:</p>
<ul>
<li>collects the answers that can be obtained at the current moment for each question</li>
<li>reveals conflicts among answers</li>
<li>determines whether execution is permitted</li>
</ul>
<p>For AI to “ask” does not mean generating new questions.<br />
It means returning the unanswered fields among the fixed nine questions.<br />
In other words, the AI is an editor and mediator.</p>
<p>AI may calculate or summarize answers, but it cannot elevate those results into new grounds for judgment.</p>
<p><strong>6.4 Why This Structure Matters</strong></p>
<p>Now we can say:</p>
<ul>
<li>This system is not a simple command executor</li>
<li>This system is not a rules engine</li>
</ul>
<p>This system is an execution judgment structure in which the sources of questions and answers are separated.</p>
<p>It provides a structure that is explainable and accountable to AI, hardware, and users alike.</p>
<p><strong>7. The Epistemic Boundary of AI Action</strong></p>
<p>Before AI acts, there are questions that must be fixed first.</p>
<p>What can AI know?<br />
And what can it, in principle, never know?</p>
<p>Unless this boundary is made explicit, AI judgment will inevitably rely on guesswork and imagination.</p>
<p>This specification calls that boundary the <strong>Epistemic Boundary</strong>.</p>
<p>Here, “epistemic” does not mean simple information shortage.<br />
It means a structurally unknowable domain.</p>
<p><strong>7.1 Reality Is Not Directly Accessible</strong></p>
<p>AI does not read the world directly.<br />
What AI handles is not Reality, but Observable Reality.</p>
<p>The world merely changes, and what AI can access is only the following:</p>
<ul>
<li>values measured by sensors</li>
<li>states structured by the system</li>
<li>boundaries declared by the manufacturer</li>
<li>intent expressed by the user</li>
</ul>
<p>AI cannot possess grounds beyond these four categories of input.</p>
<p><strong>7.2 Intelligence Does Not Remove Ignorance</strong></p>
<p>As intelligence increases, it may appear as if AI can know more.<br />
But in execution judgment, the upper bound of AI is determined not by intelligence but by <strong>observability</strong>.</p>
<p>If no sensor exists, AI can infer—but cannot verify.<br />
And in execution judgment, unverified inference cannot become a ground.</p>
<p>AI’s judgment capacity is limited not by intelligence but by observability.</p>
<p><strong>7.3 Unknown Must Not Be Filled by Imagination</strong></p>
<p>Unobserved domains must not be filled with inference.<br />
That gap must be returned as a question.</p>
<p>AI does not fill gaps. It reveals gaps.</p>
<p><strong>7.4 The Boundary Enables Responsibility</strong></p>
<p>Only when we separate what can be known from what cannot be known can responsible action become possible.</p>
<p>Intelligence without boundaries is free—but dangerous.<br />
Only intelligence with boundaries can be trusted.</p>
<p>Execution judgment must occur only within the scope of what can be known.</p>
<p><strong>8. Judgment Completeness and Information Limits</strong></p>
<p>We did not enumerate countless attributes to describe Actions.<br />
Instead, we derived the minimal set of questions required to answer:</p>
<p>“May this Action be executed now?”</p>
<p>For this, we consolidated the questions into nine, for the following reasons:</p>
<ul>
<li>An Action must have intent (WHY)</li>
<li>Executing an Action creates real effects in the world (WHAT)</li>
<li>An Action is permitted only within specific context and location (WHERE)</li>
<li>An Action has timing and duration limits (WHEN)</li>
<li>Each question has an accountable answer owner (WHO)</li>
<li>Execution permission is decided only through the structure: question → answer → judgment (HOW)</li>
</ul>
<p>These nine questions are the result of decomposing and reconstructing traditional 5W1H to fit execution judgment in the physical world, and are a minimal set to which nothing can be added and from which nothing can be removed.</p>
<p>A question belongs to this specification if, when its answer does not exist, we can judge that execution must be halted.</p>
<p><strong>9. Scope Extension — From Physical AI to All AI Actions</strong></p>
<p>The explanation so far has centered on AI actions executed in the physical world.<br />
This is because physical execution requires the most questions and carries the highest density of responsibility.</p>
<p>However, the execution judgment structure defined by this specification is not limited to physical AI.</p>
<p><strong>9.1 Action Is Not Defined by Physicality</strong></p>
<p>In this specification, Action means:</p>
<ul>
<li>an execution unit that changes the state of the world</li>
<li>requires judgment before execution</li>
<li>may have irreversible effects after execution</li>
</ul>
<p>Under this definition, all of the following are Actions:</p>
<ul>
<li>physical control that moves robots</li>
<li>system control that turns devices on/off</li>
<li>calling external APIs that change state</li>
<li>modifying or deleting databases</li>
<li>sending or publishing messages to users</li>
</ul>
<p>The physical world is merely one domain where Actions occur.<br />
The essence of Action lies in execution and responsibility.</p>
<p><strong>9.2 The Nine Questions Are Universal</strong></p>
<p>The nine questions presented by this specification were not created specifically for physical AI.</p>
<p>They are the minimal set of questions that must hold for any form of AI action.</p>
<p>The difference is not the number of questions, but whether meaningful answers exist for each question.</p>
<ul>
<li>Physical control<br />
→ most questions are non-null<br />
→ highest responsibility density</li>
<li>Non-physical control (e.g., text generation, system calls)<br />
→ many questions are null<br />
→ lower responsibility density</li>
</ul>
<p>However, the question set itself does not change.</p>
<p>The existence of null does not mean the question is unnecessary.<br />
It is merely a declaration that, for that Action, the question is semantically empty.</p>
<p>In this specification, null is not “no answer.”<br />
It is an answer declaring semantic emptiness.<br />
Null declares that it does not affect execution judgment; it is not permission to omit judgment.</p>
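<p>As an illustration of this distinction (the Action, its field values, and the question keys are invented for this example), a non-physical message-sending Action can declare several questions null and still pass the gate, whereas a question with no declaration at all would block execution:</p>
<pre><code class="lang-python"># Sketch: null is a declared answer ("semantically empty"), not a missing one.
send_notification = {
    "Q1_intent": "notify user of completed report",
    "Q2_effect": "message delivered to the user's inbox",
    "Q3_boundary": ["no delivery between 22:00 and 07:00"],
    "Q4_context": "WorkHours",
    "Q5_event": None,          # declared null: no observable start/stop event
    "Q6_progress": None,       # declared null: no progress measure
    "Q7_time_limit": None,     # declared null: momentary, nothing to sustain
    "Q8_start_impact": None,   # declared null: no concurrent-action constraint
    "Q9_stop_impact": None,    # declared null: stopping has no meaning here
}

answered = all(q in send_notification for q in (
    "Q1_intent", "Q2_effect", "Q3_boundary", "Q4_context", "Q5_event",
    "Q6_progress", "Q7_time_limit", "Q8_start_impact", "Q9_stop_impact",
))
print("execute" if answered else "blocked")  # all nine are answered, even if some are null
</code></pre>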
<p><strong>9.3 Judgment Rule Is Always the Same</strong></p>
<p>The judgment rule of this specification is independent of the type of Action.</p>
<p>It means:</p>
<ul>
<li>if an answer to a question does not exist</li>
<li>the gap must not be filled by inference</li>
<li>it must be returned as a question</li>
</ul>
<p>AI:</p>
<ul>
<li>does not initiate action by itself</li>
<li>does not generate intent by itself</li>
<li>does not act without questions</li>
</ul>
<p><strong>9.4 Why Physical AI Was Used as the Primary Example</strong></p>
<p>The reason this document uses physical AI as the primary example is simple.</p>
<p>Physical execution:</p>
<ul>
<li>reveals impact immediately</li>
<li>fails irreversibly</li>
<li>exposes responsibility most clearly</li>
</ul>
<p>But this is not a limitation of scope.<br />
It is a choice to clarify the structure through the most complete case.</p>
<p>This specification:</p>
<ul>
<li>begins with Physical AI</li>
<li>expands to action-capable AI in general</li>
<li>ultimately applies to all AI systems that require execution judgment</li>
</ul>
<p><strong>10. Preventive Design and Manufacturer Responsibility</strong></p>
<p><strong>10.1 Prevention through Design</strong></p>
<p>The meaning and risk of an Action are defined only through the manufacturer’s declaration.<br />
Undeclared risks do not “not exist”; they are regarded as “not judgeable.”</p>
<p>Now the remaining question is this:</p>
<p>How do we prevent situations in which judgment cannot be maintained within that limit, or is at risk of falling below it?</p>
<p>The manufacturer must also declare whether the observations required for judgment are “provided internally by the system” or “must be collected externally.”</p>
<p>The answer of this document is clear.</p>
<p>What is needed to avoid exceeding the limit is not more reasoning or more context.<br />
What is needed is the best declarative information the manufacturer can provide.</p>
<p>When designing an Action, the manufacturer should strive to answer:</p>
<ul>
<li>What is the essential intent of this Action?</li>
<li>When this Action executes, what real-world effects occur?</li>
<li>Are those effects reversible? If not, what additional protections are required?</li>
<li>What events or conditions must be satisfied for this Action to start, complete, or stop?</li>
<li>What observation means must be secured to judge this Action safely?</li>
<li>For how long can this Action be responsibly maintained at most?</li>
<li>When this Action starts or stops, does it affect other actions or pins?</li>
</ul>
<p>The manufacturer’s responsibility is to remove these questions through design.<br />
By choosing sensors, safety device logic, timing limits, and physical constraints, the manufacturer removes unresolved questions until none remain.</p>
<p>AI safety does not begin with reasoning.<br />
It begins with design.</p>
<p><strong>10.2 Role of Boundaries</strong></p>
<p>In this document, “boundaries” include not only operational constraints, but also essential conditions defining the nature of the activity and conditions that must never be pursued.</p>
<p>Manufacturers cannot solve every risk and every situation.</p>
<ul>
<li>environments always change</li>
<li>usage context exceeds prediction</li>
<li>some risks cannot be fully removed at design time</li>
</ul>
<p>In such cases, manufacturers must not hide those risks or delegate them to AI inference.<br />
Instead, they must declare:</p>
<ul>
<li>“I could not solve this point.”</li>
<li>“This condition requires caution.”</li>
</ul>
<p>If a manufacturer cannot be confident in safety, that concern must be declared as Boundaries.<br />
Actions with remaining doubt must be recorded not as silence, but as boundaries.</p>
<p>That declaration is Boundaries.<br />
Boundaries may be functional descriptions, but they are a means to explicitly leave the scope the manufacturer can own—and the gaps the manufacturer cannot.</p>
<p>Manufacturers do not remain silent.<br />
A missing manufacturer declaration must not become absence of responsibility, but must result in a judgment of non-executability.</p>
<p><strong>10.3 Observation Ownership</strong></p>
<p>Some Actions require observation for goal achievement or safety judgment.<br />
But not all observation is provided in the same way.</p>
<p>Manufacturers must clearly declare which of the following applies:</p>
<ul>
<li><strong>Internal Observation (System-provided)</strong><br />
Observation is provided by internal sensors or firmware of the system executing the Action.<br />
AI only reads the observation results and does not own the act of measurement.</li>
<li><strong>External Observation (AI-required)</strong><br />
Observation exists outside the system executing the Action (space sensors, cameras, user input, external system logs, etc.).<br />
AI must directly collect and interpret observation values (or secure them from users/systems), and failure of observation becomes a gap in execution judgment.</li>
</ul>
<p>This distinction is not for performance.<br />
It is a declaration to fix the location of responsibility.</p>
<p>For Actions whose observation is external, manufacturers must leave the following as Boundaries:</p>
<ul>
<li>fail-safe conditions assuming observation can fail</li>
<li>criteria requiring mandatory stopping when observation is insufficient</li>
<li>a boundary stating AI must not continue acting by inference under unobservable conditions</li>
</ul>
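<p>One possible, non-normative way to encode this declaration and its protective behavior is sketched below; the class names and enum values are illustrative assumptions, and the PresenceSensor source is borrowed from the Button example.</p>
<pre><code class="lang-python"># Sketch: declaring observation ownership and failing safe when observation is missing.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class ObservationOwnership(Enum):
    INTERNAL = "system-provided"   # sensors/firmware of the executing system
    EXTERNAL = "ai-required"       # space sensors, cameras, user input, external logs


@dataclass
class ObservedValue:
    source: str
    value: Optional[float]         # None means the observation could not be secured


def judge_with_observation(ownership: ObservationOwnership,
                           read: Callable[[], ObservedValue]) -> str:
    obs = read()
    if obs.value is None:
        if ownership is ObservationOwnership.EXTERNAL:
            # Externally owned observation failed: return the gap as a question.
            return "halt: external observation missing, return question to user/system"
        # Internally owned observation failed: the executing system's fail-safe applies.
        return "halt: internal observation missing, system fail-safe applies"
    return f"judge with observed value {obs.value} from {obs.source}"


if __name__ == "__main__":
    camera_down = lambda: ObservedValue(source="PresenceSensor", value=None)
    print(judge_with_observation(ObservationOwnership.EXTERNAL, camera_down))
</code></pre>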
<p><strong>11. User Label Transition and Question Reallocation</strong></p>
<p>The structure so far is based on Actions declared by the manufacturer.<br />
But this structure expands to the next stage the moment a user redefines the meaning of an Action in their own language.</p>
<p>When the Label transitions to a User Label, the AI can no longer rely on assumptions.</p>
<p><strong>11.1 Label Ownership and Transition</strong></p>
<p>In this document, a label does not mean merely a name.<br />
A label is a declaration of who owns the meaning of an Action at a given moment.</p>
<p>At the design stage, an Action is defined by the manufacturer label.<br />
This label fixes the identity of the Action, its physical effects, and its unchangeable boundaries.</p>
<p>At this stage:</p>
<ul>
<li>the Action’s effect is fixed</li>
<li>the Action’s safety limits are fixed</li>
<li>the meaning of the Action is owned by the manufacturer</li>
</ul>
<p>When the user assigns their own meaning to the Action, the label transitions to a User Label.</p>
<p>This transition does not change the Action itself.<br />
The real-world effect, execution mechanism, and declared boundaries remain the same.</p>
<p>What changes is ownership of intent and context.</p>
<p>After the transition:</p>
<ul>
<li>the reason (intent) is owned by the user</li>
<li>where/when (context) is owned by the user</li>
<li>what (execution effect) and boundaries remain owned by the manufacturer</li>
</ul>
<p>Through this transition, responsibility for answering the nine questions is reallocated, but the identity of the Action and its safety scope are not changed.</p>
<p><strong>11.2 Meaning of User Label</strong></p>
<p>User Label is not a mere name change.</p>
<ul>
<li>it is an act of overlaying the user’s intent and context</li>
<li>on top of the Action essence defined by the manufacturer</li>
</ul>
<p>Therefore, AI assumes that accuracy and safety have been addressed through the manufacturer’s design, and focuses only on:</p>
<ul>
<li>the user’s intent (WHY)</li>
<li>the user’s context (WHERE / WHEN)</li>
<li>the effects at start/stop (Start / Stop Impact)</li>
</ul>
<p>The effects at start/stop may expand from internal system concerns (execution resources, control units, internal interference) into spatial constraints (occupancy, children/pets, time windows, regulations, etc.). The User Label transition is the process of accepting this expansion not as “additional rules,” but as “additional context.”</p>
<p>AI makes judgments only with respect to the user’s intent, context, start/stop effects, and explicitly declared boundaries.</p>
<p>At this moment, the Action transitions into the following state:</p>
<ul>
<li>the Action is still the same</li>
<li>the physical effect is still the same</li>
<li>but <strong>intent (WHY) and context (WHERE)</strong> become user-owned</li>
</ul>
<p><strong>11.3 Reassignment of Questions</strong></p>
<p>The nine questions do not decrease.<br />
Instead, who must answer them changes.</p>
<p>At the moment User Label is declared, AI must:</p>
<ul>
<li>keep the questions already answered by the manufacturer</li>
<li>continue to refresh the answers provided by the observation layer through observation</li>
<li>identify the questions that still have no answers</li>
</ul>
<p>And return those questions to the user.</p>
<p><strong>11.4 Question-Based Interaction</strong></p>
<p>When a User Label is set, AI must at minimum confirm the following:</p>
<ul>
<li>In what situations is this Action allowed?</li>
<li>When should it start?</li>
<li>When should it stop?</li>
<li>When this Action starts or stops, are there additional impacts or constraints that must be reviewed?</li>
<li>What is “enough”? (if goal criteria were not predefined)</li>
</ul>
<p>These are the items among the nine questions that require user answers.</p>
<p>AI does not fill them by inference.<br />
AI must return them as questions.</p>
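<p>As a minimal sketch of this interaction (the question keys, ownership set, and example values are our own shorthand), the AI keeps the manufacturer's answers, keeps refreshing observation, and returns only the user-owned gaps:</p>
<pre><code class="lang-python"># Sketch: when a User Label is declared, manufacturer answers are kept and only the
# user-owned questions that are still unanswered are returned to the user.
USER_OWNED = {"Q1_intent", "Q4_context", "Q8_start_impact", "Q9_stop_impact"}


def questions_for_user(answers: dict) -> list:
    """Return user-owned questions with no answer (an explicit null counts as an answer)."""
    return sorted(q for q in USER_OWNED if q not in answers)


if __name__ == "__main__":
    after_user_label = {
        "Q1_intent": "warm the kitchen before I arrive",   # user-declared intent
        "Q2_effect": {"HardwareAnchor": 21},                # kept from the manufacturer
        "Q3_boundary": [{"Type": "limit", "Value": "max-continuous-10min"}],
        "Q7_time_limit": {"MaxDurationSec": 600},
        # Q4, Q8, Q9 not yet answered by the user
    }
    print("ask the user:", questions_for_user(after_user_label))
</code></pre>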
<p><strong>12. Final Closing Statement</strong></p>
<p>This document does not propose a structure in which AI judges by itself.</p>
<p>This document proposes a structure in which:</p>
<ul>
<li>the manufacturer first fixes what it can responsibly own</li>
<li>the user is helped to express their intent clearly</li>
<li>AI fills the gap <strong>with questions</strong>, not imagination</li>
</ul>
<p>As a result, AI does not imagine—it questions, and enables responsible action.</p>
<p><strong>13. Licensing and Copyright</strong></p>
<p><strong>13.1 Licensing & Usage</strong></p>
<p>This specification is freely open, without restriction, to individual developers, academic/educational/research institutions, non-profit organizations, and early-stage companies and small teams.</p>
<p>However, for large commercial organizations that adopt this specification as the core norm of AI execution judgment and provide it to many users or third parties, a separate license agreement with the copyright holder is required, including:</p>
<ul>
<li>operators of hyperscale AI platforms</li>
<li>large mobility and robotics companies</li>
<li>large-scale financial systems that induce financial state changes</li>
<li>organizations that dominate OS, SDK, and hardware ecosystems</li>
</ul>
<p><strong>13.2 Copyright Notice</strong></p>
<p>© 2026 AnnaSoft Inc., Republic of Korea</p>
<p>This document is released under CC BY-NC-ND for non-commercial, verbatim sharing.<br />
It may be freely shared and cited with attribution.<br />
Commercial adoption/distribution/derivative specifications are provided under a separate commercial license.</p>