Why Logs Aren't Proof

The difference between recording what happened and proving what was enforced

Attested Intelligence | March 15, 2026 | 5 min read

The Logging Assumption

Most organizations assume that logging is sufficient for AI governance. The reasoning is intuitive: if every event is captured, then compliance can be demonstrated by reviewing what happened. Logs feed into monitoring tools. Monitoring tools trigger alerts. SIEM platforms aggregate signals across systems. Security teams investigate anomalies.

This infrastructure is necessary. It is not, however, proof.

Logging tells you what a system reported about itself. It does not tell you whether the system was constrained to behave that way. It does not tell you whether the log is complete. And it cannot tell you whether the log was altered between the time it was written and the time it was read. For conventional software, this distinction is often academic. For autonomous AI agents making consequential decisions at machine speed, it is the difference between governance and the appearance of governance.

Three Problems with Logs

Logs are mutable.

A log file is data stored on a filesystem. Anyone with sufficient system access can alter it, delete it, or selectively omit entries. Database-backed logs are no different: an administrator with write access can modify records without leaving a trace in the log itself. In incident response, the first question a forensic examiner asks is whether the logs can be trusted. The answer, absent cryptographic controls, is always “conditionally.” When the system under investigation controls its own audit trail, that condition is rarely satisfied.

Logs are passive.

A log records what happened. It does not prevent what should not happen. A log entry that says “tool invocation denied” tells you the system reported a denial. It does not prove that a mandatory enforcement boundary existed, that the boundary was active at the time of the request, or that the denial could not have been bypassed. Passive recording and active enforcement are fundamentally different capabilities. Conflating them is the root of the problem.

Logs are producer-controlled.

The system generating the log controls what goes into it. If an AI agent has write access to its own log infrastructure, it determines the completeness and accuracy of the record. Even when logging is handled by a separate service, the agent controls what data it sends to that service. A compromised or misaligned agent can report compliant behavior while acting otherwise. Producer-controlled evidence is not independently verifiable evidence. It is a self-assessment.

What Proof Requires

Proof, in the governance context, requires three properties that logs cannot provide.

Immutability. The record cannot be altered after creation. Not by administrators, not by the system that produced it, not by anyone. Alteration must be detectable by any party with access to the record, without requiring trust in the producing system.

Enforcement. The system must actively prevent unauthorized behavior, not merely record it. A governance mechanism that observes violations without blocking them is a monitoring tool, not an enforcement boundary. Proof of governance requires proof that the governed system could not have acted outside its authorized parameters.

Independent verifiability. Any third party must be able to validate the governance claim without trusting the producer, without network access to the producing system, and without proprietary tools. If verification requires cooperation from the system being audited, it is not verification. It is a trust relationship.

Cryptographic Runtime Enforcement

This is what Attested Governance Artifacts (AGA) provide. The architecture replaces passive logging with active, cryptographically enforced governance that produces independently verifiable proof.

Policy Artifacts are sealed, cryptographically signed objects that encode the complete set of authorized behaviors for a governed agent. The signature binds every field: agent identity, enforcement parameters, measurement cadence, and the sealed hash representing the agent's attested known-good state. Modification of any field invalidates the signature. The agent holds no cryptographic keys and cannot alter its own governance parameters.
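To make "the signature binds every field" concrete, here is a minimal Python sketch of the binding step: the artifact is canonically serialized and hashed, so changing any field yields a different digest. The field names are hypothetical, and the final step — the issuer (never the agent) signing this digest with Ed25519 — is elided.

```python
import hashlib
import json

def artifact_digest(artifact: dict) -> str:
    """Digest over a canonical serialization, so every field is bound.

    In the real architecture this digest would be Ed25519-signed by the
    issuer; this sketch shows only the binding step.
    """
    canonical = json.dumps(artifact, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical artifact fields, for illustration only.
policy = {
    "agent_id": "agent-7f3a",
    "measurement_cadence_s": 60,
    "sealed_state_hash": "9c1185a5c5e9fc54612808977ee8f548",
    "effective_until": "2026-12-31T23:59:59+00:00",
}

original = artifact_digest(policy)
policy["measurement_cadence_s"] = 3600   # attacker loosens the cadence
tampered = artifact_digest(policy)
print(original != tampered)              # any field change breaks the binding
```

Because the serialization is canonical (sorted keys, fixed separators), two honest parties computing the digest from the same fields always agree, and no field can be swapped out without detection.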

The Portal, a mandatory runtime boundary, parses and enforces the Policy Artifact before and during agent execution. This is not advisory. If the artifact's signature is invalid, if the effective period has expired, or if a runtime measurement does not match the sealed reference, execution is blocked. Fail-closed semantics. The default state is denial.
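Fail-closed admission logic can be sketched in a few lines. This is an illustration, not the Portal's actual schema: the field names are assumptions, and `signature_valid` stands in for real Ed25519 verification. The point is structural — every guard must pass, and every error path resolves to denial.

```python
from datetime import datetime, timezone

def portal_admits(artifact: dict, runtime_measurement: str,
                  signature_valid: bool, now: datetime) -> bool:
    """Fail-closed admission check: all guards must pass, else deny.

    Field names are illustrative; `signature_valid` stands in for
    real Ed25519 signature verification of the artifact.
    """
    try:
        if not signature_valid:
            return False
        expiry = datetime.fromisoformat(artifact["effective_until"])
        if now >= expiry:
            return False
        if runtime_measurement != artifact["sealed_state_hash"]:
            return False
        return True
    except (KeyError, ValueError):
        return False          # malformed artifact: the default is denial

artifact = {
    "sealed_state_hash": "abc123",
    "effective_until": "2026-12-31T23:59:59+00:00",
}
now = datetime(2026, 6, 1, tzinfo=timezone.utc)
print(portal_admits(artifact, "abc123", True, now))    # True: all guards pass
print(portal_admits(artifact, "abc123", False, now))   # False: bad signature
print(portal_admits(artifact, "zzz999", True, now))    # False: drifted state
```

Note that the exception handler returns `False` rather than raising: an artifact the Portal cannot parse is treated identically to an invalid one.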

Every enforcement action produces a signed Enforcement Receipt appended to the Continuity Chain. Each receipt is hash-linked to its predecessor, forming a tamper-evident sequence. Modification of any single receipt invalidates every receipt that follows. This is not a log. It is a cryptographic proof structure where integrity is a mathematical property, not an administrative control.
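The hash-linking works as follows (a minimal sketch; the per-receipt Ed25519 signatures are omitted and the receipt fields are illustrative): each receipt commits to the SHA-256 hash of its predecessor, so editing any receipt breaks every link after it.

```python
import hashlib
import json

def receipt_hash(receipt: dict) -> str:
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def append_receipt(chain: list, action: str) -> None:
    """Each new receipt commits to its predecessor's hash."""
    prev = receipt_hash(chain[-1]) if chain else "0" * 64
    chain.append({"seq": len(chain), "action": action, "prev": prev})

def chain_intact(chain: list) -> bool:
    """Recompute every link; one altered receipt breaks all that follow."""
    prev = "0" * 64
    for receipt in chain:
        if receipt["prev"] != prev:
            return False
        prev = receipt_hash(receipt)
    return True

chain = []
for action in ["tool_call_allowed", "tool_call_denied", "measurement_ok"]:
    append_receipt(chain, action)

print(chain_intact(chain))                  # True: untampered
chain[1]["action"] = "tool_call_allowed"    # retroactive edit of a denial
print(chain_intact(chain))                  # False: the next link no longer matches
```

The integrity check requires no trust in whoever stored the chain; any holder of the receipts can recompute the links.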

Evidence Bundles package the Policy Artifact, signed Enforcement Receipts, and Merkle inclusion proofs into a portable verification unit. Any party with the bundle and a public key can verify the complete governance record offline using standard Ed25519 and SHA-256. No network callback. No proprietary tooling. No trust in the producing system.
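Offline verification of a Merkle inclusion proof needs nothing beyond SHA-256. The sketch below assumes a simple proof format — a list of (sibling hash, is-left) pairs — which is an illustration, not the actual bundle layout.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    """Walk the proof path from leaf to root.

    `proof` is a list of (sibling_hash, sibling_is_left) pairs;
    this format is illustrative, not the product's bundle layout.
    """
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        pair = sibling + node if sibling_is_left else node + sibling
        node = h(pair)
    return node == root

# Build a four-leaf tree by hand to produce a proof for leaf 2.
leaves = [h(f"receipt-{i}".encode()) for i in range(4)]
n01 = h(leaves[0] + leaves[1])
n23 = h(leaves[2] + leaves[3])
root = h(n01 + n23)

proof_for_leaf2 = [(leaves[3], False), (n01, True)]
print(verify_inclusion(b"receipt-2", proof_for_leaf2, root))   # True
print(verify_inclusion(b"receipt-9", proof_for_leaf2, root))   # False
```

The verifier needs only the leaf, the proof, and the root: log(n) hashes to check membership in a tree of n receipts, with no network access and no cooperation from the producer.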

The Practical Gap

Regulators are beginning to require provable governance, and the gap between what logs provide and what regulations demand is widening. The EU AI Act requires high-risk AI systems to maintain logs that “allow the tracing back of the system's operation.” But traceability without tamper evidence is not sufficient when the regulated party controls the logging infrastructure. The NIST AI RMF prescribes Measure and Manage functions but does not specify how to make those measurements independently verifiable. Executive orders on AI safety mandate testing and evaluation but not continuous post-deployment proof of governance compliance.

Each of these frameworks points toward a requirement that conventional logging cannot satisfy: provable, independently verifiable evidence that an AI system operated within its authorized boundaries for the entire duration of its deployment. Not a snapshot. Not a periodic audit. Continuous proof.

The question is not whether organizations will need cryptographic proof of AI governance. The regulatory trajectory makes that inevitable. The question is when. Organizations that wait for explicit mandates will find themselves retrofitting proof capabilities onto architectures that were never designed for them. Organizations that build enforcement into their governance architecture now will already have the evidence their regulators, auditors, and counterparties will eventually require.

From Recording to Proving

Logs are necessary infrastructure. They will remain part of every security and compliance program. But they are not proof, and treating them as proof creates a governance gap that grows wider as AI systems become more autonomous, more consequential, and more difficult to audit after the fact.

Cryptographic runtime enforcement closes that gap. Sealed policy artifacts define what is permitted. A mandatory enforcement boundary ensures compliance. Signed, hash-linked receipts prove it happened. Portable evidence bundles let any third party verify the claim without trusting the system that made it.

Explore the technical architecture, try the online verifier, or review the open source reference implementation.

Attested Intelligence Holdings LLC

© 2026 Attested Intelligence™

Cryptographic runtime enforcement for AI systems.