Public Comment on Docket NIST-2025-0035 · Submitted March 4, 2026
A 12-page public comment responding to the NIST CAISI Request for Information on AI agent security. Proposes six policy recommendations grounded in patent-pending cryptographic runtime governance.
View the public docket on regulations.gov.

Mandate sealed reference states for all autonomous AI deployments
Require continuous runtime measurement against sealed baselines
Adopt tiered verification levels (self-attested → portal-enforced → third-party verified)
Mandate offline verifiability for air-gapped and DDIL (denied, disrupted, intermittent, and limited-bandwidth) environments
Standardize artifact formats using existing cryptographic primitives
Require privacy-preserving disclosure for cross-boundary attestation
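The first two recommendations (sealed reference states, continuous measurement against them) can be illustrated with a minimal sketch. This is not the submission's actual mechanism; the policy fields and function names are hypothetical, and a real deployment would use signed artifacts rather than a bare hash:

```python
import hashlib
import json

def seal_reference_state(policy: dict) -> str:
    """Seal a policy artifact: canonicalize to stable JSON, then hash with SHA-256."""
    canonical = json.dumps(policy, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def measure(runtime_state: dict, sealed_digest: str) -> bool:
    """Continuous measurement step: recompute and compare against the sealed baseline."""
    return seal_reference_state(runtime_state) == sealed_digest

# Hypothetical agent policy, sealed at deployment time.
policy = {"agent": "example-agent", "allowed_tools": ["search"], "max_depth": 3}
baseline = seal_reference_state(policy)

assert measure(policy, baseline)                     # unchanged state passes
drifted = {**policy, "allowed_tools": ["search", "exec"]}
assert not measure(drifted, baseline)                # drifted state fails
```

Canonicalization before hashing matters: without a stable key order, two semantically identical policies could produce different digests.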
Analysis of runtime integrity threats in agentic AI systems, including behavioral drift, policy circumvention, and post-hoc evidence fabrication.
Sealed policy artifacts, continuous integrity measurement, and signed enforcement receipts as foundational security practices.
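A signed enforcement receipt can be sketched with a keyed MAC over a canonical decision record. This is an assumption-laden illustration, not the submission's format: the field names are invented, and a production system would likely use asymmetric signatures so verifiers need no shared secret:

```python
import hashlib
import hmac
import json

def issue_receipt(key: bytes, decision: dict) -> dict:
    """Issue a signed receipt binding an enforcement decision to a MAC tag."""
    body = json.dumps(decision, sort_keys=True, separators=(",", ":"))
    tag = hmac.new(key, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"decision": decision, "sig": tag}

def verify_receipt(key: bytes, receipt: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(receipt["decision"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(key, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])

key = b"demo-shared-key"  # illustrative only; not how keys should be managed
r = issue_receipt(key, {"action": "tool_call", "allowed": False, "rule": "no-exec"})
assert verify_receipt(key, r)
r["decision"]["allowed"] = True   # post-hoc tampering with the record
assert not verify_receipt(key, r)
```

The failing final check is the point: a receipt altered after the fact no longer verifies, which is what makes post-hoc evidence fabrication detectable.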
Tiered verification framework progressing from self-attestation through portal enforcement to independent third-party verification.
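The tiered progression is essentially an ordered scale, which a short sketch can make concrete. The tier names follow the summary above; the comparison helper is a hypothetical convenience, not part of the proposed framework:

```python
from enum import IntEnum

class VerificationTier(IntEnum):
    """Ordered verification tiers, lowest assurance first."""
    SELF_ATTESTED = 1
    PORTAL_ENFORCED = 2
    THIRD_PARTY_VERIFIED = 3

def meets_requirement(claimed: VerificationTier, required: VerificationTier) -> bool:
    """A deployment satisfies a requirement if its tier is at least as strong."""
    return claimed >= required

assert meets_requirement(VerificationTier.THIRD_PARTY_VERIFIED,
                         VerificationTier.PORTAL_ENFORCED)
assert not meets_requirement(VerificationTier.SELF_ATTESTED,
                             VerificationTier.PORTAL_ENFORCED)
```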
Portal architecture as a zero-trust enforcement boundary. Runtime governance for cloud, edge, and air-gapped deployments.
Privacy-preserving selective disclosure, FRAND licensing commitment, and alignment with existing NIST framework vocabulary.
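One common building block for privacy-preserving selective disclosure is a salted hash commitment per field: a verifier across a trust boundary learns only the revealed field, while the rest stay hidden behind commitments. A minimal sketch under that assumption (the record fields are invented, and real schemes typically add Merkle structure or zero-knowledge proofs):

```python
import hashlib
import os

def commit_fields(record: dict) -> dict:
    """Commit to each field with a salted SHA-256 hash; salts prevent guessing."""
    commitments = {}
    for key, value in record.items():
        salt = os.urandom(16)
        digest = hashlib.sha256(salt + str(value).encode("utf-8")).hexdigest()
        commitments[key] = {"salt": salt, "digest": digest}
    return commitments

def disclose(record: dict, commitments: dict, field: str) -> dict:
    """Reveal one field (value plus its salt); all other fields stay hidden."""
    return {"field": field, "value": record[field],
            "salt": commitments[field]["salt"]}

def verify_disclosure(disclosure: dict, commitments: dict) -> bool:
    """Check the revealed value against the previously published commitment."""
    digest = hashlib.sha256(
        disclosure["salt"] + str(disclosure["value"]).encode("utf-8")
    ).hexdigest()
    return digest == commitments[disclosure["field"]]["digest"]

record = {"vendor": "acme", "policy_version": "1.4", "deployment_id": "d-017"}
commitments = commit_fields(record)       # published across the boundary
d = disclose(record, commitments, "policy_version")
assert verify_disclosure(d, commitments)  # verifier learns only this one field
```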
The submission includes a commitment to license patent-pending AGA architectural patterns on fair, reasonable, and non-discriminatory (FRAND) terms should any element be incorporated into NIST guidelines or standards.