A new open specification called AGEF (AI Agent Evidence Format) dropped on Hacker News this week, and if you're building anything serious with AI agents, you should pay attention. The project defines a standard way to represent agent sessions as portable, tamper-evident evidence bundles that can be verified offline, transferred between systems, and reviewed by completely independent tooling.
How AGEF Works
The format relies on two core concepts: content-addressed objects and Merkle-linked events. Content addressing means every piece of data gets a unique hash derived from its contents; if anything changes, the address changes, making tampering immediately obvious. Merkle linking chains these objects together, much as blockchains chain blocks for integrity verification. The result is evidence you can hand to a skeptic with zero trust required: they can verify it without needing your infrastructure or your blessing.
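The two ideas combine neatly: hash each object to get its address, and have each event commit to its predecessor's address. Here's a minimal sketch of that pattern in Python. To be clear, the function names, event shapes, and canonicalization choices below are my own illustration, not AGEF's actual wire format or hashing rules, which the spec defines itself.

```python
import hashlib
import json

def address(obj: dict) -> str:
    """Content address: SHA-256 over a canonical JSON encoding.
    (Illustrative only; AGEF specifies its own hashing and
    canonicalization rules.)"""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def append_event(chain: list[dict], payload: dict) -> list[dict]:
    """Merkle link: each event commits to its predecessor's address."""
    prev = address(chain[-1]) if chain else None
    return chain + [{"prev": prev, "payload": payload}]

def chain_intact(chain: list[dict]) -> bool:
    """Re-derive every link; editing any earlier event breaks a link."""
    prev = None
    for event in chain:
        if event["prev"] != prev:
            return False
        prev = address(event)
    return True

events: list[dict] = []
events = append_event(events, {"type": "tool_call", "name": "read_file"})
events = append_event(events, {"type": "tool_result", "ok": True})
assert chain_intact(events)

events[0]["payload"]["name"] = "delete_file"  # tamper with history...
assert not chain_intact(events)               # ...and verification fails
```

Note that the verifier needs nothing beyond the chain itself and the hash function: that's the "no infrastructure, no blessing" property in miniature.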
Conformance Profiles
AGEF v0.1 defines two paths for implementers. The Bundle Profile covers producing and consuming AGEF bundles according to the full specification (Sections 5-14). The Substrate Profile is lighter: an implementation maintains compatible session journals but doesn't necessarily emit bundles directly, with an export pathway handling the conversion when needed. This gives implementers flexibility without fracturing the ecosystem into incompatible islands.
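The substrate idea can be sketched as an append-only journal that only becomes a bundle at export time. Everything below is hypothetical: `Journal`, `export_bundle`, and the bundle layout are names I've invented to illustrate the concept, not types or formats from the spec.

```python
import hashlib
import json

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Journal:
    """Append-only session journal, as a Substrate Profile
    implementation might keep internally (hypothetical)."""
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, entry: dict) -> None:
        self.entries.append(entry)

def export_bundle(journal: Journal) -> dict:
    """Hypothetical export pathway: package journal entries as
    content-addressed objects plus a manifest of their addresses."""
    objects: dict[str, bytes] = {}
    for entry in journal.entries:
        blob = json.dumps(entry, sort_keys=True).encode()
        objects[digest(blob)] = blob
    return {"manifest": sorted(objects), "objects": objects}

journal = Journal()
journal.record({"type": "session_start"})
journal.record({"type": "session_end"})
bundle = export_bundle(journal)
```

The design point is that the journal stays cheap to write during a live session, while the expensive, spec-conformant packaging happens once, at the boundary where evidence leaves the system.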
Reference Implementation
The reference implementation comes through Akmon v2.0.0 and later, which ships bundle export and import functionality alongside journaling in akmon-journal. The project is currently at v0.1.1 (pre-stable), so treat it as early-stage technology, but the architecture choices signal this is being built by people who understand how evidence integrity actually works.
Licensing and Governance
The specification text falls under CC BY 4.0, while implementation code uses Apache-2.0—standard open-source terms that shouldn't create friction for adoption. The GOVERNANCE.md file outlines the evolution path, which is worth reviewing if you're considering building on this seriously rather than just experimenting.
Key Takeaways
- AGEF provides cryptographic integrity guarantees without requiring a centralized authority to verify them
- Content-addressed storage means evidence bundles are self-verifying by construction
- Two conformance profiles allow lightweight substrate implementations with optional full bundle export
- Early stage (v0.1.1) but the architecture is solid and worth watching as AI agent accountability becomes more critical
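The "self-verifying by construction" claim deserves a concrete illustration. A consumer who receives a bundle can check it entirely offline by recomputing each object's hash and confirming the manifest is complete. The bundle layout here is my own illustrative stand-in, not the AGEF format:

```python
import hashlib
import json

def verify(bundle: dict) -> bool:
    """Offline check: every object must hash to the address it is
    filed under, and every manifest address must be present.
    (Bundle layout is illustrative, not the AGEF wire format.)"""
    for addr, blob in bundle["objects"].items():
        if hashlib.sha256(blob).hexdigest() != addr:
            return False
    return all(a in bundle["objects"] for a in bundle["manifest"])

blob = json.dumps({"type": "tool_call"}).encode()
addr = hashlib.sha256(blob).hexdigest()

good = {"manifest": [addr], "objects": {addr: blob}}
bad = {"manifest": [addr], "objects": {addr: b"tampered"}}

assert verify(good)
assert not verify(bad)
```

No key exchange, no trusted server, no network access: the addresses themselves are the integrity check.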
The Bottom Line
As AI agents start making decisions that matter (executing code, modifying files, interacting with external systems), the ability to produce verifiable evidence of what happened stops being optional. AGEF attacks the problem at the specification level rather than locking you into a vendor's proprietary audit-log format. That's exactly the kind of infrastructure the ecosystem needs right now.