A technical deep dive into the architecture that makes EPI suitable for trustworthy AI workflows, offline review, and portable evidence exchange.
The heart of EPI is its ability to capture workflow execution into a portable artifact that can be inspected later. Earlier releases introduced explicit capture, and v2.8.7 extends that foundation with stronger agent-first review flows, native OS opening, and deeper framework integrations.
EPI works best when recording is explicit. The record(...) context manager, agent_run(...) helpers, and wrapper clients such as wrap_openai() capture inputs, tool activity, outputs, and decisions at the points where you choose to record. That keeps the product predictable and makes the resulting evidence easier to review.
Trust is not achieved by logs alone. EPI computes hashes for the sealed files in the artifact and then signs the manifest with Ed25519 (via cryptography.hazmat.primitives.asymmetric.ed25519). The manifest is canonicalized before signing, so the same artifact data produces the same verification result across platforms.
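A minimal sketch of this seal-and-sign flow, assuming the cryptography package; the manifest layout and helper name below are illustrative, not EPI's actual format:

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def seal_and_sign(files, key):
    """Hash each sealed file, canonicalize the manifest, and sign it."""
    manifest = {
        "files": {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}
    }
    # Canonical JSON: sorted keys, no incidental whitespace, so the same
    # manifest always serializes to identical bytes on every platform.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
    return manifest, key.sign(canonical)


key = Ed25519PrivateKey.generate()
manifest, signature = seal_and_sign({"output.txt": b"model response"}, key)

# Verification recomputes the canonical bytes and checks the signature;
# verify() raises InvalidSignature if any byte of the manifest changed.
canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
key.public_key().verify(signature, canonical)  # no exception -> manifest intact
```

Because verification works from the canonical bytes, any edit to a sealed file changes its hash, which changes the manifest, which invalidates the signature.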
A single callback can capture LLM calls across 100+ providers: OpenAI, Anthropic, Cohere, Mistral, Azure, Bedrock, and more. This gives teams a practical way to generate evidence without rewriting each provider integration from scratch:
```python
import litellm
from epi_recorder.integrations.litellm import EPICallback

litellm.callbacks = [EPICallback()]  # That's it

response = litellm.completion(model="gpt-4", messages=[...])
# Every call -> signed .epi evidence
```
The EPICallbackHandler captures LLM calls, tool invocations, chain steps, retriever queries, and agent decisions in one handler:
```python
from langchain_openai import ChatOpenAI
from epi_recorder.integrations.langchain import EPICallbackHandler

llm = ChatOpenAI(model="gpt-4", callbacks=[EPICallbackHandler()])
result = llm.invoke("Analyze this contract...")
# Captures: LLM, tools, chains, retrievers, agents
```
A test integration (enabled with the --epi flag) generates signed .epi evidence per test. This is useful when teams want CI/CD runs to leave behind portable proof instead of only ephemeral logs.
Native stream=True capture lets the caller receive streaming output while EPI records the assembled response and token usage into the artifact.
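The pattern behind this kind of capture can be sketched as a generator that passes chunks through to the caller untouched while assembling the full response for the evidence record (a simplified illustration, not EPI's actual implementation):

```python
def record_stream(chunks, evidence):
    """Yield each streamed chunk to the caller unchanged while
    assembling the complete response for the evidence record."""
    parts = []
    for chunk in chunks:
        parts.append(chunk)
        yield chunk  # the caller still sees the stream in real time
    # Only once the stream is exhausted is the full response known.
    evidence["response"] = "".join(parts)
    evidence["chunk_count"] = len(parts)


evidence = {}
streamed = list(record_stream(iter(["The ", "answer ", "is 42."]), evidence))
# streamed  -> ["The ", "answer ", "is 42."]
# evidence  -> {"response": "The answer is 42.", "chunk_count": 3}
```

The key property is that recording happens as a side effect of iteration, so the caller's streaming experience is unchanged and the artifact still ends up with the assembled response.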
EPISpanExporter bridges OpenTelemetry spans into signed .epi files. That makes EPI easier to adopt inside systems that already use OTel for tracing:
```python
from epi_recorder.integrations.opentelemetry import setup_epi_tracing

setup_epi_tracing(service_name="my-agent")
# All OTel spans -> signed .epi files automatically
```
Auto-recording can also be installed globally through sitecustomize.py with a single command: epi install --global. The install is idempotent, and epi uninstall --global removes it cleanly.
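Conceptually, a global hook of this kind behaves like a sitecustomize.py along the following lines; the package import and activation call shown are assumptions for illustration, not EPI's actual installed file:

```python
# sitecustomize.py -- executed automatically at Python interpreter startup.
try:
    import epi_recorder          # package name assumed for illustration
    epi_recorder.install()       # hypothetical activation hook
except Exception:
    # A global auto-recording hook must never break the interpreter
    # when EPI is missing or misconfigured, so failures are swallowed.
    pass
```

Because sitecustomize.py runs before any user code, every Python process on the machine picks up recording without changing application source, and removing the file disables it just as cleanly.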
The verifier is designed to run in the browser so evidence can be inspected without sending the artifact to a server. The page uses browser-side ZIP parsing and signature tooling to inspect the package structure, hashes, and signature metadata locally.
Rendering an embedded viewer.html inside the verifier carries XSS risks. EPI handles this by placing the embedded viewer inside a sandboxed Blob iframe:
```javascript
// 1. Create a Blob from the untrusted content
const blob = new Blob([htmlContent], { type: 'text/html' });

// 2. Enforce strict sandboxing (no 'allow-same-origin')
vizFrame.setAttribute('sandbox', 'allow-scripts allow-popups allow-forms');

// 3. Load it through a unique, isolated blob: URL
vizFrame.src = URL.createObjectURL(blob);
```
This prevents the embedded viewer from accessing the parent verifier's cookies, local storage, or DOM. In practice, it lets the website expose artifact inspection without turning the embedded viewer into a trusted same-origin page.
At pipeline scale, the epi-action GitHub Action and CLI commands make it possible to generate evidence automatically and check it with epi verify.
EPI is designed so one artifact can carry the workflow record, policy context, analysis, review state, and trust signals together. The recorder captures evidence, the artifact sealing step makes tampering visible, and the viewer makes the result inspectable without requiring a custom backend.