Agentic Verifiability and the Future of Online Interactions

As the Internet becomes increasingly agentic, populated by autonomous systems acting on our behalf, the stakes for verifiability, trust, and alignment rise sharply. At ETHSF, Nexus and Nethermind organized a panel discussion, part of a Verifiable Computation event, exploring the future of online interactions in the era of AI agents.

Panel guests included experts shaping the future of verifiability: Sam Green (Cambrian), Griffin Dunaif (Halliday), and Brian Behlendorf (Linux Foundation). Moderated by Michal Zajac of Nethermind, the conversation ranged from cryptographic provenance to legal personhood for AI.

Watch the panel discussion:

This event was co-hosted by Nexus and Nethermind, sponsored by Halliday, and in partnership with Blockchain Builders Fund and ETHSF.

The challenge of trust in a post-photographic world

How do we trust what we see online when AI can fabricate anything? The panelists largely agreed that cryptographic provenance — using signatures from hardware sensors and public key infrastructure — offers a promising path forward. But it’s not foolproof.

Verifiable Media: Proof not Perception
Verifiable media offers a practical, scalable way to ensure authenticity — not through subjective judgment, but through mathematical certainty.

Learn more about verifiable media.

“Give me a camera that signs its content — I can extract the key and sign anything I want,” warned Sam Green, highlighting that even trusted hardware can be compromised. The real challenge, he added, is not just verification but resilience in the face of inevitable compromise.
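As a rough sketch of what that provenance flow looks like, here is a minimal example of device-side signing and verifier-side checking, assuming an Ed25519 key pair and Python's `cryptography` library. This is an illustration, not a production scheme: standards like C2PA attach signed manifests to the media, and the private key would be held in a secure element rather than in application code.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical device key. In a real camera this would be generated and kept
# inside a secure element; if it can be extracted, the scheme fails exactly
# as Green describes.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()

def sign_capture(image_bytes: bytes) -> bytes:
    """Camera side: sign the image at the moment of capture."""
    return device_key.sign(image_bytes)

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Verifier side: check the signature against the device's public key."""
    try:
        device_pubkey.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

image = b"...raw sensor data..."
sig = sign_capture(image)
print(verify_capture(image, sig))         # True: untouched capture
print(verify_capture(image + b"x", sig))  # False: any edit breaks the proof
```

The sketch also shows why the attack Green describes is so damaging: verification only proves that someone holding the key signed those bytes, so a stolen key lets an attacker sign fabricated content that passes every check.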

Human or agent? Why it matters — and when it doesn’t

When does it matter whether we’re interacting with a human or an AI? The answers were nuanced. In some cases, like gaming or legal contracts, identity is critical. In others, outcomes matter more than origins.

“I don’t care who or what I’m interacting with — as long as I can trust it to behave as expected,” said Green, emphasizing behavioral trust over ontological verification.

Building trustworthy agents: Guardrails, contracts, and blockchain

The discussion turned to agent design. Verifiability is one piece; alignment and oversight are another. Here, the panel made the case for structured constraints — legal, cryptographic, or economic — as essential design tools.

“AI systems won’t be trusted because they’re perfect. They’ll be trusted because they follow organizational rules — like a C-corp of agents,” said Griffin Dunaif, envisioning agentic software as structured bureaucracies, not monolithic AIs.

Verifiable AI: Proof Over Promise
Powerful AI systems are being deployed into critical domains with no clear way to understand, audit, or verify what they're doing.

Learn more about verifiable AI.

Blockchain came up repeatedly as a coordination layer for enforcing those constraints — whether for verifying AI execution, enabling agent-to-agent contracts, or distributing control.

“We’ll see agents entering into legally binding agreements, with smart contracts enforcing outcomes — and blockchains providing the audit trail,” predicted Green.
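For a concrete, if simplified, picture of what such an audit trail could record, here is a hedged sketch in Python with hypothetical agent identities: each agent signs a hash of the agreed terms, and that jointly signed digest is the kind of record a smart contract would store and later check when enforcing the outcome.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical agent keys; in practice they would belong to the agents'
# operators or be provisioned in their runtime environments.
agent_a_key = Ed25519PrivateKey.generate()
agent_b_key = Ed25519PrivateKey.generate()

# Example agreement terms (invented for illustration).
terms = {
    "service": "dataset labeling",
    "price_usd": 250,
    "deadline": "2025-07-01",
}

# Canonical hash of the terms. The digest, not the full terms, is what an
# on-chain record would typically store.
digest = hashlib.sha256(json.dumps(terms, sort_keys=True).encode()).digest()

# Each agent signs the same digest, producing a jointly attributable record.
record = {
    "terms_hash": digest.hex(),
    "sig_agent_a": agent_a_key.sign(digest).hex(),
    "sig_agent_b": agent_b_key.sign(digest).hex(),
}

print(record)
```

A contract holding such a record could, for example, release payment only when both signatures verify against registered public keys, giving the agreement a checkable history without revealing its full contents on-chain.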

On open source, regulation, and the moat mentality

Brian Behlendorf, who helped build the open web and now advises on digital identity and trust frameworks, cautioned against ceding the AI future to centralized actors.

“We won’t trust AI until it runs on local data, under our control, with software we can inspect,” he argued. “Open source isn’t just a preference — it’s a prerequisite for personal AI.”

He also flagged the regulatory shift underway — from doom-driven narratives to capacity-building policies. Still, he warned of “moat-building” by incumbents lobbying for regulation as a competitive advantage.

Looking ahead

As agents take on more responsibility — from content creation to economic decision-making — the need for verifiability becomes not just technical, but societal.

The panel made clear that cryptographic roots, open infrastructure, and institutional safeguards will be key to ensuring these systems serve people — not the other way around.

Find the other talks that were part of the ETHSF Verifiable Computation Event on the Nexus YouTube channel.


