The Road to Verifiable AI: A Conversation with Jens Groth and QuillAI

In a recent X Space conversation, Nexus Chief Scientist Jens Groth and QuillAI Network CTO Joey explored one of the most urgent questions in tech today: How do we ensure that AI systems can be trusted — by design, not by assumption?

Listen to the recording of this X Space.

As AI agents become increasingly capable and autonomous, they’re also becoming less transparent. These systems make real-world decisions, interact with financial infrastructure, and process sensitive data, often with minimal visibility into their inner workings.

As Joey put it, “AI agents make a ton of calls… we need proofs to ensure the agent isn’t using our data for any other purpose.” That’s the core challenge: without some form of cryptographic accountability, there’s no way to independently verify what an AI agent is actually doing.

Jens, a pioneer in zero-knowledge cryptography, sees this not just as a technical issue, but as a foundational problem for the next era of computing.

Verifiability, he said, is becoming a prerequisite for trust. “We can’t just evaluate models experimentally and say they’re safe — we need rigorous definitions that prove they’re doing what they claim.” Whether it’s compliance with data privacy regulations like GDPR or defending against malicious fine-tuning, the ability to verify behavior at every layer of the stack is critical.

This is where blockchains enter the picture — not just as ledgers, but as programmable infrastructure for trust. Jens noted that as AI systems reduce the cognitive overhead of interacting with digital systems, usage will skyrocket.

That surge in autonomous behavior demands tamper-proof accountability — and blockchains offer a natural foundation. “If we make the cognitive overhead less, demand simply goes up… L1s give us digital trust anchors in a post-AI world.”

For Joey, the implications are already practical. QuillAI is building a network where AI agents can transact, verify each other’s behavior, and compete economically, all backed by provable guarantees. This requires an entirely new kind of base layer: one that integrates compute, data access, and proof generation natively. In that world, every agent call — whether it’s fetching a dataset, executing a smart contract, or choosing a model — can be verified independently by the network itself.
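QuillAI’s base layer wasn’t described in technical detail during the conversation, but the accountability shape Joey is pointing at (every agent call leaving an independently checkable trace) can be sketched in a few lines. The hypothetical Python below chains each call record into a hash chain, so anyone holding the log can detect tampering after the fact; a production network would add signatures and proofs rather than rely on a bare chain. All names here (AgentCallLog, verify) are illustrative, not QuillAI APIs.

```python
import hashlib
import json


class AgentCallLog:
    """Hypothetical sketch, not a QuillAI API: a hash-chained record of agent
    calls. Anyone holding the records can recompute the chain and detect
    tampering; a real network would add signatures or zero-knowledge proofs."""

    def __init__(self):
        self.records = []
        self.head = "0" * 64  # genesis value for the chain

    def append(self, call_type: str, payload: dict) -> str:
        record = {
            "prev": self.head,  # binds this record to everything before it
            "call_type": call_type,  # e.g. "fetch_dataset", "invoke_model"
            "payload": json.dumps(payload, sort_keys=True),
        }
        self.head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return self.head


def verify(records: list, claimed_head: str) -> bool:
    """Recompute the chain from scratch; editing any record breaks the head."""
    head = "0" * 64
    for record in records:
        if record["prev"] != head:
            return False
        head = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return head == claimed_head


log = AgentCallLog()
log.append("fetch_dataset", {"uri": "ipfs://example"})
head = log.append("invoke_model", {"model_id": "m-1", "output_hash": "abc123"})
assert verify(log.records, head)  # flip any field above and this fails
```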

Nexus is tackling similar challenges through its Verifiable AI Lab. One line of investigation is zk-MCP, a proof system that lets agents generate attestations of external model evaluations. In other words, agents can now prove not just what decision they made, but that the decision was based on a specific model context — an essential step toward trustworthy delegation. The Nexus team is also using AI itself to generate zero-knowledge constraints more efficiently, further accelerating the path toward scalable verifiable inference.
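The mechanics of zk-MCP weren’t spelled out in the conversation, but the binding property it targets is easy to illustrate. In the hypothetical Python sketch below, an agent commits to a (model ID, context, decision) triple with a plain hash commitment; a real system would replace the commitment opening with a zero-knowledge proof so the context itself could stay private. All names and encoding choices here are ours, for illustration only.

```python
import hashlib
import secrets

# Illustrative only: zk-MCP's actual construction isn't described in the
# conversation. A plain hash commitment shows the binding property (the
# decision is tied to one specific model and context); a zero-knowledge proof
# would convince a verifier of the same statement while keeping the context
# private instead of revealing it at opening time.


def commit(model_id: str, context: bytes, decision: str) -> tuple[str, bytes]:
    nonce = secrets.token_bytes(32)  # randomness that hides the committed values
    # A real scheme would use an unambiguous encoding; "|" joining is a sketch.
    preimage = b"|".join([model_id.encode(), context, decision.encode(), nonce])
    return hashlib.sha256(preimage).hexdigest(), nonce


def open_commitment(c: str, model_id: str, context: bytes,
                    decision: str, nonce: bytes) -> bool:
    preimage = b"|".join([model_id.encode(), context, decision.encode(), nonce])
    return hashlib.sha256(preimage).hexdigest() == c


# The agent publishes the commitment alongside its decision...
c, nonce = commit("example-model-v2", b"tool outputs + retrieved docs", "approve")
# ...and later opens it so an auditor can check the decision's provenance.
assert open_commitment(c, "example-model-v2",
                       b"tool outputs + retrieved docs", "approve", nonce)
```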

Meanwhile, QuillAI is operationalizing these ideas with tools like Guardrails, an adversarial policy testing framework, and BackXBT, a Twitter-native agent that flags scam tokens and audits smart contracts in real time. These early applications point to a broader future where AI agents act as verifiable, composable primitives in open networks.

For all the excitement around scaling models and improving performance, the conversation served as a reminder: without verification, AI is a black box. But with the right cryptographic scaffolding — and the right coordination between builders — we can unlock a new era of agency, autonomy, and accountability.

The future won’t just be powered by AI — it will be shaped by how deeply we can trust it. And that trust starts with proofs.

Connect with us on X and Discord.