Nexus and Pi Squared came together for a live X Spaces discussion titled “Proof Over Promises: Exploring Verifiable AI.” The event featured Nexus Chief Scientist Jens Groth and Pi Squared founder Grigore Roșu, moderated by Nicholas Harness. Together, they unpacked one of the most urgent questions driving the push for verifiability: As AI systems grow more powerful and autonomous, how can we trust what they do?
The conversation ranged from zero-knowledge proofs (ZKPs) and recursive cryptography to formal verification and next-generation blockchain architectures — all centered on the role of verifiability in the AI stack.
Listen to the X Spaces recording (May 22, 2025)
Why verifiability matters now
AI is rapidly evolving from a passive tool to an active agent: writing code, making financial decisions, and guiding critical systems in everything from healthcare to transportation. But with increased autonomy comes heightened risk — and growing demand for accountability.
“Verification is sometimes easier than creation,” said Jens. “That’s what gives verifiability its leverage.”
Today, proving that a model inference or a transaction was correctly computed often comes with heavy computational cost. But the speakers emphasized that advances in recursive proofs and ZK-based rollups are making it economically viable to embed verifiability into real-world systems — especially when it comes to onchain computation.
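Jens's point about verification being cheaper than creation is easy to see in miniature. The sketch below is our own toy illustration, not a zero-knowledge proof: factoring a semiprime takes on the order of a million trial divisions, while checking a claimed factorization needs only one multiplication and two small primality tests.

```python
# Toy illustration (ours, not from the discussion) of the verify-vs-create
# asymmetry: recovering the factors of a semiprime is expensive, but checking
# a claimed factorization is cheap.

def is_prime(k: int) -> bool:
    """Deterministic primality check by trial division up to sqrt(k)."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def factor_semiprime(n: int) -> tuple:
    """The 'creator' does the hard work: search for a nontrivial factor."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n has no nontrivial factor")

def verify_factorization(n: int, p: int, q: int) -> bool:
    """The 'verifier' only checks the claim, which is far cheaper."""
    return p * q == n and is_prime(p) and is_prime(q)

n = 1_000_003 * 1_000_033             # a semiprime whose factors we pretend not to know
p, q = factor_semiprime(n)            # expensive: roughly a million trial divisions
assert verify_factorization(n, p, q)  # cheap: one multiply, two small primality checks
```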
The role of AI in verifying itself
Grigore, whose work spans formal semantics and blockchain verification, proposed a layered model for AI accountability:
- Mathematical proofs define correctness based on formal logic and semantics.
- Cryptographic proofs ensure that those mathematical proofs are valid, efficiently checkable, and tamper-proof.
In Grigore’s view, this creates a powerful feedback loop: AI systems should not only produce results, but also explain — in verifiable terms — how those results were derived. And AI itself can play a key role in constructing those proofs, automating the laborious process of formal verification.
“AI can help itself prove what it does,” Grigore noted. “And we, as humans, become the beneficiaries.”
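To make the “mathematical proofs” layer concrete, here is a minimal Lean 4 sketch of our own (not something shown in the discussion): two small arithmetic facts whose proofs are checked step by step by the proof assistant. Writing proofs like this at scale is exactly the laborious work Grigore suggests AI can help automate.

```lean
-- A minimal sketch of a machine-checked proof (Lean 4, no extra libraries).
-- The proof checker verifies every step; nothing is taken on trust.

-- `n + 0 = n` holds by definition of addition on the naturals.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- `0 + n = n` needs a short induction, which the checker validates.
theorem zero_add_left (n : Nat) : 0 + n = n := by
  induction n with
  | zero      => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```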
Infrastructure for a verifiable future
A recurring theme in the conversation was infrastructure. As Jens put it, the current blockchain paradigm — serialized transactions, limited throughput — isn’t built for the multi-agent, high-frequency world that AI is creating.
Both Nexus and Pi Squared are working to address this gap:
- Nexus recently launched its Verifiable AI Lab, with research focused on both verifying AI outputs and using AI to improve proof systems.
- Pi Squared is preparing to launch a DevNet based on its “FastSet” protocol — a system for settling any verifiable claim without requiring a global transaction order.
“Most current chains are too slow, too linear,” Grigore said. “Agents should be able to settle state and exchange proofs independently — we need new primitives.”
This points to a broader shift: moving from blockchains as ledgers to blockchains as verifiability layers — trust infrastructure for an AI-native Internet.
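To illustrate the kind of primitive Grigore is pointing at, here is a deliberately simplified Python sketch of our own. It is not FastSet and borrows none of its details; it only shows the underlying intuition that claims touching independent state, each carrying its own validity check, can be verified in any order (or in parallel) and still settle to the same result.

```python
# Toy sketch (ours, not Pi Squared's FastSet protocol): settling independent,
# verifiable claims without imposing a global transaction order.

from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    account: str   # the only piece of state this claim touches
    amount: int    # hypothetical payload: a balance increment
    proof: bool    # stand-in for a real cryptographic proof of validity

def verify(claim: Claim) -> bool:
    """Stand-in for proof checking; a real system would verify signatures or ZK proofs."""
    return claim.proof and claim.amount > 0

def settle(claims: list) -> dict:
    """Settle all valid claims. Because each claim touches only its own account,
    verification can run in any order (here: in parallel) and the final state
    is the same regardless of sequencing."""
    balances = {}
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(verify, claims))
    for claim, ok in zip(claims, results):
        if ok:
            balances[claim.account] = balances.get(claim.account, 0) + claim.amount
    return balances

claims = [Claim("alice", 5, True), Claim("bob", 3, True), Claim("carol", 7, False)]
print(settle(claims))  # {'alice': 5, 'bob': 3}; no global ordering was ever needed
```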
Toward a verifiable AI stack
The speakers concluded with a look ahead. In the short term, they agreed, the industry must:
- Rethink blockchain scalability with agents and verifiability in mind
- Develop AI-native tooling for proof generation and validation
- Align incentives for correctness, not just performance
Longer term, the vision gets more ambitious: a world where AI systems not only act autonomously, but explain themselves — formally, rigorously, and provably. From financial agents to self-driving systems, this ability to audit behavior post hoc (or in real time) will be critical.
“If an AI makes a decision that affects your life,” said Jens, “you should be able to verify that the right thing happened — cryptographically, mathematically, provably.”
Stay connected and get access to information like this in real time by following Nexus on X.