Public trust in AI is unraveling — and not without reason.
According to a recent Pew Research study, 66% of adults and 70% of AI experts are highly concerned about people getting inaccurate information from AI. More than half worry about biased decision-making. And 57% of the public fears AI could lead to less human connection, eroding trust not just in machines but in each other.
This isn’t a crisis of hype. It’s a crisis of accountability.
From synthetic media to AI-powered hiring, we’re seeing powerful systems deployed into critical domains with no clear way to understand, audit, or verify what they’re doing. We’re being asked to trust AI systems we can’t inspect — and in many cases, not even their creators can fully explain their behavior.
The solution isn’t to slow down AI. It’s to make it verifiable.
Today’s AI models are increasingly capable but stubbornly opaque. We don’t know exactly what data they were trained on. We can’t always reproduce their outputs. And we have no cryptographic guarantees that the model we tested is the same one making real-world decisions.
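To make that last gap concrete, here is a minimal sketch in Python (with hypothetical file names) of how a cryptographic commitment to a model’s weights lets anyone check that the artifact serving real-world decisions is byte-for-byte the one that was tested. It is an illustration of the idea under simplified assumptions, not a description of any particular system.

```python
import hashlib

def model_commitment(weights_path: str) -> str:
    """Compute a SHA-256 commitment over a serialized model artifact."""
    digest = hashlib.sha256()
    with open(weights_path, "rb") as f:
        # Stream in chunks so large checkpoints don't have to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At audit time: publish this value alongside the evaluation report.
audited = model_commitment("audited_model.bin")    # hypothetical file

# At deployment time: anyone can recompute the hash of the serving artifact
# and compare it against the published commitment.
deployed = model_commitment("deployed_model.bin")  # hypothetical file
assert deployed == audited, "Deployed model does not match the audited model"
```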
For years, the answer to this problem has been interpretability — trying to peer inside the model’s “mind” and make sense of its decisions. But interpretability has limits. It’s hard to generalize, harder to scale, and nearly impossible to prove.
What we need is not just explainability, but integrity — systems that can prove what they’re doing and when they’re doing it.
Verifiability in AI is the ability to cryptographically prove claims about a model’s behavior, origin, or outputs — without relying on trust in the developer, the infrastructure, or the process.
This includes:
- Proofs about a model’s origin, such as which training data or training process produced it.
- Proofs about a model’s behavior, such as showing that a specific, identified model is the one making a given decision.
- Proofs about a model’s outputs, such as tying generated content back to the model that produced it.
These systems don’t just assert correctness — they prove it, often using techniques like zero-knowledge proofs and hardware-backed attestations.
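As a toy illustration of proving rather than asserting, the sketch below binds a model commitment, an input, and an output into a signed attestation that a third party can check. An ordinary Ed25519 signature from the Python cryptography package stands in for the hardware-backed attestations and zero-knowledge proofs mentioned above, which provide far stronger guarantees; the data and function names here are illustrative assumptions.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key held by the attesting party. In a real deployment this role would be
# played by secure hardware, or replaced entirely by a zero-knowledge proof.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def attest_inference(model_bytes: bytes, inp: bytes, out: bytes) -> dict:
    """Sign the claim: 'this model, given this input, produced this output'."""
    claim = {
        "model": sha256(model_bytes),
        "input": sha256(inp),
        "output": sha256(out),
    }
    message = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": signing_key.sign(message).hex()}

def verify_attestation(attestation: dict) -> bool:
    """Anyone holding the public key can check the claim was not altered."""
    message = json.dumps(attestation["claim"], sort_keys=True).encode()
    try:
        verify_key.verify(bytes.fromhex(attestation["signature"]), message)
        return True
    except InvalidSignature:
        return False

# Example: attest to one (hypothetical) inference and verify it.
att = attest_inference(b"model-weights", b"loan application #42", b"approved")
print(verify_attestation(att))  # True
```

The design point is that the claim and its evidence travel together: a verifier holding the public key can check the statement without re-running the model or taking the operator’s word for it.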
A few years ago, widespread adoption of verifiable computation was only a theoretical prospect. Today, we’re seeing the verifiability stack begin to materialize.
At Nexus, we’re contributing to that effort.
But this isn’t just about us. A growing ecosystem of researchers and builders is working to make verifiable AI a standard, not an exception.
We’re moving toward a world where you won’t need to trust that a model did what it said — you’ll be able to verify it, cryptographically.
Bias in AI is not new. From hiring algorithms to medical diagnostics, we’ve seen repeated evidence that models can reproduce — and even amplify — existing societal inequities. These are not edge cases; they’re warning signs.
And yet, many efforts to address bias have focused on process: hiring more diverse teams, refining training datasets, or implementing internal review boards. These are important steps — but they rely on institutional trust, not technical guarantees.
Verifiable AI offers a different approach. By embedding proofs into the systems themselves, we gain a new kind of accountability.
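As one sketch of what embedded accountability could look like, the example below keeps a hash-chained log of automated decisions. Each entry is tied to a model commitment and to hashes of the decision’s input and output, so an outside auditor can detect tampering or silent substitution of the model after the fact. The structure and names are illustrative assumptions under simplified conditions, not a description of any production system.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class DecisionLog:
    """Append-only, hash-chained log of model decisions (simplified sketch)."""

    def __init__(self, model_commitment: str):
        self.model_commitment = model_commitment
        self.entries: list[dict] = []

    def record(self, inp: bytes, out: bytes) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "model": self.model_commitment,
            "input": sha256(inp),
            "output": sha256(out),
            "prev": prev,
        }
        entry["entry_hash"] = sha256(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """An auditor recomputes the chain to detect tampering or omissions."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("model", "input", "output", "prev")}
            recomputed = sha256(json.dumps(body, sort_keys=True).encode())
            if e["prev"] != prev or e["entry_hash"] != recomputed:
                return False
            prev = e["entry_hash"]
        return True

# Example: log two (hypothetical) hiring decisions and audit the log.
log = DecisionLog(model_commitment=sha256(b"model-weights"))
log.record(b"candidate A resume", b"advance to interview")
log.record(b"candidate B resume", b"reject")
print(log.verify())  # True
```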
The point of all of this is to strengthen trust and to make coordination among people, and between people and machines, more productive.
In part one of this series, we introduced the idea of the verifiable world — a future where data, content, and identity are accompanied by cryptographic proof, not just metadata and reputation.
In the next posts, we’ll explore how verifiability is being applied across other critical domains.
And eventually, we’ll return to the big picture: A Verifiable Internet — one built not on platforms and policy, but on a decentralized supply chain of proof-carrying data.