Public trust in AI is unraveling — and not without reason.
According to a recent Pew Research Center study, 66% of adults and 70% of AI experts are highly concerned about people getting inaccurate information from AI. More than half worry about biased decision-making. And 57% of the public fears AI could lead to less human connection, eroding trust not just in machines, but in each other.
This isn’t a crisis of hype. It’s a crisis of accountability.
From synthetic media to AI-powered hiring, we’re seeing powerful systems deployed into critical domains with no clear way to understand, audit, or verify what they’re doing. We’re being asked to trust AI systems we can’t inspect — and in many cases, not even their creators can fully explain their behavior.
The solution isn’t to slow down AI. It’s to make it verifiable.

From black boxes to proofs
Today’s AI models are increasingly capable but stubbornly opaque. We don’t know exactly what data they were trained on. We can’t always reproduce their outputs. And we have no cryptographic guarantees that the model we tested is the same one making real-world decisions.
For years, the answer to this problem has been interpretability — trying to peer inside the model’s “mind” and make sense of its decisions. But interpretability has limits. It’s hard to generalize, harder to scale, and nearly impossible to prove.
What we need is not just explainability, but integrity — systems that can prove what they’re doing, when they’re doing it.
What verifiable AI actually means
Verifiability in AI is the ability to cryptographically prove claims about a model’s behavior, origin, or outputs — without relying on trust in the developer, the infrastructure, or the process.
This includes:
- Verifiable inference: Proving that a specific model generated a specific output in a specific context
- Model attestation: Verifying the version, weights, and configuration of a deployed model
- Training provenance: Auditing what data was used to train a model, and under what conditions
- Secure execution: Running models inside tamper-resistant environments that generate attestable records
These systems don’t just assert correctness — they prove it, often using techniques like zero-knowledge proofs and hardware-backed attestations.
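To make the attestation idea concrete, here is a minimal sketch in Python. It assumes, hypothetically, that a provider publishes a SHA-256 digest of the exact weights file it deployed; the paths and digest are placeholders, and a real scheme would layer signatures and hardware-backed attestation on top of this hash comparison.

```python
import hashlib

def weights_digest(path: str) -> str:
    """Stream a model weights file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def attest_model(weights_path: str, published_digest: str) -> bool:
    """Check local weights against the digest the provider published out of band.

    In a fuller scheme the published digest would itself be signed and anchored
    somewhere public; this sketch shows only the core hash comparison.
    """
    return weights_digest(weights_path) == published_digest
```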

Why now: the verifiability stack is here
A few years ago, verifiable computation at scale was still theoretical. Today, the verifiability stack is beginning to materialize.
At Nexus, we’re contributing to that effort with:
- The Nexus zkVM, which enables provable computation using zero-knowledge proofs
- The Nexus Layer 1, built to create a secure proof supply chain
- The Nexus Network, which distributes trust across nodes and provides scalable verifiability
But this isn’t just about us. A growing ecosystem of researchers and builders is working to make verifiable AI a standard, not an exception:
- zkML frameworks are bringing zero-knowledge proofs to machine learning
- Attestation layers like C2PA are helping verify the origin of content and media
- Privacy-preserving audits are enabling onchain proof of model fairness and compliance
We’re moving toward a world where you won’t need to trust that a model did what it said — you’ll be able to verify it, cryptographically.
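The same verify-rather-than-trust pattern shows up at the content layer. The sketch below uses the `cryptography` package to check an Ed25519 signature over a piece of content; the key pair, content, and signature here are hypothetical stand-ins, and real provenance standards such as C2PA wrap this kind of check in a much richer, standardized manifest.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical publisher key pair; in practice only the public key is distributed.
publisher_key = Ed25519PrivateKey.generate()
publisher_public = publisher_key.public_key()

content = b"pixels, captions, or any other media bytes"
signature = publisher_key.sign(content)  # created when the content is published

def verify_origin(public_key: Ed25519PublicKey, data: bytes, sig: bytes) -> bool:
    """Return True if `sig` is a valid signature by `public_key` over `data`."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

assert verify_origin(publisher_public, content, signature)
assert not verify_origin(publisher_public, content + b" (tampered)", signature)
```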
The stakes: bias, safety, and public trust
Bias in AI is not new. From hiring algorithms to medical diagnostics, we’ve seen repeated evidence that models can reproduce — and even amplify — existing societal inequities. These are not edge cases; they’re warning signs.
And yet, many efforts to address bias have focused on process: hiring more diverse teams, refining training datasets, or implementing internal review boards. These are important steps — but they rely on institutional trust, not technical guarantees.
Verifiable AI offers a different approach. By embedding proofs into the systems themselves, we gain a new kind of accountability:
- If a company claims it uses a bias-audited model, that audit can be verifiably linked to the deployed model (see the sketch after this list)
- If a user is denied a loan by an AI system, the decision path can be independently validated
- If a hospital uses AI for cancer detection, patients can verify that their diagnosis was based on approved models and validated medical data
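As a sketch of the first claim above, an auditor could sign a record that binds the digest of its audit report to the digest of the exact model weights it audited; anyone holding the weights, the report, and the auditor’s public key can then re-check that binding. The names and data below are hypothetical, and a production deployment would also need a trustworthy way to distribute the auditor’s key.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def audit_binding(model_weights: bytes, audit_report: bytes) -> bytes:
    """Canonical record tying one audit report to one exact set of model weights."""
    record = {
        "model_sha256": hashlib.sha256(model_weights).hexdigest(),
        "audit_sha256": hashlib.sha256(audit_report).hexdigest(),
    }
    return json.dumps(record, sort_keys=True).encode()

# Hypothetical auditor key; in practice the public key would be published
# in a registry (or anchored on a ledger) so verifiers know whom they trust.
auditor_key = Ed25519PrivateKey.generate()
binding = audit_binding(b"<deployed model weights>", b"<bias audit report>")
signature = auditor_key.sign(binding)

# A regulator, customer, or third party re-derives the binding from the
# artifacts they were given and checks the auditor's signature over it.
try:
    auditor_key.public_key().verify(
        signature, audit_binding(b"<deployed model weights>", b"<bias audit report>")
    )
    print("audit report is bound to these exact model weights")
except InvalidSignature:
    print("these weights are not the ones that were audited")
```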
The point of all of this is to strengthen trust and to make coordination, both among people and between people and machines, more productive.
What’s next in the series
In part one of this series, we introduced the idea of the verifiable world — a future where data, content, and identity are accompanied by cryptographic proof, not just metadata and reputation.
In the next posts, we’ll explore how verifiability is being applied across other critical domains:
- Verifiable media: As deepfakes and synthetic content proliferate, how do we preserve truth online?
- Verifiable identity: In a world of bots and impersonation, how do we prove who we are — without giving up our privacy?
And eventually, we’ll return to the big picture: a Verifiable Internet, one built not on platforms and policy, but on a decentralized supply chain of proof-carrying data.