The people working at Nexus come from a wide variety of backgrounds, and each of them brings a different skill set that will help us build the Layer 1 for the AI Era. We’re kicking off an interview series to introduce these Nexians to our community, readers, and others interested in working on similar projects.
Tanner Duve is an engineer working on the Nexus zero-knowledge virtual machine (zkVM). He was born and raised in Los Angeles, and graduated from the University of Pennsylvania in May 2025 with a master's degree in computer science and a bachelor's degree in mathematical logic.
In school, I studied logic and computer science, and in logic we are interested in rigorously understanding the concepts of truth and proof. This foundation informed my approach to learning computer science; I learned how computer programs and programming languages can be viewed as mathematical objects and reasoned about using logic, and I became interested in the theory of types and programming languages.
I also began learning applied tools for automated reasoning: proof assistants, automated theorem provers, satisfiability solvers, and so on, and how these tools are used in industry. I was already fascinated by the theory; seeing how it can be used in the “real world” convinced me that I wanted to work in this field, and, importantly, that truth and proof should be fundamental tenets of the software of the future.
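To make that concrete, here is a minimal sketch of what working in a proof assistant looks like, written in Lean 4. The function and theorem below are toy definitions invented for illustration (they are not from my coursework or from Nexus): an ordinary program is treated as a mathematical object, and a property of it is proved once and for all by structural induction.

```lean
-- Toy example for illustration only: a program and a proof about it in Lean 4.

-- An ordinary program: append two lists.
def myAppend {α : Type} : List α → List α → List α
  | [],      ys => ys
  | x :: xs, ys => x :: myAppend xs ys

-- The same artifact viewed as a mathematical object: a machine-checked proof
-- that appending preserves total length, by induction on the first list.
theorem myAppend_length {α : Type} (xs ys : List α) :
    (myAppend xs ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp [myAppend]
  | cons x xs ih =>
    simp only [myAppend, List.length_cons, ih]
    omega
```

Once the theorem is checked, it holds for every possible input; there is no test suite to maintain and no edge case left uncovered.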
As AI widens its scope and gets deployed in critical systems like healthcare, finance, infrastructure, and security, unreliable and incorrect output can be disastrous to individuals and society. Provable correctness means providing formal and cryptographic guarantees for things like safety, privacy, and fairness. Beyond avoiding catastrophe, it builds trust: proofs themselves are inspectable artifacts that allow users and developers to trust that the computations they execute and interact with are safe and secure.
I’m most excited by recent developments combining AI with formal verification and automated reasoning. On one hand, AI can be used to aid in formal verification of mathematics and software; on the other, formal verification can be used to reason about and verify AI systems themselves.
There’s a growing movement within the Lean community to leverage AI for mathematics and software verification. Earlier this year, I attended the “Autoformalization for the Working Mathematician” conference at Brown University, which brought together top researchers across pure math, formal verification, and machine learning, along with members of the AI industry working to bridge AI with formal methods.
There’s some cool work happening in this space; teams are approaching the LLM reasoning problem by training models directly on Lean data, aiming to create AI systems that can automatically generate formally verified mathematics and software.
At Nexus, we recently launched the Verifiable AI Lab, which explores both directions of this relationship. We’re focused on verification for AI: ensuring the correctness, safety, and fairness of AI systems using cryptographic proofs.
Conversely, we’re exploring AI for verification: intelligent systems using formal verification to reason about smart contracts, software correctness, cryptographic protocols, and even other AI models.
My own work at Nexus is on our zero-knowledge virtual machine (zkVM), written in Rust and designed to provide cryptographic proofs of correct program execution. There is growing momentum in the ZK space around formalizing and verifying cryptographic protocols, interactive proofs, and ZK circuits.
At Nexus, we’re aiming to apply formal methods to our zkVM, using tools like Lean for certified compilation, verified circuit representations, and formally specified zkVM instruction semantics. This is the work I’m most excited about, and it lays the foundation for Nexus’ goal of scalable, verifiable computation.
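To give a flavor of what “formally specified instruction semantics” means, here is a toy sketch in Lean 4. The machine state, instruction set, and property below are invented for this illustration; they are not the Nexus zkVM’s actual instruction set or its Rust implementation.

```lean
-- Toy sketch only, not the Nexus zkVM: the general shape of a formal
-- instruction semantics, as a state type, an instruction type, and a step function.

-- Machine state: a program counter and two registers.
structure ToyState where
  pc : Nat
  r0 : Nat
  r1 : Nat

-- A tiny, made-up instruction set.
inductive ToyInstr where
  | addR0 (k : Nat)   -- r0 := r0 + k
  | movR1R0           -- r1 := r0
  | halt

-- The semantics of one instruction, written as a total function on states.
def step (i : ToyInstr) (s : ToyState) : ToyState :=
  match i with
  | .addR0 k => { s with pc := s.pc + 1, r0 := s.r0 + k }
  | .movR1R0 => { s with pc := s.pc + 1, r1 := s.r0 }
  | .halt    => s

-- Because the semantics is a mathematical definition, properties of execution
-- become theorems: no single step ever decreases the program counter.
theorem step_pc_monotone (i : ToyInstr) (s : ToyState) :
    s.pc ≤ (step i s).pc := by
  cases i <;> simp [step] <;> omega
```

A real machine has a far richer state and instruction set, but the principle is the same: once the semantics is written down formally, claims about execution become theorems rather than assumptions.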
Historically, the main barrier to formal methods has been complexity. The certainty you get from formal verification comes at a cost: it is difficult. Formal proof requires specialized knowledge that many have considered too high a barrier to entry, and the existing tools have not caught on outside of the areas in which they were developed.
The good news is that so much of the recent work in this space has been towards increased accessibility. The huge success we are currently seeing with Lean is in part due to how much work the community and developers have put into making it widely usable.
Additionally, recent developments in automation via SMT solvers, along with AI-assisted proof generation, are making formal methods more accessible and scalable than ever before. It is an exciting time to be in this space, and I think we will start seeing these tools become more standard.
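To give a sense of what that automation looks like in practice, here are a few small Lean 4 goals, each discharged in one line by a built-in decision procedure or the simplifier. The statements are toy examples chosen for illustration and don’t depend on any particular SMT integration.

```lean
-- Toy examples of push-button automation in Lean 4.

-- Linear arithmetic, solved by the `omega` decision procedure.
example (a b : Nat) (h : a ≤ b) : a + 3 ≤ b + 3 := by omega

-- A decidable proposition, checked by computation.
example : 2 ^ 10 = 1024 := by decide

-- Routine rewriting, handled by the simplifier and its lemma library.
example (xs : List Nat) : (xs ++ []).length = xs.length := by simp
```

None of these require writing a proof term by hand; the tactics search for or compute the proof automatically.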
I think a lot of people see formal methods as some abstract area of academia that is far removed from the “real world” of software engineering. But proofs themselves can be thought of as valuable digital assets.
They’re machine-checkable evidence that systems behave as intended, and with zero-knowledge proofs we get similar assurances without exposing private data. Machine learning has proven powerful at making predictions and generalizations, and as models and agents are deployed across a wider range of domains, coupling these large inductive inference machines with formal deductive proofs will provide the certainty and trust that the coming era of computing absolutely requires.
The Nexus Verifiable AI Lab is dedicated to exploring the frontier of verifiability, economics, and artificial intelligence in order to expand the boundaries of human and machine cooperation.