Why decentralization, cryptography, and zero trust are foundational to trustworthy AI.
At a recent CryptoMondays event hosted at the House of Web3 in San Francisco, a panel of leaders from the decentralized infrastructure space tackled one of the defining challenges of our time: how to build trust in AI systems.
The conversation — moderated by Rodrigo Coelho, CEO of Edge & Node — centered on the convergence of Web3, cryptography, and confidential computing to move from opaque, black-box AI toward verifiable, accountable systems.
The discussion featured Alex Fowler of Nexus, Dylan Kawalec of Phala Network, and Adam Leon of Vara Network.
Public anxiety around AI is high — and growing. From misinformation and hallucinations to privacy breaches and biased outputs, people are losing faith in the systems increasingly shaping their lives.
“We’re relying on systems we don’t understand — and that’s dangerous. The lack of transparency in AI has real-world impacts in finance, healthcare, and beyond.”
—Alex Fowler, Nexus
But as the panelists emphasized, trust isn’t just about technical performance. It’s about provenance, process, and proof — knowing where the data came from, how it was used, and being able to verify that outcomes weren’t manipulated.
Traditional trust models in AI depend on opaque vendors and unverifiable claims. That’s no longer acceptable. New models of trustless verifiability are emerging — from zero-knowledge proofs to confidential compute environments to remote attestation protocols.
“Trust doesn’t come from belief—it comes from measurement. If you can’t measure it, you can’t verify it.”
—Dylan Kawalec, Phala Network
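To make that "measure, then verify" pattern concrete, here is a minimal sketch using plain hash commitments in Python. The weights and digests are placeholders invented for illustration; production systems would reach for zero-knowledge proofs or attested hardware rather than a bare hash.

```python
# Minimal sketch: commit-and-verify with plain hashes.
# A real deployment would use zero-knowledge proofs or attested hardware;
# this only illustrates the "measure, then verify" pattern.
import hashlib
import json

def commitment(artifact: bytes) -> str:
    """Publish this digest before serving the model or dataset."""
    return hashlib.sha256(artifact).hexdigest()

def verify(artifact: bytes, published_digest: str) -> bool:
    """Anyone can recompute the digest and compare it to the published one."""
    return hashlib.sha256(artifact).hexdigest() == published_digest

if __name__ == "__main__":
    # Hypothetical model weights, stood in for by a JSON blob.
    weights = json.dumps({"layer1": [0.12, -0.7], "layer2": [1.4]}).encode()

    digest = commitment(weights)   # provider publishes this digest
    print("published:", digest)

    print("untampered:", verify(weights, digest))         # True
    print("tampered:  ", verify(weights + b"x", digest))  # False
```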
Kawalec argued that the industry must decentralize not just compute, but also control. That means eliminating hidden dependencies on cloud providers, unlocking user-run infrastructure, and using cryptography to prove the behavior of AI systems down to the hardware level.
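The verifier's side of that picture can be sketched loosely: a client refuses to hand data to an AI workload unless the reported hardware and software measurements match values it has pinned in advance. The measurement names and digests below are hypothetical, and real attestation protocols (TPM quotes, Intel TDX, AMD SEV-SNP) also check a signature from the hardware vendor, which this sketch omits.

```python
# Loose sketch of the verifier side of remote attestation.
# Real protocols return measurements signed by the hardware vendor;
# the signature check is omitted here and the measurement names are
# hypothetical placeholders.
EXPECTED_MEASUREMENTS = {
    "firmware":   "a3f1...0d9e",   # digests a verifier would pin ahead of time
    "os_image":   "77c2...41bb",
    "model_hash": "5e0d...9a21",
}

def attestation_ok(report: dict) -> bool:
    """Accept the workload only if every pinned measurement matches."""
    return all(report.get(name) == digest
               for name, digest in EXPECTED_MEASUREMENTS.items())

# Example reports (invented values).
good = dict(EXPECTED_MEASUREMENTS)
bad = {**EXPECTED_MEASUREMENTS, "os_image": "deadbeef"}

print(attestation_ok(good))  # True  -> run the AI workload
print(attestation_ok(bad))   # False -> refuse to send data
```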
While the mainstream AI industry still favors centralized infrastructure, the Web3 ecosystem is positioning itself as the trust layer AI desperately needs. Blockchains offer immutable provenance, decentralized consensus, and incentive structures that can align autonomous systems.
“AI is reckless. Blockchain is slow and steady. Together, they form a complete nervous system—intelligence plus accountability.”
—Adam Leon, Vara Network
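What "immutable provenance" means in practice can be shown with a toy hash chain: each record commits to the one before it, so rewriting any earlier step breaks every later link. The events and digests are invented for illustration; this is the general pattern, not any particular chain's data structure.

```python
# Toy hash chain illustrating immutable provenance: each record commits
# to the previous one, so tampering with history invalidates the chain.
import hashlib
import json

def link(prev_hash: str, record: dict) -> str:
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis value
for record in [
    {"event": "dataset_registered", "digest": "abc123"},
    {"event": "model_trained", "dataset": "abc123", "weights": "def456"},
    {"event": "inference_served", "weights": "def456"},
]:
    prev = link(prev, record)
    chain.append({"record": record, "hash": prev})

def valid(chain) -> bool:
    """Recompute every link and compare against the stored hashes."""
    prev = "0" * 64
    for entry in chain:
        prev = link(prev, entry["record"])
        if prev != entry["hash"]:
            return False
    return True

print(valid(chain))                           # True
chain[0]["record"]["digest"] = "tampered"
print(valid(chain))                           # False
```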
Leon envisions a future where AI agents operate on-chain, governed by transparent rules and economic incentives. This isn’t theory — many of the necessary components, from gasless transaction frameworks to agent-based staking mechanisms, are already in development.
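As a rough illustration of how economic incentives could govern such agents, consider a generic stake-and-slash registry: an agent posts a bond before accepting work, and an output that fails verification burns part of that bond. The sketch below is hypothetical and does not describe Vara Network's actual design or API.

```python
# Hypothetical stake-and-slash registry for autonomous agents.
# Not Vara Network's implementation; a generic illustration of the incentive.
from dataclasses import dataclass

@dataclass
class Agent:
    address: str
    stake: float

class StakingRegistry:
    MIN_STAKE = 100.0  # assumed minimum bond required to accept tasks

    def __init__(self):
        self.agents: dict[str, Agent] = {}

    def register(self, address: str, stake: float) -> None:
        if stake < self.MIN_STAKE:
            raise ValueError("bond too small to register")
        self.agents[address] = Agent(address, stake)

    def slash(self, address: str, fraction: float) -> float:
        """Burn a fraction of the bond when an agent's output fails verification."""
        agent = self.agents[address]
        penalty = agent.stake * fraction
        agent.stake -= penalty
        return penalty

registry = StakingRegistry()
registry.register("agent-0x01", stake=500.0)
print(registry.slash("agent-0x01", fraction=0.2))   # 100.0 burned
print(registry.agents["agent-0x01"].stake)          # 400.0 remaining
```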
Despite the urgency, regulatory bodies and corporate leaders are largely avoiding the hard work of building ethical, verifiable systems. Some insiders even believe AI will “solve” the trust problem itself — an idea that drew skepticism (and concern) from the panel.
“There’s a vacuum in leadership. We’re building systems that are powerful, but we’re not building guardrails — and the industry knows it.”
—Alex Fowler, Nexus
Learn more about leadership in AI in this Exponential episode.
The panelists agreed: without stronger frameworks, users will continue to be data-mined, manipulated, and misled. What’s needed is a shift in mindset — from “trust me, bro” security to verifiable systems users can inspect and audit themselves.
The AI industry is at a fork. One path leads to centralized systems, driven by data extraction and corporate secrecy. The other points toward a more verifiable, decentralized, and user-empowering future.
The takeaway from this panel is clear: trust isn’t a given — it’s a system. And the tools to build that system are already in motion.