UC Researchers Expose Critical Vulnerability: AI Routers Actively Stealing Crypto Assets
University of California researchers just dropped a warning that hits different in crypto security: third-party AI large language model (LLM) routers are actively compromising the supply chain, and they're stealing assets in real time. A paper published Thursday by the research team identified fou

University of California researchers just dropped a warning that hits different in crypto security: third-party AI large language model (LLM) routers are actively compromising the supply chain, and they're stealing assets in real time.
A paper published Thursday by the research team identified four distinct attack vectors targeting LLM intermediaries. The findings are damning. "26 LLM routers are secretly injecting malicious tool calls and stealing creds," co-author Chaofan Shou announced on X. This isn't theoretical—the researchers tested it and watched it happen.
How the Attack Works
Here's the problem: LLM agents increasingly route requests through third-party API intermediaries that aggregate access to providers like OpenAI, Anthropic, and Google. These routers terminate Internet TLS (Transport Layer Security) connections, giving them full plaintext access to every message passing through. For developers using AI coding agents such as Claude Code to work on smart contracts or crypto wallets, this creates a nightmare scenario—private keys, seed phrases, and sensitive data flow directly through unvetted router infrastructure.
The researchers tested 28 paid routers alongside 400 free routers sourced from public communities. What they discovered should concern every crypto trader and developer:
- •9 routers actively injected malicious code
- •2 routers deployed adaptive evasion triggers
- •17 routers accessed researcher-owned AWS credentials
- •1 router drained Ether directly from a private key
The Ethereum Proof-of-Concept
The team prefunded a decoy Ethereum wallet with nominal balances to test the attack surface. While the total value lost in their experiment stayed below $50, the fact that they successfully extracted ETH from a researcher-owned private key proves the vulnerability isn't a hypothetical—it's actionable.
The Invisible Threat
Alpha Take
This research reveals a structural weakness in the AI-crypto stack that most traders and developers aren't even aware of. If you're using AI coding agents for smart contract development or wallet management, treat every router as potentially hostile until proven otherwise. The crypto trading community needs to demand cryptographic verification standards before using these tools for anything involving sensitive keys or assets.
Originally reported by
CoinTelegraph
Not financial advice. Crypto investing involves significant risk. Past performance does not guarantee future results. Always do your own research.