— AI · Web3

Veritas — a small idea

A sketch of a protocol that verifies a model's outputs on-chain without revealing the model itself. Early notes, many open questions.

Year
2026
Role
Exploration
Stack
Rust · zk · Notes
Status
Exploring

The question

If someone claims their model produced a particular output, how can they prove it without giving away the model? That's the shape of the problem I've been sketching against.

Where I am

Reading about zk-SNARKs, commitment schemes, and existing work on verifiable ML inference. It's mostly at the notes-and-diagrams stage. I'm not convinced any of this lands — I'm exploring it because the question is interesting.
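The commitment-scheme piece is the easiest part to make concrete. A minimal sketch of the commit-reveal shape, under toy assumptions: the "model" is a hypothetical linear function, and Rust's standard (non-cryptographic) hasher stands in for something like SHA-256. A real protocol would replace the full reveal with a zk proof — that's the hard, open part.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy "model": a linear function y = w*x + b. Purely illustrative.
#[derive(Hash)]
struct ToyModel {
    w: i64,
    b: i64,
}

impl ToyModel {
    fn infer(&self, x: i64) -> i64 {
        self.w * x + self.b
    }
}

// Commit to the model up front: hash(model || salt).
// DefaultHasher is a stand-in; a real scheme needs a
// cryptographic hash (e.g. SHA-256) for binding/hiding.
fn commit(model: &ToyModel, salt: u64) -> u64 {
    let mut h = DefaultHasher::new();
    model.hash(&mut h);
    salt.hash(&mut h);
    h.finish()
}

// Verify a claimed output by checking the revealed model against
// the earlier commitment, then re-running inference. In a zk
// version, this reveal-and-rerun step becomes proof verification.
fn verify(commitment: u64, model: &ToyModel, salt: u64, x: i64, claimed_y: i64) -> bool {
    commit(model, salt) == commitment && model.infer(x) == claimed_y
}

fn main() {
    let model = ToyModel { w: 3, b: 1 };
    let salt = 0xdead_beef;
    let c = commit(&model, salt);

    // Honest claim passes; a tampered output fails.
    assert!(verify(c, &model, salt, 10, 31));
    assert!(!verify(c, &model, salt, 10, 32));
    println!("commit-reveal checks passed");
}
```

The point of the sketch is just the ordering: the commitment is published before any queries, so the prover can't swap models after the fact. Everything interesting — hiding the weights while still proving the inference — lives in the zk layer this deliberately leaves out.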

What's next

A minimal proof-of-concept for a toy model, just to see how hard the rough edges actually are. No promises.