A sketch of a protocol that verifies a model's outputs on-chain without revealing the model itself. Early notes, many open questions.
If someone claims their model produced a particular output, how can they prove it without giving away the model's weights? That's the shape of the problem I've been sketching around.
Reading about zk-SNARKs, commitment schemes, and existing work on verifiable ML inference. It's mostly at a notes-and-diagrams stage. I'm not convinced any of this lands — I'm exploring it because the question is interesting.
A minimal proof-of-concept for a toy model, just to see how hard the rough edges actually are. No promises.
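As a starting point, the non-zero-knowledge baseline can be sketched in a few lines: commit to the weights with a hash, publish a claimed (input, output) pair, and let a verifier check both after a full reveal. The toy linear model, the `commit`/`verify` names, and the JSON serialization are all my own placeholder assumptions, not part of any real protocol. Crucially, this only verifies *after* revealing the weights — replacing that reveal with a zk proof is exactly the open question.

```python
import hashlib
import json
import secrets

# Hypothetical toy "model": a fixed linear map, standing in for
# whatever the real proof-of-concept model would be.
WEIGHTS = [3, -1, 2]

def infer(weights, x):
    # Toy inference: dot product of weights and input.
    return sum(w * xi for w, xi in zip(weights, x))

def commit(weights, nonce):
    # Hash commitment over the weights plus a random nonce, so the
    # weights stay hidden until the prover opens the commitment.
    payload = json.dumps({"weights": weights, "nonce": nonce}).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(commitment, weights, nonce, x, claimed_output):
    # Verifier: re-derive the hash and re-run inference. Note this
    # requires the full weights -- no zero knowledge yet.
    return (commit(weights, nonce) == commitment
            and infer(weights, x) == claimed_output)

# Prover side: publish commitment plus a claimed (input, output) pair.
nonce = secrets.token_hex(16)
c = commit(WEIGHTS, nonce)
x = [1, 2, 3]
claimed = infer(WEIGHTS, x)
```

Even this baseline surfaces one of the rough edges: the commitment binds to a specific serialization of the weights, so any real version needs a canonical encoding before hashing.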