Proposer: @wenjing
Verifiable Model Cards for AI Models and Apps
Model cards are a common practice for model developers, especially for large foundation models, as a way to state basic facts about a model. But these model cards are only disclosures that companies make about a snapshot of a model, even if we take their integrity on full faith.
RAG, fine-tuning, and other customization methods are popular today for adapting LLMs and diffusion models to enterprise or custom applications.
AI applications (e.g. what OpenAI calls 'GPTs') are becoming ubiquitous, and they are relatively easy to build for companies that hold domain-specific proprietary data. As more and more such 'AI apps' proliferate, it becomes very difficult to tell how they were made and whether one can trust them. The situation is not unlike what happened with open source software: eventually, many such apps will not be trustworthy.
TSP, together with a scheme similar to C2PA (though this probably needs further study and consideration), could be applied to produce provenance (supply-chain security) and cryptographically verifiable AI model cards for such models and apps.
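As a rough illustration of what a "cryptographically verifiable model card" could mean in practice, the sketch below binds a model-card manifest to a hash of the model snapshot and signs it with the issuer's key. This is a minimal sketch only, not the TSP or C2PA wire format; the field names, the `did:example` issuer identifier, and the choice of Ed25519 are assumptions made for illustration.

```python
# Minimal sketch: hash a model artifact, embed the digest in a model-card
# manifest, sign the manifest, and verify it later. NOT the TSP/C2PA format;
# field names and the use of Ed25519 are illustrative assumptions only.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519


def model_digest(weights: bytes) -> str:
    """SHA-256 digest of the model artifact (the snapshot being attested)."""
    return hashlib.sha256(weights).hexdigest()


def sign_model_card(card: dict, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign a canonical (sorted-key) JSON encoding of the model card."""
    payload = json.dumps(card, sort_keys=True).encode()
    return key.sign(payload)


def verify_model_card(card: dict, signature: bytes,
                      pub: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the signature matches this exact model card."""
    payload = json.dumps(card, sort_keys=True).encode()
    try:
        pub.verify(signature, payload)
        return True
    except Exception:
        return False


if __name__ == "__main__":
    issuer_key = ed25519.Ed25519PrivateKey.generate()
    weights = b"\x00" * 1024  # stand-in for the real model weights file

    card = {
        "name": "acme-support-assistant",        # hypothetical AI app
        "base_model": "example-llm-70b",         # hypothetical base model
        "customization": ["RAG", "fine-tuning"],
        "data_sources": ["acme-internal-kb"],    # hypothetical data source
        "model_sha256": model_digest(weights),
        "issuer": "did:example:acme",            # illustrative issuer identifier
    }

    sig = sign_model_card(card, issuer_key)
    print(verify_model_card(card, sig, issuer_key.public_key()))  # True
    card["base_model"] = "something-else"
    print(verify_model_card(card, sig, issuer_key.public_key()))  # False
```

In a fuller design, the signed manifest and the issuer's identity would presumably be carried and resolved over TSP, so that any downstream consumer of a model or AI app can check the card against the exact snapshot it claims to describe.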