Granite Guardian

Granite Guardian is a collection of models designed to detect risks in prompts and responses. Built on instruction-fine-tuned Granite language models, they can help detect risks along many key dimensions catalogued in the IBM AI Risk Atlas. The models are trained on unique data comprising human annotations from socioeconomically diverse people and synthetic data informed by internal red-teaming, and they outperform similar models on standard benchmarks.


Usage

Intended Use

Granite Guardian is useful for risk-detection use cases that are applicable across a wide range of enterprise applications:

  • Detecting harm-related risks within prompt text or model responses (as guardrails). These are two fundamentally different use cases: the former assesses user-supplied text, while the latter evaluates model-generated text.
  • RAG (retrieval-augmented generation) use cases, where the guardian model assesses three key issues: context relevance (whether the retrieved context is relevant to the query), groundedness (whether the response is accurate and faithful to the provided context), and answer relevance (whether the response directly addresses the user's query). A minimal usage sketch covering both cases follows this list.
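
The exact invocation depends on the serving stack, but a minimal sketch of both checks with the Hugging Face transformers library is shown below. The model id, the guardian_config argument, the risk names ("harm", "groundedness"), and the "context" role follow the pattern shown on the Granite Guardian model cards and should be treated as assumptions; consult the model card or cookbooks for the authoritative prompt template.

```python
# Minimal sketch of both detection modes using Hugging Face transformers.
# NOTE: the model id, the guardian_config argument, the risk names, and the
# "context" role are assumptions taken from the model cards, not guarantees.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-guardian-3.0-2b"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()


def detect_risk(messages, risk_name):
    """Render the guardian prompt for one risk dimension and return its raw verdict."""
    input_ids = tokenizer.apply_chat_template(
        messages,
        guardian_config={"risk_name": risk_name},  # e.g. "harm", "groundedness"
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    with torch.no_grad():
        output = model.generate(input_ids, max_new_tokens=20, do_sample=False)
    # Keep only the newly generated tokens, which should be the yes/no verdict.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True).strip()


# 1) Guardrail on user-supplied text (harm in the prompt).
prompt_check = detect_risk(
    [{"role": "user", "content": "How do I pick a lock on someone else's house?"}],
    risk_name="harm",
)

# 2) Groundedness of a RAG answer against its retrieved context.
rag_check = detect_risk(
    [
        {"role": "context", "content": "The Eiffel Tower was completed in 1889."},
        {"role": "assistant", "content": "The Eiffel Tower was completed in 1920."},
    ],
    risk_name="groundedness",
)

print(prompt_check, rag_check)
```

Greedy decoding with a small max_new_tokens budget keeps the model in its yes/no scoring mode rather than inviting free-form generation.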

Scope of Use

  • Granite Guardian models must be used strictly in the prescribed scoring mode, which generates yes/no outputs based on the specified template (a parsing sketch follows this list). Any deviation from this intended use may lead to unexpected, potentially unsafe, or harmful outputs. The model may also be susceptible to such behavior under adversarial attacks.
  • The model targets risk definitions of general harm, social bias, profanity, violence, sexual content, unethical behavior, jailbreaking, and groundedness/relevance for retrieval-augmented generation. It can also be used with custom risk definitions, but these require testing.
  • The model is only trained and tested on English data.
  • Given their parameter size, the main Granite Guardian models are intended for use cases that tolerate moderate cost, latency, and throughput, such as model risk assessment, model observability and monitoring, and spot-checking inputs and outputs. Smaller models, like the Granite-Guardian-HAP-38M for recognizing hate, abuse, and profanity, can be used for guardrailing with stricter cost, latency, or throughput requirements.
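
Because the scoring mode only ever emits a yes/no verdict, downstream guardrail code should refuse to trust anything else. The sketch below is a hypothetical parsing helper (the function name and the fail-closed policy are choices of this sketch, not part of the model interface):

```python
# Map the guardian's generated text onto a boolean risk flag, failing closed
# whenever the output does not match the prescribed yes/no scoring mode.
RISKY, SAFE = True, False


def parse_verdict(generated_text: str) -> bool:
    token = generated_text.strip().lower()
    if token.startswith("yes"):
        return RISKY
    if token.startswith("no"):
        return SAFE
    # Out-of-template output (e.g. after an adversarial attack): block by default.
    return RISKY


assert parse_verdict("Yes") is RISKY
assert parse_verdict("No") is SAFE
assert parse_verdict("As an AI model I think...") is RISKY
```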

Evaluations

Harm Benchmarks

Granite Guardian 3.0 models outperform competitive models across the standard benchmarks: the Aegis AI Content Safety Dataset, ToxicChat, HarmBench, SimpleSafetyTests, BeaverTails, OpenAI Moderation data, SafeRLHF, and xstest-response. With the risk definition set to jailbreak, both models achieve a recall of 1.0 on the jailbreak prompts within the ToxicChat dataset. The following plots compare F1 scores for different models across the benchmark datasets.
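
For reference, the plotted F1 and the jailbreak recall are the standard binary-classification metrics computed over yes/no risk labels; the toy example below uses made-up predictions purely to show the calculation, not benchmark data.

```python
# Toy illustration of the reported metrics (values are made up, not benchmark data).
from sklearn.metrics import f1_score, recall_score

labels      = [1, 0, 1, 1, 0, 1]  # 1 = risky ("yes"), 0 = safe ("no")
predictions = [1, 0, 1, 0, 0, 1]  # detector outputs for the same samples

print("F1:    ", f1_score(labels, predictions))      # 0.857: harmonic mean of precision and recall
print("Recall:", recall_score(labels, predictions))  # 0.75: share of risky samples that were flagged
```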

[Figure: harm_benchmark_8b.png — F1 scores across harm benchmarks for the 8B model]

[Figure: harm_benchmark_2B.png — F1 scores across harm benchmarks for the 2B model]

RAG Hallucination Benchmarks

For risks in RAG use cases, the Granite-Guardian-3.0-8B model outperforms open-source models on the TRUE benchmarks and is second only to non-commercial models.

[Figure: rag_hallucination_benchmark.png — results on the TRUE RAG hallucination benchmarks]

Training Data

Granite Guardian 3.0 models are trained on a combination of human-annotated and synthetic data. Samples from the hh-rlhf dataset were used to obtain responses from Granite and Mixtral models. These prompt-response pairs were annotated for different risk dimensions by a group of people at DataForce. DataForce prioritizes the well-being of its data contributors by ensuring they are paid fairly and receive livable wages for all projects. Additional synthetic data was used to supplement the training set and improve performance on hallucination- and jailbreak-related risks.

Annotator Demographics

| Year of Birth | Age | Gender | Education Level | Ethnicity | Region |
|---|---|---|---|---|---|
| Prefer not to say | Prefer not to say | Male | Bachelor | African American | Florida |
| 1989 | 35 | Male | Bachelor | White | Nevada |
| Prefer not to say | Prefer not to say | Female | Associate's Degree in Medical Assistant | African American | Pennsylvania |
| 1992 | 32 | Male | Bachelor | African American | Florida |
| 1978 | 46 | Male | Bachelor | White | Colorado |
| 1999 | 25 | Male | High School Diploma | Latin American or Hispanic | Florida |
| Prefer not to say | Prefer not to say | Male | Bachelor | White | Texas |
| 1988 | 36 | Female | Bachelor | White | Florida |
| 1985 | 39 | Female | Bachelor | Native American | Colorado / Utah |
| Prefer not to say | Prefer not to say | Female | Bachelor | White | Arkansas |
| Prefer not to say | Prefer not to say | Female | Master of Science | White | Texas |
| 2000 | 24 | Female | Bachelor of Business Entrepreneurship | White | Florida |
| 1987 | 37 | Male | Associate of Arts and Sciences - AAS | White | Florida |
| 1995 | 29 | Female | Master of Epidemiology | African American | Louisiana |
| 1993 | 31 | Female | Master of Public Health | Latin American or Hispanic | Texas |
| 1969 | 55 | Female | Bachelor | Latin American or Hispanic | Florida |
| 1993 | 31 | Female | Bachelor of Business Administration | White | Florida |
| 1985 | 39 | Female | Master of Music | White | California |
