Attention: This project has moved to https://github.com/precize/OWASP-Agentic-AI. Please check that GitHub repository for updates; this repository is deprecated.
This project documents the top 10 security risks specifically related to AI Agents, representing a comprehensive analysis of vulnerabilities unique to autonomous AI systems. The document provides detailed descriptions, examples, and mitigation strategies for each risk, helping organizations secure their AI agent deployments effectively.
As AI agents become increasingly prevalent, driven by advances in GenAI models, understanding and mitigating their security risks becomes crucial. This guide aims to:
- Identify and explain the most critical security risks in AI agent systems
- Provide practical mitigation strategies for each identified risk
- Help organizations implement secure AI agent architectures
- Promote best practices in AI agent security
The documentation is organized around the top ten security risks, each covering a specific risk category (an illustrative sketch of one mitigation follows the list):
- Agent Authorization and Control Hijacking
- Agent Critical Systems Interaction
- Agent Goal and Instruction Manipulation
- Agent Hallucination Exploitation
- Agent Impact Chain and Blast Radius
- Agent Memory and Context Manipulation
- Agent Orchestration and Multi-Agent Exploitation
- Agent Resource and Service Exhaustion
- Agent Supply Chain and Dependency Attacks
- Agent Knowledge Base Poisoning
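As an illustration of the first category, Agent Authorization and Control Hijacking, the sketch below shows one common mitigation pattern: every tool call an agent attempts is checked against an explicit, least-privilege allowlist and scope before execution. This is a minimal Python sketch for illustration only; the policy class, tool names, and scope limits are hypothetical and are not taken from the OWASP document.

```python
# Minimal, illustrative sketch of one mitigation for "Agent Authorization and
# Control Hijacking": gate every tool call through an explicit per-agent
# allowlist and scope check. All names here (AgentPolicy, execute_tool, the
# example tools) are hypothetical, not part of the OWASP document.

from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Explicit, least-privilege policy for a single agent."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)
    max_amount: float = 0.0  # example scope limit for a payment-style tool


def authorize(policy: AgentPolicy, tool: str, args: dict) -> bool:
    """Return True only if the requested tool call is within policy."""
    if tool not in policy.allowed_tools:
        return False
    if tool == "issue_refund" and args.get("amount", 0.0) > policy.max_amount:
        return False
    return True


def execute_tool(policy: AgentPolicy, tool: str, args: dict) -> str:
    """Gate every tool invocation through the authorization check."""
    if not authorize(policy, tool, args):
        # Deny by default: refuse rather than letting a hijacked or
        # prompt-injected agent escalate its own privileges.
        return f"DENIED: {policy.agent_id} may not call {tool} with {args}"
    return f"EXECUTED: {tool}({args})"


if __name__ == "__main__":
    support_agent = AgentPolicy(
        agent_id="support-agent",
        allowed_tools={"lookup_order", "issue_refund"},
        max_amount=50.0,
    )
    print(execute_tool(support_agent, "lookup_order", {"order_id": "A123"}))
    print(execute_tool(support_agent, "issue_refund", {"amount": 500.0}))  # over scope
    print(execute_tool(support_agent, "delete_database", {}))              # not allowed
```

Deny-by-default checks of this kind also limit the blast radius when an agent's goals or instructions are manipulated, which is covered in the corresponding risk entries.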
- Vishwas Manral: Initial document framework and early contributions
- Ken Huang, CISSP: Overall editing and conversion of initial document to OWASP format
- Akram Sheriff: Orchestration Loop, Planner Agentic security, Multi-modal agentic security
- Aruneesh Salhotra: Technical review and content organization
- Anton Chuvakin: DoS and overfitting sections
- Akram Sheriff: Planner security, Orchestration Loop, Multi-modal agentic security, Confused Deputy
- Aradhna Chetal: Agent Supply Chain
- Ken Huang, CISSP: Document structure and OWASP standardization
- Raj B.: Agentic Overfitting, Model extraction
- Govindaraj Palanisamy: Alignment of sections to OWASP TOP 10 Mapping, Threat Mapping
- Mateo Rojas-Carulla: Data poisoning at scale from untrusted sources, Overreliance and lack of oversight
- Matthias Kraft: Data poisoning at scale from untrusted sources, Overreliance and lack of oversight
- Royce Lu: Stealth Propagation Agent Threats, Agent Memory Exploitation
- Anatoly Chikanov: Technical contributions
- Alex Shulman-Peleg, Ph.D.: Security analysis
- Sahana S
- John Sotiropoulos
- Sriram Gopalan
- Parthasarathi Chakraborty
- Ron F. Del Rosario
- Vladislav Shapiro
- Vivek S. Menon
- Shobhit Mehta
- Jon Frampton
- Moushmi Banerjee
- Sid Dadana
- Michael Machado
- Alok Talgaonkar
- Sunil Arora: Technical input
- S M Zia Ur Rashid: Content contributions
This project has been made possible through the support and contributions of professionals from leading organizations including:
- Jacobs
- Cisco Systems
- GSK
- Palo Alto Networks
- Precize
- Lakera
- EY
- DistributedApps.ai
- Humana
- GlobalPayments
- TIAA
This project is part of OWASP and follows OWASP's licensing terms.
We welcome contributions from the security community. Please see our contribution guidelines for more information on how to participate in this project.
For questions, suggestions, or concerns, please open an issue in this repository or contact the project maintainers.
Special thanks to all contributors who have dedicated their time and expertise to make this project possible, and to the organizations that have supported their participation in this important security initiative.
This document will be maintained by the OWASP community and represents a collaborative effort to improve security in AI agent systems.