rename model/D-SA-1-A/index.html => model/D-SA-A-1/index.html (100%)
rename model/D-SA-2-A/index.html => model/D-SA-A-2/index.html (100%)
rename model/D-SA-3-A/index.html => model/D-SA-A-3/index.html (100%)
rename model/D-SA-1-B/index.html => model/D-SA-B-1/index.html (100%)
rename model/D-SA-2-B/index.html => model/D-SA-B-2/index.html (100%)
rename model/D-SA-3-B/index.html => model/D-SA-B-3/index.html (100%)
rename model/D-SR-1-A/index.html => model/D-SR-A-1/index.html (100%)
rename model/D-SR-2-A/index.html => model/D-SR-A-2/index.html (100%)
rename model/D-SR-3-A/index.html => model/D-SR-A-3/index.html (100%)
rename model/D-SR-1-B/index.html => model/D-SR-B-1/index.html (100%)
rename model/D-SR-2-B/index.html => model/D-SR-B-2/index.html (100%)
rename model/D-SR-3-B/index.html => model/D-SR-B-3/index.html (100%)
rename model/D-TA-1-A/index.html => model/D-TA-A-1/index.html (100%)
rename model/D-TA-2-A/index.html => model/D-TA-A-2/index.html (100%)
rename model/D-TA-3-A/index.html => model/D-TA-A-3/index.html (100%)
rename model/D-TA-1-B/index.html => model/D-TA-B-1/index.html (100%)
rename model/D-TA-2-B/index.html => model/D-TA-B-2/index.html (100%)
rename model/D-TA-3-B/index.html => model/D-TA-B-3/index.html (100%)
rename model/G-EG-1-A/index.html => model/G-EG-A-1/index.html (100%)
rename model/G-EG-2-A/index.html => model/G-EG-A-2/index.html (100%)
rename model/G-EG-3-A/index.html => model/G-EG-A-3/index.html (100%)
rename model/G-EG-1-B/index.html => model/G-EG-B-1/index.html (100%)
rename model/G-EG-2-B/index.html => model/G-EG-B-2/index.html (100%)
rename model/G-EG-3-B/index.html => model/G-EG-B-3/index.html (100%)
rename model/G-PC-1-A/index.html => model/G-PC-A-1/index.html (100%)
rename model/G-PC-2-A/index.html => model/G-PC-A-2/index.html (100%)
rename model/G-PC-3-A/index.html => model/G-PC-A-3/index.html (100%)
rename model/G-PC-1-B/index.html => model/G-PC-B-1/index.html (100%)
rename model/G-PC-2-B/index.html => model/G-PC-B-2/index.html (100%)
rename model/G-PC-3-B/index.html => model/G-PC-B-3/index.html (100%)
rename model/G-SM-1-A/index.html => model/G-SM-A-1/index.html (100%)
rename model/G-SM-2-A/index.html => model/G-SM-A-2/index.html (100%)
rename model/G-SM-3-A/index.html => model/G-SM-A-3/index.html (100%)
rename model/G-SM-1-B/index.html => model/G-SM-B-1/index.html (100%)
rename model/G-SM-2-B/index.html => model/G-SM-B-2/index.html (100%)
rename model/G-SM-3-B/index.html => model/G-SM-B-3/index.html (100%)
rename model/I-DM-1-A/index.html => model/I-DM-A-1/index.html (100%)
rename model/I-DM-2-A/index.html => model/I-DM-A-2/index.html (100%)
rename model/I-DM-3-A/index.html => model/I-DM-A-3/index.html (100%)
rename model/I-DM-1-B/index.html => model/I-DM-B-1/index.html (100%)
rename model/I-DM-2-B/index.html => model/I-DM-B-2/index.html (100%)
rename model/I-DM-3-B/index.html => model/I-DM-B-3/index.html (100%)
rename model/I-SB-1-A/index.html => model/I-SB-A-1/index.html (100%)
rename model/I-SB-2-A/index.html => model/I-SB-A-2/index.html (100%)
rename model/I-SB-3-A/index.html => model/I-SB-A-3/index.html (100%)
rename model/I-SB-1-B/index.html => model/I-SB-B-1/index.html (100%)
rename model/I-SB-2-B/index.html => model/I-SB-B-2/index.html (100%)
rename model/I-SB-3-B/index.html => model/I-SB-B-3/index.html (100%)
rename model/I-SD-1-A/index.html => model/I-SD-A-1/index.html (100%)
rename model/I-SD-2-A/index.html => model/I-SD-A-2/index.html (100%)
rename model/I-SD-3-A/index.html => model/I-SD-A-3/index.html (100%)
rename model/I-SD-1-B/index.html => model/I-SD-B-1/index.html (100%)
rename model/I-SD-2-B/index.html => model/I-SD-B-2/index.html (100%)
rename model/I-SD-3-B/index.html => model/I-SD-B-3/index.html (100%)
rename model/O-EM-1-A/index.html => model/O-EM-A-1/index.html (100%)
rename model/O-EM-2-A/index.html => model/O-EM-A-2/index.html (100%)
rename model/O-EM-3-A/index.html => model/O-EM-A-3/index.html (100%)
rename model/O-EM-1-B/index.html => model/O-EM-B-1/index.html (100%)
rename model/O-EM-2-B/index.html => model/O-EM-B-2/index.html (100%)
rename model/O-EM-3-B/index.html => model/O-EM-B-3/index.html (100%)
rename model/O-IM-1-A/index.html => model/O-IM-A-1/index.html (100%)
rename model/O-IM-2-A/index.html => model/O-IM-A-2/index.html (100%)
rename model/O-IM-3-A/index.html => model/O-IM-A-3/index.html (100%)
rename model/O-IM-1-B/index.html => model/O-IM-B-1/index.html (100%)
rename model/O-IM-2-B/index.html => model/O-IM-B-2/index.html (100%)
rename model/O-IM-3-B/index.html => model/O-IM-B-3/index.html (100%)
rename model/O-OM-1-A/index.html => model/O-OM-A-1/index.html (100%)
rename model/O-OM-2-A/index.html => model/O-OM-A-2/index.html (100%)
rename model/O-OM-3-A/index.html => model/O-OM-A-3/index.html (100%)
rename model/O-OM-1-B/index.html => model/O-OM-B-1/index.html (100%)
rename model/O-OM-2-B/index.html => model/O-OM-B-2/index.html (100%)
rename model/O-OM-3-B/index.html => model/O-OM-B-3/index.html (100%)
rename model/V-AA-1-A/index.html => model/V-AA-A-1/index.html (100%)
rename model/V-AA-2-A/index.html => model/V-AA-A-2/index.html (100%)
rename model/V-AA-3-A/index.html => model/V-AA-A-3/index.html (100%)
rename model/V-AA-1-B/index.html => model/V-AA-B-1/index.html (100%)
rename model/V-AA-2-B/index.html => model/V-AA-B-2/index.html (100%)
rename model/V-AA-3-B/index.html => model/V-AA-B-3/index.html (100%)
rename model/V-RT-1-A/index.html => model/V-RT-A-1/index.html (100%)
rename model/V-RT-2-A/index.html => model/V-RT-A-2/index.html (100%)
rename model/V-RT-3-A/index.html => model/V-RT-A-3/index.html (100%)
rename model/V-RT-1-B/index.html => model/V-RT-B-1/index.html (100%)
rename model/V-RT-2-B/index.html => model/V-RT-B-2/index.html (100%)
rename model/V-RT-3-B/index.html => model/V-RT-B-3/index.html (100%)
rename model/V-ST-1-A/index.html => model/V-ST-A-1/index.html (100%)
rename model/V-ST-2-A/index.html => model/V-ST-A-2/index.html (100%)
rename model/V-ST-3-A/index.html => model/V-ST-A-3/index.html (100%)
rename model/V-ST-1-B/index.html => model/V-ST-B-1/index.html (100%)
rename model/V-ST-2-B/index.html => model/V-ST-B-2/index.html (100%)
rename model/V-ST-3-B/index.html => model/V-ST-B-3/index.html (100%)

diff --git a/model/design/secure-architecture/stream-a/index.html b/model/design/secure-architecture/stream-a/index.html
index d83a9da8..2f2d73cc 100644
--- a/model/design/secure-architecture/stream-a/index.html
+++ b/model/design/secure-architecture/stream-a/index.html
@@ -1,13 +1,13 @@
Sets of basic security principles available to product teams
During design, technical staff on the product team use a short checklist of security principles. Typically, security principles include defense in depth, securing the weakest link, use of secure defaults, simplicity in design of security functionality, secure failure, balance of security and usability, running with least privilege, avoidance of security by obscurity, etc.
For perimeter interfaces, the team considers each principle in the context of the overall system and identifies features that can be added to bolster security at each such interface. Limit these additions so that they require only a small amount of extra effort beyond the normal implementation cost of the functional requirements. Note anything larger and schedule it for future releases.
Give each product team security awareness training before this process, and involve more security-savvy staff to aid in making design decisions.
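For illustration only, the agreed checklist can be kept as a lightweight artifact and walked per perimeter interface during design reviews. The sketch below assumes Python; the helper and interface names are invented, and the principle names echo the examples above.

```python
# Hypothetical sketch: track which agreed security principles
# have been discussed for each perimeter interface in a design review.

PRINCIPLES = [
    "defense in depth",
    "securing the weakest link",
    "secure defaults",
    "simplicity of security functionality",
    "secure failure",
    "balance of security and usability",
    "least privilege",
    "no security by obscurity",
]

def review_interface(interface: str, considered: set[str]) -> list[str]:
    """Return the principles not yet considered for this interface."""
    missing = [p for p in PRINCIPLES if p not in considered]
    if missing:
        print(f"{interface}: still to discuss -> {', '.join(missing)}")
    return missing

# Example: a design workshop has covered three principles for the login API.
review_interface("login API", {"secure defaults", "least privilege", "secure failure"})
```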
Do teams use security principles during design?
You have an agreed upon checklist of security principles
You store your checklist in an accessible location
Relevant stakeholders understand security principles
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Reusable security services available for product teams
Identify shared infrastructure or services with security functionality. These typically include single-sign-on services, access control or entitlements services, logging and monitoring services or application-level firewalling. Collect and evaluate reusable systems to assemble a list of such resources and categorize them by the security mechanism they fulfill. Consider each resource in terms of why a product team would want to integrate with it, i.e. the benefits of using the shared resource.
If multiple resources exist in each category, select and standardize on one or more shared services per category. Because future software development will rely on these services, review each thoroughly to ensure understanding of the baseline security posture. For each selected service, create design guidance for product teams to understand how to integrate with the system. Make the guidance available through training, mentorship, guidelines, and standards.
Establish a set of best practices representing sound methods of implementing security functionality. You can research them or purchase them, and it is often more effective if you customize them so they are more specific to your organization. Example patterns include a single-sign-on subsystem, a cross-tier delegation model, a separation-of-duties authorization model, a centralized logging pattern, etc.
These patterns can originate from specific projects or applications, but make sure you share them between different teams across the organization for efficient and consistent application of appropriate security solutions.
To increase adoption of these patterns, link them to the shared security services, or implement them into actual component solutions that can be easily integrated into an application during development. Support the key technologies within the organization, for instance in case of different development stacks. Treat these solutions as actual applications with proper support in case of questions or issues.
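As a minimal sketch of one such component solution (the centralized logging pattern mentioned above), the snippet below wraps Python's standard logging module so every team emits security events in one agreed JSON shape; the field names and event content are illustrative assumptions, not a SAMM prescription.

```python
import json
import logging

class SecurityEventFormatter(logging.Formatter):
    """Render security events as one JSON object per line, so a central
    collector can parse them uniformly across applications."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "app": record.name,
            "severity": record.levelname,
            "event": record.getMessage(),
        })

def get_security_logger(app_name: str) -> logging.Logger:
    """Shared helper each product team calls instead of rolling its own."""
    logger = logging.getLogger(app_name)
    handler = logging.StreamHandler()  # in practice, a handler for the shared service
    handler.setFormatter(SecurityEventFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# Usage: one line per team, one event format for the whole organization.
get_security_logger("payments").warning("failed login for user id 42")
```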
Do you use shared security services during design?
You have a documented list of reusable security services, available to relevant stakeholders
You have reviewed the baseline security posture for each selected service
Your designers are trained to integrate each selected service following available guidance
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Full transparency of quality and usability of centrally provided security solutions
Build a set of reference architectures that select and combine a verified set of security components to ensure a proper security design. Reference platforms have advantages in terms of shortening audit and security-related reviews, increasing efficiency in development, and lowering maintenance overhead. Continuously maintain and improve the reference architecture based on new insights in the organization and within the community. Have architects, senior developers, and other technical stakeholders participate in the design and creation of reference platforms. After creation, these teams provide ongoing support and updates.
Reference architectures may materialize into a set of software libraries and tools upon which project teams build their software. They serve as a starting point that standardizes a configuration-driven, secure-by-default approach. You can bootstrap the framework by selecting a particular project early in the lifecycle and having security-savvy staff work with it to build the security functionality in a generic way, so that it can be extracted from the project and used elsewhere in the organization.
Continuously monitor for weaknesses or gaps in the set of security solutions available in your organization, in the context of discussions on architecture, development, or operations. This serves as input to improve the appropriateness and effectiveness of the reference architectures you have in place.
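A reference architecture library typically ships small secure-by-default building blocks. As a sketch using only the Python standard library (the helper name is hypothetical), a shared component could hand out a hardened TLS context so individual teams never configure TLS themselves:

```python
import ssl

def default_tls_context() -> ssl.SSLContext:
    """Secure-by-default TLS client context from the reference library.

    Certificate and hostname verification stay enabled, and the minimum
    protocol version is pinned, so callers get safe settings for free.
    """
    ctx = ssl.create_default_context()            # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    return ctx

# Usage: teams pass the shared context instead of hand-rolling TLS settings,
# e.g. urllib.request.urlopen(url, context=default_tls_context())
```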
Do you base your design on available reference architectures?
You have one or more approved reference architectures documented and available to stakeholders
You improve the reference architectures continuously based on insights and best practices
You provide a set of components, libraries, and tools to implement each reference architecture
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/design/secure-architecture/stream-b/index.html b/model/design/secure-architecture/stream-b/index.html
index b57e6664..ef078af0 100644
--- a/model/design/secure-architecture/stream-b/index.html
+++ b/model/design/secure-architecture/stream-b/index.html
@@ -1,13 +1,13 @@
Transparency of technologies that introduce security risk
People often take the path of least resistance in developing, deploying or operating a software solution. New technologies are often included when they can facilitate or speed up the effort or enable the solution to scale better. These new technologies might, however, introduce new risks to the organization that you need to manage.
Identify the most important technologies, frameworks, tools and integrations being used for each application. Use the knowledge of the architect to study the development and operating environment as well as artefacts. Then evaluate them for their security quality and raise important findings to be managed.
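One lightweight way to start this inventory is to mine the dependency manifests already present in each repository. The sketch below is an assumption-laden illustration (paths, manifest coverage, and version parsing are deliberately simplified):

```python
import json
from pathlib import Path

def inventory_technologies(repo: Path) -> set[str]:
    """Collect declared dependencies from common manifests in a repository."""
    found: set[str] = set()
    pkg = repo / "package.json"        # JavaScript/TypeScript projects
    if pkg.exists():
        data = json.loads(pkg.read_text())
        found.update(data.get("dependencies", {}))
    reqs = repo / "requirements.txt"   # Python projects
    if reqs.exists():
        for line in reqs.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                # keep just the package name, dropping version pins
                found.add(line.split("==")[0].split(">=")[0].strip())
    return found

# Example: print the technology list for one (hypothetical) application repo.
print(sorted(inventory_technologies(Path("apps/payments"))))
```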
Do you evaluate the security quality of important technologies used for development?
You have a list of the most important technologies used in, or in support of, each application
You identify and track technological risks
You ensure the risks of these technologies are in line with the organizational baseline
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Technologies with appropriate security level available to product teams
Identify commonly used technologies, frameworks, and tools in use across software projects in the organization, focusing on capturing the high-level technologies.
Create a list and share it across the development organization as recommended technologies. When selecting them, consider incident history, track record for responding to vulnerabilities, appropriateness of functionality for the organization, excessive complexity in usage of the third-party component, and sufficient knowledge within the organization.
Senior developers and architects create this list, including input from managers and security auditors. Share this list of recommended components with the development organization. Ultimately, the goal is to provide well-known defaults for project teams. Perform a periodic review of these technologies for security and appropriateness.
Do you have a list of recommended technologies for the organization?
The list is based on technologies used in the software portfolio
Lead architects and developers review and approve the list
You share the list across the organization
You review and update the list at least yearly
No |
Yes, for some of the technology domains |
Yes, for at least half of the technology domains |
Yes, for most or all of the technology domains |
Limited attack surface due to usage of vetted technologies
For all proprietary development (in-house or acquired), impose and monitor the use of standardized technology. Depending on your organization, either implement these restrictions in build or deployment tools, perform after-the-fact automated analysis of application artefacts (e.g., source code, configuration files, or deployment artefacts), or review periodically with a focus on the correct use of these frameworks.
Verify several factors with project teams. Identify use of non-recommended technologies to determine if there are gaps in recommendations versus the organization’s needs. Examine unused or incorrectly used design patterns and reference platform modules to determine if updates are needed. Additionally, implement functionality in the reference platforms as the organization evolves and project teams request it.
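The after-the-fact automated analysis described above can start as a simple diff of each application's dependency inventory against the recommended list. The allow-list and category names below are purely hypothetical:

```python
# Hypothetical allow-list maintained by senior developers and architects.
RECOMMENDED = {
    "web framework": {"django", "flask"},
    "http client": {"requests"},
    "crypto": {"cryptography"},
}

def check_against_recommendations(dependencies: set[str]) -> set[str]:
    """Return dependencies that are not on the recommended list."""
    allowed = set().union(*RECOMMENDED.values())
    return dependencies - allowed

# Example run over one application's inventory.
violations = check_against_recommendations({"flask", "requests", "md5crypt"})
for name in sorted(violations):
    print(f"not on the recommended list, review per policy: {name}")
```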
Do you enforce the use of recommended technologies within the organization?
You monitor applications regularly for the correct use of the recommended technologies
You solve violations against the list according to organizational policies
You take action if the number of violations falls outside the yearly objectives
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/design/security-requirements/stream-a/index.html b/model/design/security-requirements/stream-a/index.html
index 2662d9c2..1f663c42 100644
--- a/model/design/security-requirements/stream-a/index.html
+++ b/model/design/security-requirements/stream-a/index.html
@@ -1,13 +1,13 @@
Understanding of key security requirements during development
Perform a review of the functional requirements of the software project. Identify relevant security requirements (i.e. expectations) for this functionality by reasoning on the desired confidentiality, integrity or availability of the service or data offered by the software project. Requirements state the objective (e.g., “personal data for the registration process should be transferred and stored securely”), but not the actual measure to achieve the objective (e.g., “use TLSv1.2 for secure transfer”).
At the same time, review the functionality from an attacker perspective to understand how it could be misused. This way you can identify extra protective requirements for the software project at hand.
Security objectives can relate to specific security functionality you need to add to the application (e.g., “Identify the user of the application at all times”) or to the overall application quality and behavior (e.g., “Ensure personal data is properly protected in transit”), which does not necessarily lead to new functionality. Follow good practices for writing security requirements. Make them specific, measurable, actionable, relevant, and time-bound (SMART). Beware of adding requirements that are too generic to relate to the application at hand (e.g., “The application should protect against the OWASP Top 10”). While such statements may be true, they don’t add value to the discussion.
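To make the SMART properties concrete, a requirement could be captured as a structured record rather than free text. A minimal sketch follows; the field names and the example values are illustrative, not prescribed by SAMM:

```python
from dataclasses import dataclass

@dataclass
class SecurityRequirement:
    """A security requirement stated as an objective, not a mechanism."""
    objective: str       # specific: what must hold, not how
    measure: str         # measurable: how fulfilment is verified
    owner: str           # actionable: who is responsible
    rationale: str       # relevant: why it matters for this application
    target_release: str  # time-bound: when it must be met

req = SecurityRequirement(
    objective="Personal data for the registration process is transferred "
              "and stored securely",
    measure="Verified in the release security review",
    owner="registration team",
    rationale="Registration handles personal data",
    target_release="2024.2",
)
print(req)
```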
Do project teams specify security requirements during development?
Teams derive security requirements from functional requirements and customer or organization concerns
Security requirements are specific, measurable, and reasonable
Security requirements are in line with the organizational baseline
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Alignment of security requirements with other types of requirements
Security requirements can originate from other sources, including policies and legislation, known problems within the application, and intelligence from metrics and feedback. At this level, achieve a more systematic elicitation of security requirements by analyzing the different sources of such requirements. Ensure that appropriate input is received from these sources to help elicit requirements. For example, organize interviews or brainstorming sessions (e.g., for policy and legislation), or analyze historical logs or vulnerability-tracking systems.
Use a structured notation of security requirements across applications and an appropriate formalism that integrates well with how you specify other (functional) requirements for the project. This could mean, for example, extending analysis documents, writing user stories, etc.
When requirements are specified, it is important to ensure that they are taken into account during product development. Set up a mechanism to stimulate or force project teams to meet these requirements in the product. For example, annotate requirements with priorities, or influence the handling of requirements to enforce a sufficient security appetite (while balancing against other non-functional requirements).
Do you define, structure, and include prioritization in the artifacts of the security requirements gathering process?
Security requirements take into consideration domain specific knowledge when applying policies and guidance to product development
Domain experts are involved in the requirements definition process
You have an agreed upon structured notation for security requirements
Development teams have a security champion dedicated to reviewing security requirements and outcomes
No |
Yes, some of the time |
Yes, at least half of the time |
Yes, most or all of the time |
Efficient and effective handling of security requirements in your organization
Set up a security requirements framework to help projects elicit an appropriate and complete requirements set for their project. This framework considers the different types and sources of requirements. It should be adapted to the organization's habits and culture, and provide an effective methodology and guidance in the elicitation and formation of requirements.
The framework helps project teams increase the efficiency and effectiveness of requirements engineering. It can provide a categorization of common requirements and a number of reusable requirements. Do remember that, while thoughtless copying is ineffective, having potentially relevant requirements to reason about is often productive.
The framework also gives clear guidance on the quality of requirements and formalizes how to describe them. For user stories, for instance, concrete guidance can explain what to describe in the definition of done, definition of ready, story description, and acceptance criteria.
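A minimal sketch of the reusable side of such a framework: a categorized catalog of common requirements that teams filter by the categories relevant to their project. The categories and entries below are assumptions, echoing earlier examples in this practice:

```python
# Hypothetical reusable-requirements catalog, keyed by category.
CATALOG = {
    "authentication": [
        "Identify the user of the application at all times",
    ],
    "data protection": [
        "Ensure personal data is properly protected in transit",
        "Ensure personal data is properly protected at rest",
    ],
    "logging": [
        "Log security-relevant events in the agreed central format",
    ],
}

def candidate_requirements(categories: list[str]) -> list[str]:
    """Return reusable requirements for the categories relevant to a project.
    These are starting points to reason about, not items to copy blindly."""
    return [req for c in categories for req in CATALOG.get(c, [])]

print(candidate_requirements(["authentication", "data protection"]))
```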
Do you use a standard requirements framework to streamline the elicitation of security requirements?
A security requirements framework is available for project teams
The framework is categorized by common requirements and standards-based requirements
The framework gives clear guidance on the quality of requirements and how to describe them
The framework is adaptable to specific business requirements
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/design/security-requirements/stream-b/index.html b/model/design/security-requirements/stream-b/index.html
index 16588d59..a0acff97 100644
--- a/model/design/security-requirements/stream-b/index.html
+++ b/model/design/security-requirements/stream-b/index.html
@@ -1,13 +1,13 @@
Transparency of security practices of your software suppliers
The security competences and habits of the external suppliers involved in the development of your software can have a significant impact on the security posture of the final product. Consequently, it is important to know and evaluate your suppliers on this front.
Carry out a vendor assessment to understand the strengths and weaknesses of your suppliers. Use a basic checklist or conduct interviews to review their typical practices and deliveries. This gives you an idea of how they organize themselves and provides elements for evaluating whether you need to take additional measures to mitigate potential risks. Ideally, speak to different roles in the organization, or even set up a small maturity evaluation to this end. Strong suppliers will run their own software assurance program and will be able to answer most of your questions. If suppliers have weak competences in software security, discuss with them how and to what extent they plan to work on this, and evaluate whether this is enough for your organization. A software supplier might be working on a low-risk project, but this could change.
It is important that your suppliers understand and align with your risk appetite and are able to meet your requirements in that area. Make what you expect from them explicit and discuss it clearly.
Do stakeholders review vendor collaborations for security requirements and methodology?
You consider including specific security requirements, activities, and processes when creating third-party agreements
A vendor questionnaire is available and used to assess the strengths and weaknesses of your suppliers
No |
Yes, some of the time |
Yes, at least half of the time |
Yes, most or all of the time |
Clearly defined security responsibilities of your software suppliers
Increase your confidence in the capability of your suppliers for software security. Discuss concrete responsibilities and expectations from your suppliers and your own organization and establish a contract with the supplier. The responsibilities can be specific quality requirements or particular tasks, and minimal service can be detailed in a Service Level Agreement (SLA). A quality requirement example is that they will deliver software that is protected against the OWASP Top 10, and in case issues are detected, these will be fixed. A task example is that they have to perform continuous static code analysis, or perform an independent penetration test before a major release. The agreement stipulates liabilities and caps in case an important issue arises.
Once you have implemented this for a few suppliers, work towards a standard agreement for suppliers that forms the basis of your negotiations. You can deviate from this standard agreement on a case-by-case basis, but it will help you ensure you do not overlook important topics.
Do vendors meet the security responsibilities and quality measures of service level agreements defined by the organization?
You discuss security requirements with the vendor when creating vendor agreements
Vendor agreements provide specific guidance on security defect remediation within an agreed upon timeframe
The organization has a templated agreement of responsibilities and service levels for key vendor security processes
You measure key performance indicators
No |
Yes, some of the time |
Yes, at least half of the time |
Yes, most or all of the time |
Alignment of software development practices with suppliers to limit security risks
The best way to minimize the risk of issues in software is for the different parties to align maximally and integrate closely. From a process perspective, this means using similar development paradigms and introducing regular milestones to ensure proper alignment and qualitative progress. From a tools perspective, this might mean using similar build, verification, and deployment environments, and sharing other supporting tools (e.g., requirements or architecture tools, or code repositories).
In case suppliers cannot meet the objectives that you have set, implement compensating controls so that, overall, you meet your objectives. Execute extra activities (e.g., threat modelling before starting the actual implementation cycle) or implement extra tooling (e.g., 3rd party library analysis at solution intake). The more suppliers deviate from your requirements, the more work will be required to compensate.
Are vendors aligned with standard security controls and software development tools and processes that the organization utilizes?
The vendor has a secure SDLC that includes secure build, secure deployment, defect management, and incident management, meets the security expectations of your organization, and is able to demonstrate operating effectiveness of practices.
You verify the solution meets quality and security objectives before every major release
When standard verification processes are not available, you use compensating controls such as software composition analysis and independent penetration testing
No |
Yes, some of the time |
Yes, at least half of the time |
Yes, most or all of the time |
diff --git a/model/design/threat-assessment/stream-a/index.html b/model/design/threat-assessment/stream-a/index.html
index c3525d3f..8a852080 100644
--- a/model/design/threat-assessment/stream-a/index.html
+++ b/model/design/threat-assessment/stream-a/index.html
@@ -1,13 +1,13 @@
Ability to classify applications according to risk
Use a simple method to evaluate the risk of each application, estimating the potential business impact that it poses for the organization in case of an attack. To achieve this, evaluate the impact of a breach in the confidentiality, integrity, and availability of the data or service. Consider using a set of 5-10 questions to understand important application characteristics, such as whether the application processes financial data, whether it is internet facing, or whether privacy-related data is involved. The application risk profile tells you whether these factors are applicable and whether they could significantly impact the organization.
Next, use a scheme to classify applications according to this risk. A simple, qualitative scheme (e.g., high/medium/low) that translates these characteristics into a value is often effective. It is important to use these values to represent and compare the risk of different applications against each other. Mature, highly risk-driven organizations might use more quantitative risk schemes. Don’t invent a new risk scheme if your organization already has one that works well.
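As a sketch, the question set and the qualitative scheme can be encoded directly. The questions echo the examples above, while the thresholds are arbitrary assumptions:

```python
QUESTIONS = [
    "Does the application process financial data?",
    "Is the application internet facing?",
    "Is privacy-related data involved?",
]

def classify(answers: dict[str, bool]) -> str:
    """Map yes/no answers to a simple high/medium/low risk class."""
    score = sum(1 for q in QUESTIONS if answers.get(q, False))
    if score >= 2:
        return "high"
    if score == 1:
        return "medium"
    return "low"

print(classify({
    "Does the application process financial data?": True,
    "Is the application internet facing?": True,
    "Is privacy-related data involved?": False,
}))  # -> high
```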
Do you classify applications according to business risk based on a simple and predefined set of questions?
An agreed-upon risk classification exists
The application team understands the risk classification
The risk classification covers critical aspects of business risks the organization is facing
The organization has an inventory for the applications in scope
No |
Yes, some of them |
Yes, at least half of them |
Yes, most or all of them |
Solid understanding of the risk level of your application portfolio
The goal of this activity is to thoroughly understand the risk level of all applications within the organization, to focus the effort of your software assurance activities where it really matters.
From a risk evaluation perspective, the basic set of questions is not enough to thoroughly evaluate the risk of all applications. Create an extensive and standardized way to evaluate application risk, among other things via its impact on information security (confidentiality, integrity, and availability of data). Next to security, also evaluate the privacy risk of the application. Understand the data that the application processes and what potential privacy violations are relevant. Finally, study the impact this application has on other applications within the organization (e.g., the application might be modifying data that was considered read-only in another context). Evaluate all applications within the organization, including all existing and legacy ones.
Leverage business impact analysis to quantify and classify application risk. A simple qualitative scheme (such as high/medium/low) is not enough to effectively manage and compare applications on an enterprise-wide level.
Based on this input, Security Officers leverage the classification to define each application's risk profile, build a centralized inventory of risk profiles, and manage accountability. This inventory gives Product Owners, Managers, and other organizational stakeholders an aligned view of the risk level of an application, in order to assign appropriate priority to security-related activities.
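A quantified scheme could, for example, score the impact of a breach of confidentiality, integrity, availability, and privacy separately and weight them. The scale and weights below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class RiskProfile:
    """Quantified application risk profile for the central inventory."""
    application: str
    confidentiality: int  # business impact of a breach, 0 (none) to 5 (severe)
    integrity: int
    availability: int
    privacy: int          # impact of a privacy violation, same scale

    def score(self) -> int:
        # Illustrative weights; a real scheme follows the organizational
        # risk standard.
        return (3 * self.confidentiality + 3 * self.integrity
                + 2 * self.availability + 3 * self.privacy)

inventory = [
    RiskProfile("payments", 5, 5, 4, 4),
    RiskProfile("intranet wiki", 2, 2, 1, 1),
]
for profile in sorted(inventory, key=lambda p: p.score(), reverse=True):
    print(profile.application, profile.score())
```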
Do you use centralized and quantified application risk profiles to evaluate business risk?
The application risk profile is in line with the organizational risk standard
The application risk profile covers impact to security and privacy
You validate the quality of the risk profile manually and/or automatically
The application risk profiles are stored in a central inventory
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Timely update of the application classification in case of changes
An organization's application portfolio changes, as do the conditions and constraints in which each application lives (e.g., driven by the company strategy). Periodically review the risk inventory to ensure the correctness of the risk evaluations of the different applications.
Have a periodic review at an enterprise-wide level. Also, as your enterprise matures in software assurance, stimulate teams to continuously question which changes in conditions might impact the risk profile. For instance, an internal application might become exposed to the internet by a business decision. This should trigger the teams to rerun the risk evaluation and update the application risk profile accordingly.
In a mature implementation of this practice, train and continuously update teams on lessons learned and best practices from these risk evaluations. This leads to a better execution and a more accurate representation of the application risk profile.
Do you regularly review and update the risk profiles for your applications?
The organizational risk standard considers historical feedback to improve the evaluation method
Significant changes in the application or business context trigger a review of the relevant risk profiles
No |
Yes, sporadically |
Yes, upon change of the application |
Yes, at least annually |
diff --git a/model/design/threat-assessment/stream-b/index.html b/model/design/threat-assessment/stream-b/index.html
index 3992725b..a21fc0ef 100644
--- a/model/design/threat-assessment/stream-b/index.html
+++ b/model/design/threat-assessment/stream-b/index.html
@@ -1,13 +1,13 @@
Identification of architectural design flaws in your applications
Threat modeling is a structured activity for identifying, evaluating, and managing system threats, architectural design flaws, and recommended security mitigations. It is typically done as part of the design phase or as part of a security assessment.
Threat modeling is a team exercise, including product owners, architects, security champions, and security testers. At this maturity level, expose teams and stakeholders to threat modeling to increase security awareness and to create a shared vision on the security of the system.
At maturity level 1, you perform threat modeling ad hoc for high-risk applications and use simple threat checklists, such as STRIDE. Avoid lengthy workshops and overly detailed lists of low-relevance threats. Perform threat modeling iteratively to align with more iterative development paradigms. If you add new functionality to an existing application, look only into the newly added functions instead of trying to cover the entire scope. A good starting point is the existing diagrams that you annotate during discussion workshops. Always persist the outcome of a threat modeling discussion for later use.
Your most important tool to start threat modeling is a whiteboard, smartboard, or a piece of paper. Aim for security awareness, a simple process, and actionable outcomes that you agree upon with your team.
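A STRIDE pass over an annotated diagram can be as plain as one question per element and category. A minimal sketch follows; the element names are hypothetical:

```python
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

def threat_prompts(elements: list[str]) -> list[str]:
    """Generate one discussion prompt per diagram element and STRIDE
    category to structure the workshop; the answers are what you persist."""
    return [f"{name}: could {threat.lower()} occur here?"
            for name in elements
            for threat in STRIDE.values()]

for prompt in threat_prompts(["browser -> API gateway", "orders database"]):
    print(prompt)
```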
Do you identify and manage architectural design flaws with threat modeling?
You perform threat modeling for high-risk applications
You use simple threat checklists, such as STRIDE
You persist the outcome of a threat model for later use
No |
Yes, some of them |
Yes, at least half of them |
Yes, most or all of them |
Clear expectations of the quality of threat modeling activities
Use a standardized threat modeling methodology for your organization and align it with your application risk levels. Think about ways to support the scaling of threat modeling throughout the organization.
Train your architects, security champions, and other stakeholders on how to do practical threat modeling. Threat modeling requires understanding, clear playbooks and templates, organization-specific examples, and experience, which is hard to automate.
Your threat modeling methodology includes at least diagramming, threat identification, design flaw mitigations, and how to validate your threat model artifacts. Your threat model diagram allows a detailed understanding of the environment and the mechanics of the application. You discover threats to your application with checklists, such as STRIDE or more organization-specific threats. For identified design flaws (ranked according to risk for your organization), you add mitigating controls to support stakeholders in dealing with particular threats. Define what triggers updating a threat model, for example, a technology change or deployment of an application in a new environment.
Feed the output of threat modeling to the defect management process for adequate follow-up. Capture the threat modeling artifacts with tools used by your application teams.
Do you use a standard methodology, aligned with your application risk levels?
You train your architects, security champions, and other stakeholders on how to do practical threat modeling
Your threat modeling methodology includes at least diagramming, threat identification, design flaw mitigations, and how to validate your threat model artifacts
Changes in the application or business context trigger a review of the relevant threat models
You capture the threat modeling artifacts with tools used by your application teams
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Assurance of continuous improvement of threat modeling activities
Threat modeling is integrated into your SDLC and has become part of the developer security culture. Reusable risk patterns, comprising related threat libraries, design flaws, and security mitigations, are created and improved, based on the organization’s threat models. You regularly (e.g., yearly) review the existing threat models to verify that no new threats are relevant for your applications.
You optimize your threat modeling methodology. You capture lessons learned from threat models and use these to improve your threat modeling methodology. You review the threat categories relevant to your organization and update your methodology appropriately. From time to time, you evaluate the quality of your threat models independently.
You automate parts of your threat modeling process with threat modeling tools. You integrate your threat modeling tools with other security tools, such as security verification tools and risk tracking tools. You consider “threat modeling as code” practices to integrate threat modeling artifacts with application code.
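As a sketch of the “threat modeling as code” idea, a threat model can live as plain data next to the application source, with a pytest-style check that fails when a dataflow has never been assessed. The structure below is entirely an assumption:

```python
# threat_model.py -- versioned together with the application code.
THREAT_MODEL = {
    "dataflows": {
        "browser -> API gateway": {
            "threats": [
                {"id": "TM-1", "category": "Spoofing",
                 "mitigation": "mutual TLS", "status": "mitigated"},
            ],
        },
        "API gateway -> orders database": {
            "threats": [],  # not yet assessed
        },
    },
}

def test_every_dataflow_has_assessed_threats():
    """CI check: flag dataflows whose threats were never assessed."""
    unassessed = [name for name, flow in THREAT_MODEL["dataflows"].items()
                  if not flow["threats"]]
    assert not unassessed, f"dataflows without assessed threats: {unassessed}"
```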
Do you regularly review and update the threat modeling methodology for your applications?
The threat model methodology considers historical feedback for improvement
You regularly (e.g., yearly) review the existing threat models to verify that no new threats are relevant for your applications
You automate parts of your threat modeling process with threat modeling tools
No |
Yes, but review is ad-hoc |
Yes, we review it at regular times |
Yes, we review it at least annually |
diff --git a/model/governance/education-and-guidance/stream-a/index.html b/model/governance/education-and-guidance/stream-a/index.html
index 9d539a94..339f9567 100644
--- a/model/governance/education-and-guidance/stream-a/index.html
+++ b/model/governance/education-and-guidance/stream-a/index.html
@@ -1,13 +1,13 @@
Basic security awareness for all relevant employees
Conduct security awareness training for all roles currently involved in the management, development, testing, or auditing of the software. The goal is to increase the awareness of application security threats and risks, security best practices, and secure software design principles. Develop training internally or procure it externally. Ideally, deliver training in person so participants can have discussions as a team, but Computer-Based Training (CBT) is also an option.
Course content should include a range of topics relevant to application security and privacy, while remaining accessible to a non-technical audience. Suitable concepts are secure design principles including Least Privilege, Defense-in-Depth, Fail Secure (Safe), Complete Mediation, Session Management, Open Design, and Psychological Acceptability. Additionally, the training should include references to any organization-wide standards, policies, and procedures defined to improve application security. The OWASP Top 10 vulnerabilities should be covered at a high level.
Training is mandatory for all employees and contractors involved with software development and includes an auditable sign-off to demonstrate compliance. Consider incorporating innovative ways of delivery (such as gamification) to maximize its effectiveness and combat desensitization.
Do you require employees involved with application development to take SDLC training?
Training is repeatable, consistent, and available to anyone involved with the software development lifecycle
Training includes relevant content from the latest OWASP Top 10 and includes concepts such as Least Privilege, Defense-in-Depth, Fail Secure (Safe), Complete Mediation, Session Management, Open Design, and Psychological Acceptability
Training requires a sign-off or an acknowledgement from attendees
You have reviewed the training content within the last 12 months, and have completed any required updates
All new covered staff are required to complete training during their onboarding process
Existing covered staff are required to complete training when content is added/revised, or complete refresher training at least every 24 months, whichever comes first
No |
Yes, some of them |
Yes, at least half of them |
Yes, most or all of them |
Relevant employee roles trained according to their specific role
Conduct instructor-led or CBT security training specific to the organization’s roles and technologies, starting with the core development team. The organization customizes training for product managers, software developers, testers, and security auditors, based on each group’s technical needs.
Include all training content from the Maturity Level 1 activities of this stream and additional role-specific and technology-specific content. Eliminate unnecessary aspects of the training.
Ideally, identify a subject-matter expert in each technology to assist with procuring or developing the training content and updating it regularly. The training consists of demonstrations of vulnerability exploitation using intentionally weakened applications, such as WebGoat or Juice Shop. Include results of previous penetration tests as examples of vulnerabilities and implemented remediation strategies. Ask a penetration tester to assist with developing examples of vulnerability exploitation demonstrations.
Training is mandatory for all employees and contractors involved with software development, and includes an auditable sign-off to demonstrate compliance. Whenever possible, training should also include a test to ensure understanding, not just compliance. Update and deliver training annually to include changes in the organization, technology, and trends. Poll training participants to evaluate the quality and relevance of the training. Gather suggestions of other information relevant to their work or environments.
Is training customized for individual roles such as developers, testers, or security champions?
Training includes all topics from maturity level 1, and adds more specific tools, techniques, and demonstrations
Training is mandatory for all employees and contractors
Training includes input from in-house SMEs and trainees
Training includes demonstrations of tools and techniques developed in-house
You use feedback to enhance and make future training more relevant
No |
Yes, for some of the training |
Yes, for at least half of the training |
Yes, for most or all of the training |
Adequate security knowledge of all employees ensured prior to working on critical tasks
Implement a formal training program requiring anyone involved with the software development lifecycle to complete appropriate role- and technology-specific training as part of the onboarding process. Based on the criticality of the application and the user's role, consider restricting access until the onboarding training has been completed. While the organization may source some modules externally, the program is facilitated and managed in-house and includes content specific to the organization, going beyond general security best practices. The program has a defined curriculum, checks participation, and tests understanding and competence. The training consists of a combination of industry best practices and the organization's internal standards, including training on specific systems used by the organization.
In addition to issues directly related to security, the organization includes other standards in the program, such as code complexity, code documentation, naming conventions, and other process-related disciplines. This training minimizes issues resulting from employees following practices brought in from outside the organization and ensures continuity in the style and competency of the code.
To facilitate progress monitoring and successful completion of each training module, the organization has a learning management platform or another centralized portal with similar functionality. Employees can monitor their progress and have access to all training resources even after they complete the initial training.
Review issues resulting from employees not following established standards, policies, procedures, or security best practices at least annually to gauge the effectiveness of the training and ensure it covers all issues relevant to the organization. Update the training periodically and train employees on any changes and most prevalent security deficiencies.
Have you implemented a Learning Management System or equivalent to track employee training and certification processes?
A Learning Management System (LMS) is used to track trainings and certifications
Training is based on internal standards, policies, and procedures
You use certification programs or attendance records to determine access to development systems and resources
No |
Yes, for some of the training |
Yes, for at least half of the training |
Yes, for most or all of the training |
diff --git a/model/governance/education-and-guidance/stream-b/index.html b/model/governance/education-and-guidance/stream-b/index.html
index adf2258a..750efab0 100644
--- a/model/governance/education-and-guidance/stream-b/index.html
+++ b/model/governance/education-and-guidance/stream-b/index.html
@@ -1,13 +1,13 @@
Basic embedding of security in the development organization
Implement a program where each software development team has a member considered a “Security Champion” who is the liaison between Information Security and developers. Depending on the size and structure of the team the “Security Champion” may be a software developer, tester, or a product manager. The “Security Champion” has a set number of hours per week for Information Security related activities. They participate in periodic briefings to increase awareness and expertise in different security disciplines. “Security Champions” have additional training to help develop these roles as Software Security subject-matter experts. You may need to customize the way you create and support “Security Champions” for cultural reasons.
The goals of the position are to increase effectiveness and efficiency of application security and compliance and to strengthen the relationship between various teams and Information Security. To achieve these objectives, “Security Champions” assist with researching, verifying, and prioritizing security and compliance related software defects. They are involved in all Risk Assessments, Threat Assessments, and Architectural Reviews to help identify opportunities to remediate security defects by making the architecture of the application more resilient and reducing the attack surface.
In addition to assisting Information Security, “Security Champions” provide periodic reviews of all security-related issues for the project team so everyone is aware of the problems and any current and future remediation efforts. These reviews are leveraged to help brainstorm solutions to more complex problems by engaging the entire development team.
Have you identified a Security Champion for each development team?
Security Champions receive appropriate training
Application Security and Development teams receive periodic briefings from Security Champions on the overall status of security initiatives and fixes
The Security Champion reviews the results of external testing before adding them to the application backlog
No |
Yes, for some teams |
Yes, for at least half of the teams |
Yes, for most or all of the teams |
Specific security best practices tailored to the organization
The organization implements a formal secure coding center of excellence, with architects and senior developers representing the different business units and technology stacks. The team has an official charter and defines standards and best practices to improve software development practices. The goal is to mitigate the risk that the velocity of change in technology, programming languages, and development frameworks and libraries makes it difficult for Information Security professionals to be fully informed of all the technical nuances that impact security. Even developers often struggle to keep up with all the changes and new tools intended to make software development faster, better, and safer.
This ensures all current programming efforts follow industry best practices and that the organization's development and implementation standards include all critical configuration settings. It helps identify, train, and support “Product Champions”, responsible for assisting different teams with implementing tools that automate, streamline, or improve various aspects of the SDLC. It identifies development teams with higher maturity levels within their SDLC, along with the practices and tools that enable these achievements, with the goal of replicating them in other teams.
The group provides subject-matter expertise, helping information security teams evaluate tools and solutions to improve application security and ensuring these tools are not only useful but also compatible with the way different teams develop applications. Teams looking to make significant architectural changes to their software consult with this group to avoid adversely impacting the SDLC or established security controls.
Does the organization have a Secure Software Center of Excellence (SSCE)?
The SSCE has a charter defining its role in the organization
Development teams review all significant architectural changes with the SSCE
The SSCE publishes SDLC standards and guidelines related to Application Security
Product Champions are responsible for promoting the use of specific security tools
No |
Yes, we started implementing it |
Yes, for part of the organization |
Yes, for the entire organization |
Collective development of security know-how among all product teams
Security is the responsibility of all employees, not just the Information Security team. Deploy communication and knowledge sharing platforms to help developers build communities around different technologies, tools, and programming languages. In these communities employees share information, discuss challenges with other developers, and search the knowledge base for answers to previously discussed issues.
Form communities around roles and responsibilities. Enable developers and engineers from different teams and business units to communicate freely so they can benefit from each other’s expertise. Encourage participation, set up a program to promote those who help the most people as thought leaders, and have management recognize them. In addition to improving application security, this platform may help identify future members of the Secure Software Center of Excellence, or ‘Security Champions’ based on their expertise and willingness to help others.
The Secure Software Center of Excellence and Application Security teams review the information portal regularly for insights into the new and upcoming technologies, as well as opportunities to assist the development community with new initiatives, tools, programs, and training resources. Use the portal to disseminate information about new standards, tools, and resources to all developers for the continued improvement of SDLC maturity and application security.
Is there a centralized portal where developers and application security professionals from different teams and business units are able to communicate and share information?
The organization promotes use of a single portal across different teams and business units
The portal is used for timely information such as notification of security incidents, tool updates, architectural standard changes, and other related announcements
The portal is widely recognized by developers and architects as the centralized repository of organization-specific application security information
All content is considered persistent and searchable
The portal provides access to application-specific security metrics
No |
Yes, we started implementing it |
Yes, for part of the organization |
Yes, for the entire organization |
diff --git a/model/governance/policy-and-compliance/stream-a/index.html b/model/governance/policy-and-compliance/stream-a/index.html index 4ca5ac85..cd043126 100644 --- a/model/governance/policy-and-compliance/stream-a/index.html +++ b/model/governance/policy-and-compliance/stream-a/index.html @@ -1,13 +1,13 @@ -
Clear expectation of minimum security level in the organization
Develop a library of policies and standards to govern all aspects of software development in the organization. Base policies and standards on existing industry standards appropriate for the organization's industry. Given the full range of technology-specific limitations and best practices, review proposed standards with the various product teams. With the overarching objective of increasing the security of the applications and computing infrastructure, invite product teams to offer feedback on any aspects of the standards that would not be feasible or cost-effective to implement, as well as on opportunities for the standards to go further with little additional effort from the product teams.
For policies, emphasize high-level definitions and aspects of application security that do not depend on specific technology or hosting environment. Focus on broader objectives of the organization to protect the integrity of its computing environment, safety and privacy of the data, and maturity of the software development life-cycles. For larger organizations, policies may qualify specific requirements based on data classification or application functionality, but should not be detailed enough to offer technology-specific guidance.
For standards, incorporate requirements set forth by policies, and focus on technology-specific implementation guidance intended to capture and take advantage of the security features of different programming languages and frameworks. Standards require input from senior developers and architects considered experts in the various technologies in use by the organization. Create them in a format that allows for periodic updates. Label or tag individual requirements with the corresponding policy or third-party requirement, to make maintenance and audits easier and more efficient.
Do you have and apply a common set of policies and standards throughout your organization?
You have adapted existing standards appropriate for the organization's industry to account for domain-specific considerations
Your standards are aligned with your policies and incorporate technology-specific implementation guidance
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Common understanding of how to reach compliance with security policies for product teams
To assist with the ongoing implementation and verification of compliance with policies and standards, develop application security requirements and appropriate test scripts related to each applicable requirement. Organize these documents into libraries and make them available to all application teams in the formats most conducive for inclusion into each application. Clearly label the documents and link them to the policies and standards they represent, to assist with ongoing updates and maintenance. Version policies and standards and include detailed change logs with each iterative update to make ongoing inclusion into different products' SDLC easier.
Write application security requirements in a format consistent with the existing requirements management processes. You may need more than one version, catering to different development methodologies or technologies. The goal is to make it easy for various product teams to incorporate policies and standards into their existing development life-cycles with minimal interpretation of requirements.
Test scripts help reinforce application security requirements through clear expectations of application functionality, and guide automated or manual testing efforts that may already be part of the development process. These efforts not only help each team establish the current state of compliance with existing policies and standards, but also ensure compliance as applications continue to change.
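For illustration, a single policy requirement can often be expressed directly as an automated test script. The sketch below follows pytest conventions in Python; the requirement ID, target URL, and header list are hypothetical placeholders, not part of any published standard, and it assumes the requests package is available.

```python
# Hypothetical test script tied to a policy requirement, e.g.
# "POL-7.2: all HTTP responses must include baseline security headers".
# A sketch only -- adapt the URL and header list to your own standard.
import requests

REQUIRED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
]

def test_baseline_security_headers():
    response = requests.get("https://app.example.com/", timeout=10)
    # Header lookups on the response are case-insensitive.
    missing = [h for h in REQUIRED_HEADERS if h not in response.headers]
    assert not missing, f"POL-7.2 violated, missing headers: {missing}"
```

Linking the script to the policy identifier in both the comment and the assertion message keeps the maintenance trail described above intact.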
Do you publish the organization's policies as test scripts or run-books for easy interpretation by development teams?
You create verification checklists and test scripts where applicable, aligned with the policy's requirements and the implementation guidance in the associated standards
You create versions adapted to each development methodology and technology the organization uses
No |
Yes, some content |
Yes, at least half of the content |
Yes, most or all of the content |
Understanding of your organization’s compliance with policies and standards
Develop a program to measure each application's compliance with existing policies and standards. Mandatory requirements should be clearly motivated and reported on consistently across all teams. Whenever possible, tie compliance status into automated testing and report with each version. Compliance reporting includes the version of policies and standards and appropriate code coverage factors.
Encourage non-compliant teams to review available resources such as security requirements and test scripts, to ensure non-compliance is not a result of inadequate guidance. Forward issues resulting from insufficient guidance to the teams responsible for publishing application requirements and test scripts, to include them in the future releases. Escalate issues resulting from the inability to meet policies and standards to teams that handle application security risks.
Do you regularly report on policy and standard compliance, and use that information to guide compliance improvement efforts?
You have procedures (automated, if possible) to regularly generate compliance reports
You deliver compliance reports to all relevant stakeholders
Stakeholders use the reported compliance status information to identify areas for improvement
No |
Yes, but reporting is ad-hoc |
Yes, we report at regular times |
Yes, we report at least annually |
diff --git a/model/governance/policy-and-compliance/stream-b/index.html b/model/governance/policy-and-compliance/stream-b/index.html index 1e46159b..c0a4bdd6 100644 --- a/model/governance/policy-and-compliance/stream-b/index.html +++ b/model/governance/policy-and-compliance/stream-b/index.html @@ -1,13 +1,13 @@ -
Security policies and standards aligned with external compliance drivers
Create a comprehensive list of all compliance requirements, including any triggers that could help determine which applications are in scope. Compliance requirements may be in scope based on factors such as geographic location, types of data, or contractual obligations with clients or business partners. Review each identified compliance requirement with the appropriate subject-matter experts and legal counsel, to ensure the obligation is understood. Since many compliance obligations vary in applicability based on how data is processed, stored, or transmitted across the computing environment, compliance drivers should always indicate opportunities for lowering the overall compliance burden by changing how the data is handled.
Evaluate publishing a compliance matrix to help identify which factors could put an application in scope for a specific regulatory requirement. Have the matrix indicate which compliance requirements are applicable at the organization level and do not depend on individual applications. The matrix provides at least a basic understanding of useful compliance requirements to review obligations around different applications.
Since many compliance standards focus on security best practices, many compliance requirements may already be part of the Policy and Standards library published by the organization. Therefore, once you review compliance requirements, map them to any applicable existing policies and standards. Whenever there are discrepancies, update the policies and standards to include organization-wide compliance requirements. Then, begin creating compliance-specific standards applicable only to individual compliance requirements. The goal is to have a compliance matrix that indicates which policies and standards contain more detailed information about compliance requirements, and to ensure individual policies and standards reference the applicable compliance requirements.
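As a sketch, such a compliance matrix can start as a simple mapping from scoping triggers to obligations; the triggers and the obligation list below are illustrative, not exhaustive.

```python
# Illustrative compliance matrix: scoping triggers mapped to obligations.
COMPLIANCE_MATRIX = {
    "handles_cardholder_data": "PCI DSS",
    "processes_eu_personal_data": "GDPR",
    "stores_health_records": "HIPAA",
}

def obligations_in_scope(app_attributes: set[str]) -> set[str]:
    """Return the obligations triggered by an application's attributes."""
    return {obligation for trigger, obligation in COMPLIANCE_MATRIX.items()
            if trigger in app_attributes}

print(obligations_in_scope({"processes_eu_personal_data", "public_internet"}))
# -> {'GDPR'}
```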
Do you have a complete picture of your external compliance obligations?
You have identified all sources of external compliance obligations
You have captured and reconciled compliance obligations from all sources
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Common understanding of how to reach compliance with external compliance drivers for product teams
Develop a library of application requirements and test scripts to establish and verify regulatory compliance of applications. Some of these are tied to individual compliance requirements like PCI or GDPR, while others are more general in nature and address global compliance requirements such as ISO. The library is available to all application development teams. It includes guidance for determining all applicable requirements including considerations for reducing the compliance burden and scope. Implement a process to periodically re-assess each application’s compliance requirements. Re-assessment includes reviewing all application functionality and opportunities to reduce scope to lower the overall cost of compliance.
Requirements include enough information for developers to understand the functional and non-functional requirements of the different compliance obligations. They include references to policies and standards, and provide explicit references to regulations. If there are questions about the implementation of a particular requirement, the original text of the regulation can help interpret the intent more accurately. Each requirement includes a set of test scripts for verifying compliance. In addition to assisting QA with compliance verification, these further clarify compliance requirements for developers and ensure the process of achieving compliance is fully transparent. Requirements have a format that allows importing them into individual requirements repositories.
Do you have a standard set of security requirements and verification procedures addressing the organization's external compliance obligations?
You map each external compliance obligation to a well-defined set of application requirements
You define verification procedures, including automated tests, to verify compliance-related requirements
No |
Yes, for some obligations |
Yes, for at least half of the obligations |
Yes, for most or all of the obligations |
Understanding of your organization’s compliance with external compliance drivers
Develop a program for measuring and reporting on the status of compliance across different applications. Application requirements and test scripts help determine the status of compliance. Leverage test automation to promptly detect compliance regressions in frequently updated applications and to ensure compliance is maintained across the different application versions. Whenever fully automated testing is not possible, QA, Internal Audit, or Information Security teams assess compliance periodically through a combination of manual testing and interviews.
While full compliance is always the ultimate goal, include tracking remediation actions and periodic updates in the program. Review compliance remediation activities periodically to check teams are making appropriate progress, and that remediation strategies will be effective in achieving compliance. To further improve the process, develop a series of standard reports and compliance scorecards. These help individual teams understand the current state of compliance, and the organization manage assistance for remediating compliance gaps more effectively.
Review compliance gaps requiring significant expenses or development with the subject-matter experts, and compare them against the cost of reducing the application's functionality, minimizing scope, or eliminating the compliance requirement. Long-term compliance gaps require management approval and a formal compliance risk acceptance, so they receive appropriate attention and scrutiny from the organization's leadership.
Do you regularly report on adherence to external compliance obligations and use that information to guide efforts to close compliance gaps?
You have established, well-defined compliance metrics
You measure and report on applications' compliance metrics regularly
Stakeholders use the reported compliance status information to identify compliance gaps and prioritize gap remediation efforts
No |
Yes, but reporting is ad-hoc |
Yes, we report at regular times |
Yes, we report at least annually |
diff --git a/model/governance/strategy-and-metrics/stream-a/index.html b/model/governance/strategy-and-metrics/stream-a/index.html index e2b6744c..e63190d0 100644 --- a/model/governance/strategy-and-metrics/stream-a/index.html +++ b/model/governance/strategy-and-metrics/stream-a/index.html @@ -1,13 +1,13 @@ -
Common understanding of your organization’s security posture
Understand, based on application risk exposure, what threats exist or may exist, as well as how tolerant executive leadership is of these risks. This understanding is a key component of determining software security assurance priorities. To ascertain these threats, interview business owners and stakeholders and document drivers specific to industries where the organization operates as well as drivers specific to the organization. Gathered information includes worst-case scenarios that could impact the organization, as well as opportunities where an optimized software development lifecycle and more secure applications could provide a market-differentiator or create additional opportunities.
Gathered information provides a baseline for the organization to develop and promote its application security program. Items in the program are prioritized to address threats and opportunities most important to the organization. The baseline is split into several risk factors and drivers linked directly to the organization’s priorities and used to help build a risk profile of each custom-developed application by documenting how they can impact the organization if they are compromised.
The baseline and individual risk factors should be published and made available to application development teams, to make the process of creating application risk profiles more transparent and to incorporate the organization's priorities into the program. Additionally, these goals provide a set of objectives used to ensure all application security program enhancements directly support the organization's current and future needs.
Do you understand the enterprise-wide risk appetite for your applications?
You capture the risk appetite of your organization's executive leadership
The organization's leadership vets and approves the set of risks
You identify the main business and technical threats to your assets and data
You document risks and store them in an accessible location
No |
Yes, it covers general risks |
Yes, it covers organization-specific risks |
Yes, it covers risks and opportunities |
Available and agreed upon roadmap of your AppSec program
Based on the magnitude of assets, threats, and risk tolerance, develop a security strategic plan and budget to address business priorities around application security. The plan covers 1 to 3 years and includes milestones consistent with the organization’s business drivers and risks. It provides tactical and strategic initiatives and follows a roadmap that makes its alignment with business priorities and needs visible.
In the roadmap, reach a balance between changes requiring financial expenditures, changes of processes and procedures, and changes impacting the organization’s culture. This balance helps accomplish multiple milestones concurrently and without overloading or exhausting available resources or development teams. The milestones are frequent enough to help monitor program success and trigger timely roadmap adjustments.
For the program to be successful, the application security team obtains buy-in from the organization’s stakeholders and application development teams. A published plan is available to anyone who is required to support or participate in its implementation.
Do you have a strategic plan for application security and use it to make decisions?
The plan reflects the organization's business priorities and risk appetite
The plan includes measurable milestones and a budget
The plan is consistent with the organization's business drivers and risks
The plan lays out a roadmap for strategic and tactical initiatives
You have buy-in from stakeholders, including development teams
No |
Yes, we review it annually |
Yes, we consult the plan before making significant decisions |
Yes, we consult the plan often, and it is aligned with our application security strategy |
Continuous AppSec program alignment with the organization’s business goals
You review the application security plan periodically for ongoing applicability and support of the organization’s evolving needs and future growth. To do this, you repeat the steps from the first two maturity levels of this Security Practice at least annually. The goal is for the plan to always support the current and future needs of the organization, which ensures the program is aligned with the business.
In addition to reviewing the business drivers, the organization closely monitors the success of the implementation of each of the roadmap milestones. You evaluate the success of the milestones based on a wide range of criteria, including completeness and efficiency of the implementation, budget considerations, and any cultural impacts or changes resulting from the initiative. You review missed or unsatisfactory milestones and evaluate possible changes to the overall program.
The organization develops dashboards and measurements for management and teams responsible for software development to monitor the implementation of the roadmap. These dashboards are detailed enough to identify individual projects and initiatives and provide a clear understanding of whether the program is successful and aligned with the organization’s needs.
Do you regularly review and update the Strategic Plan for Application Security?
You review and update the plan in response to significant changes in the business environment, the organization, or its risk appetite
Plan update steps include reviewing the plan with all the stakeholders and updating the business drivers and strategies
You adjust the plan and roadmap based on lessons learned from completed roadmap activities
You publish progress information on roadmap activities, making sure they are available to all stakeholders
No |
Yes, but review is ad-hoc |
Yes, we review it at regular times |
Yes, we review it at least annually |
diff --git a/model/governance/strategy-and-metrics/stream-b/index.html b/model/governance/strategy-and-metrics/stream-b/index.html index e6552f22..47e403f8 100644 --- a/model/governance/strategy-and-metrics/stream-b/index.html +++ b/model/governance/strategy-and-metrics/stream-b/index.html @@ -1,13 +1,13 @@ -
Basic insights into your AppSec program’s effectiveness and efficiency
Define and document metrics to evaluate the effectiveness and efficiency of the application security program. This way improvements are measurable, and you can use them to secure future support and funding for the program. Considering the dynamic nature of most development environments, metrics should comprise measurements in the following categories:
Effort metrics measure the effort spent on security. Examples include training hours, time spent performing code reviews, and the number of applications scanned for vulnerabilities.
Result metrics measure the results of security efforts. Examples include the number of outstanding patches addressing security defects and the number of security incidents involving application vulnerabilities.
Environment metrics measure the environment where security efforts take place. Examples include the number of applications or lines of code as a measure of difficulty or complexity.
Each metric by itself is useful for a specific purpose, but a combination of two or three metrics together helps explain spikes in metric trends. For example, a spike in the total number of vulnerabilities may be caused by the organization on-boarding several new applications that have not been previously exposed to the implemented application security mechanisms. Alternatively, an increase in the environment metrics without a corresponding increase in effort or result metrics could be an indicator of a mature and efficient security program.
While identifying metrics, stick to those that meet several criteria: they should be frequently measured, easy or inexpensive to gather, and expressed as a cardinal number or a percentage.
Document metrics and include descriptions of the best and most efficient methods for gathering data, as well as recommended methods for combining individual measures into meaningful metrics. For example, the number of applications and the total number of defects across all applications may not be useful by themselves but, when combined as the number of outstanding high-severity defects per application, they provide a more actionable metric.
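For example, turning two raw measures into the combined metric mentioned above is a one-line computation; the numbers here are invented for illustration.

```python
# Combining raw measures into an actionable metric: high-severity
# open defects per application, rather than raw totals.
applications = 40                    # environment measure
open_high_severity_defects = 120     # result measure

defects_per_app = open_high_severity_defects / applications
print(f"Outstanding high-severity defects per application: {defects_per_app:.1f}")
```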
Do you use a set of metrics to measure the effectiveness and efficiency of the application security program across applications?
You document each metric, including a description of the sources, measurement coverage, and guidance on how to use it to explain application security trends
Metrics include measures in the effort, result, and environment categories
Most of the metrics are frequently measured, easy or inexpensive to gather, and expressed as a cardinal number or a percentage
Application security and development teams publish metrics
No |
Yes, for one metrics category |
Yes, for two metrics categories |
Yes, for all three metrics categories |
Transparency on your AppSec program’s performance
Once the organization has defined its application security metrics, collect enough information to establish realistic goals. Test identified metrics to ensure you can gather data consistently and efficiently over a short period. After the initial testing period, the organization should have enough information to commit to goals and objectives expressed through Key Performance Indicators (KPIs).
While several measurements are useful for monitoring the information security program and its effectiveness, KPIs comprise the most meaningful and effective metrics. Aim to remove the volatility common in application development environments from KPIs, to reduce the chance of unfavorable numbers resulting from temporary or misleading individual measurements. Base KPIs on metrics considered valuable not only to Information Security professionals but also to the individuals responsible for the overall success of the application and to the organization's leadership. View KPIs as definitive indicators of the success of the whole program, and consider them actionable.
Fully document KPIs and distribute them to the teams contributing to the success of the program as well as to the organization's leadership. Ideally, include a brief explanation of the information sources for each KPI and of what it means when the numbers are high or low. Include short- and long-term goals, and ranges for unacceptable measurements requiring immediate intervention. Share action plans with application security and application development teams to ensure full transparency in the understanding of the organization's objectives and goals.
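One possible shape for such a documented KPI, as a sketch with placeholder names and thresholds:

```python
# Sketch of a documented KPI with a goal and an intervention range;
# the metric name, source, and numbers are placeholders.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    source: str                 # where the underlying metric comes from
    goal: float                 # long-term target
    intervention_above: float   # values above this require immediate action

    def status(self, value: float) -> str:
        if value > self.intervention_above:
            return "intervene"
        return "on track" if value <= self.goal else "monitor"

kpi = KPI("Mean days to fix high-severity defects",
          source="defect tracker", goal=30, intervention_above=60)
print(kpi.status(45))  # -> monitor
```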
Did you define Key Performance Indicators (KPI) from available application security metrics?
You defined KPIs after gathering enough information to establish realistic objectives
You developed KPIs with buy-in from the leadership and teams responsible for application security
KPIs are available to the application teams and include acceptability thresholds and guidance in case teams need to take action
Success of the application security program is clearly visible based on defined KPIs
No |
Yes, for some of the metrics |
Yes, for at least half of the metrics |
Yes, for most or all of the metrics |
Continuous improvement of your program according to results
Define guidelines for influencing the Application Security program based on the KPIs and other application security metrics. These guidelines combine the maturity of the application development process and procedures with different metrics to make the program more efficient. The following examples show a relationship between measurements and ways of evolving and improving application security:
When defining the overall metrics strategy, keep the end goal in mind: define as early as possible which decisions will be made as a result of changes in KPIs and metrics, to help guide the development of the metrics themselves.
Do you update the Application Security strategy and roadmap based on application security metrics and KPIs?
You review KPIs at least yearly for their efficiency and effectiveness
KPIs and application security metrics trigger most of the changes to the application security strategy
No |
Yes, but review is ad-hoc |
Yes, we review it at regular times |
Yes, we review it at least annually |
diff --git a/model/implementation/defect-management/stream-a/index.html b/model/implementation/defect-management/stream-a/index.html index 32f03738..44124c88 100644 --- a/model/implementation/defect-management/stream-a/index.html +++ b/model/implementation/defect-management/stream-a/index.html @@ -1,13 +1,13 @@ -
Transparency of known security defects impacting particular applications
Introduce a common definition / understanding of a security defect and define the most common ways of identifying these. These typically include, but are not limited to:
Foster a culture of transparency and avoid blaming any teams for introducing or identifying security defects. Record and track all security defects in a defined location. This location doesn't necessarily have to be centralized for the whole organization; however, ensure that you're able to get an overview of all defects affecting a particular application at any single point in time. Define and apply access rules for the tracked security defects to mitigate the risk of leakage and abuse of this information.
Introduce at least a rudimentary qualitative classification of security defects so that you are able to prioritize fixing efforts accordingly. Strive to limit duplication of information and the presence of false positives, to increase the trustworthiness of the process.
Do you track all known security defects in accessible locations?
You can easily get an overview of all security defects impacting one application
You have at least a rudimentary classification scheme in place
The process includes a strategy for handling false positives and duplicate entries
The defect management system covers defects from various sources and activities
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Consistent classification of security defects with clear expectations of their handling
Introduce and apply a well-defined rating methodology for your security defects consistently across the whole organization, based on the probability and expected impact of the defect being exploited. This allows you to identify applications that need higher attention and investment. If you don't store information about security defects centrally, ensure that you're still able to easily pull the information from all sources and get an overview of the “hot spots” needing your attention.
Introduce SLAs for the timely fixing of security defects according to their criticality rating, and centrally monitor and regularly report on SLA breaches. Define a process for cases where it's not feasible or economical to fix a defect within the time defined by the SLAs. This process should at least ensure that all relevant stakeholders have a solid understanding of the imposed risk. If suitable, employ compensating controls for these cases.
Even if you don't have any formal SLAs for fixing low-severity defects, ensure that responsible teams still get a regular overview of the issues affecting their applications and understand how particular issues affect or amplify each other.
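A minimal sketch of such a probability-and-impact rating, assuming simple 1-5 scales and placeholder severity bands; established methodologies such as CVSS or the OWASP Risk Rating are common concrete choices.

```python
# Minimal likelihood x impact rating sketch; the scales and band
# boundaries are placeholders, not a mandated methodology.
def rate_defect(likelihood: int, impact: int) -> str:
    """Both inputs on a 1-5 scale; returns a severity band."""
    score = likelihood * impact          # 1..25
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(rate_defect(likelihood=4, impact=5))  # -> critical
```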
Do you keep an overview of the state of security defects across the organization?
A single severity scheme is applied to all defects across the organization
The scheme includes SLAs for fixing defects of particular severity classes
You regularly report compliance with SLAs
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Assurance that security defects are handled within predefined SLAs
Implement automated alerting for security defects whose fix time breaches the defined SLAs. Ensure that these defects are automatically transferred into the risk management process and rated using a consistent quantitative methodology. Evaluate how particular defects influence or amplify each other, not only at the level of separate teams, but at the level of the whole organization. Use the knowledge of the full kill chain to prioritize, introduce, and track compensating controls mitigating the respective business risks.
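The SLA check itself can be a small, automatable rule; a sketch with illustrative fix windows:

```python
# Sketch of an SLA-breach check; the per-severity fix windows are
# illustrative, not prescribed values.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def breaches_sla(severity: str, opened: date, today: date) -> bool:
    return today - opened > timedelta(days=SLA_DAYS[severity])

if breaches_sla("critical", opened=date(2023, 1, 1), today=date(2023, 2, 1)):
    print("SLA breached -- escalate to risk management")
```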
Integrate your defect management system with the automated tooling introduced by other practices, e.g.:
Do you enforce SLAs for fixing security defects?
You automatically alert on SLA breaches and transfer the respective defects to the risk management process
You integrate relevant tooling (e.g. monitoring, build, deployment) with the defect management system
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/implementation/defect-management/stream-b/index.html b/model/implementation/defect-management/stream-b/index.html index 5c4124c0..cb3b77fe 100644 --- a/model/implementation/defect-management/stream-b/index.html +++ b/model/implementation/defect-management/stream-b/index.html @@ -1,13 +1,13 @@ -
Identification of quick wins derived from available defect information
Once per defined period of time (typically at least once per year), go over both your resolved and still-open recorded security defects in every team and extract basic metrics from the available data. These might include:
Identify and carry out sensible quick win activities derived from the newly acquired knowledge. These might include things like a knowledge-sharing session about one particular vulnerability type, or carrying out or automating a security scan.
Do you use basic metrics about recorded security defects to carry out quick win improvement activities?
You analyzed your recorded metrics at least once in the last year
At least basic information about this initiative is recorded and available
You have identified and carried out at least one quick win activity based on the data
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Improved learning from security defects in your organization
Define, collect and calculate unified metrics across the whole organization. These might include:
Generate a regular (e.g. monthly) report for a suitable audience, typically including managers, security officers, and engineers. Use the information in the report as an input to your security strategy, e.g. for improving training or security verification activities.
Share the most prominent or interesting technical details about security defects including the fixing strategy to other teams once these defects are fixed, e.g. in a regular knowledge sharing meeting. This will help scale the learning effect from defects to the whole organization and limit their occurrence in the future.
Do you improve your security assurance program based on standardized metrics?
You document metrics for defect classification and categorization and keep them up to date
Executive management regularly receives information about defects and has acted upon it in the last year
You regularly share technical details about security defects among teams
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Optimized security strategy based on defect information
Regularly (at least once per year) revisit the defect management metrics you're collecting and compare the effort needed to collect and track them to the expected outcomes. Make a knowledgeable decision about removing metrics which don't deliver the expected value. Wherever possible, include automated verification of the quality of the collected data and ensure sustainable improvement whenever discrepancies are detected.
Aggregate the data with your threat intelligence and incident management metrics and use the results as input for other initiatives over the whole organization, such as:
Do you regularly evaluate the effectiveness of your security metrics so that their input helps drive your security strategy?
You have analyzed the effectiveness of the security metrics at least once in the last year
Where possible, you verify the correctness of the data automatically
The metrics are aggregated with other sources like threat intelligence or incident management
You derived at least one strategic activity from the metrics in the last year
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/implementation/secure-build/stream-a/index.html b/model/implementation/secure-build/stream-a/index.html index 118e7437..35adf86a 100644 --- a/model/implementation/secure-build/stream-a/index.html +++ b/model/implementation/secure-build/stream-a/index.html @@ -1,13 +1,13 @@ -
Limited risk of human error during build process minimizing security issues
Define the build process, breaking it down into a set of clear instructions to either be followed by a person or an automated tool. The build process definition describes the whole process end-to-end so that the person or tool can follow it consistently each time and produce the same result. The definition is stored centrally and accessible to any tools or people. Avoid storing multiple copies as they may become unaligned and outdated.
The process definition does not include any secrets (specifically considering those needed during the build process).
Review any build tools, ensuring that they are actively maintained by vendors and up-to-date with security patches. Harden each tool’s configuration so that it is aligned with vendor guidelines and industry best practices.
Determine a value for each generated artifact that can be later used to verify its integrity, such as a signature or a hash. Protect this value and, if the artifact is signed, the private signing certificate.
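Producing such a value can be as simple as hashing each artifact at build time; the sketch below uses SHA-256 with a hypothetical artifact path. Signing would additionally involve a protected private key.

```python
# Sketch: record a SHA-256 checksum for a build artifact so its
# integrity can be verified later, e.g. at deployment time.
import hashlib
from pathlib import Path

def artifact_checksum(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = Path("dist/app-1.2.3.tar.gz")   # hypothetical artifact path
print(f"{artifact_checksum(artifact)}  {artifact.name}")
```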
Ensure that build tools are routinely patched and properly hardened.
Is your full build process formally described?
You have enough information to recreate the build processes
Your build documentation is up to date
Your build documentation is stored in an accessible location
Produced artifact checksums are created during build to support later verification
You harden the tools that are used within the build process
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Efficient build process with integrated security tools
Automate the build process so that builds can be executed consistently anytime. The build process shouldn’t typically require any intervention, further reducing the likelihood of human error.
The use of an automated system increases reliance on security of the build tooling and makes hardening and maintaining the toolset even more critical. Pay particular attention to the interfaces of those tools, such as web-based portals and how they can be locked-down. The exposure of a build tool to the network could allow a malicious actor to tamper with the integrity of the process. This might, for example, allow malicious code to be built into software.
The automated process may require access to credentials and secrets required to build the software, such as the code signing certificate or access to repositories. Handle these with care. Sign generated artifacts using a certificate that identifies the organization or business unit that built it, so you can verify its integrity.
Finally, add appropriate automated security checks (e.g. using SAST tools) in the pipeline to leverage the automation for security benefit.
Is the build process fully automated?
The build process itself doesn't require any human interaction
Your build tools are hardened as per best practice and vendor guidance
You encrypt the secrets required by the build tools and control access based on the principle of least privilege
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Assurance that you build software complying with a security baseline
Define security checks suitable to be carried out during the build process, as well as minimum criteria for passing the build - these might differ according to the risk profiles of various applications. Include the respective security checks in the build and enforce breaking the build process in case the predefined criteria are not met. Trigger warnings for issues below the threshold and log these to a centralized system to track them and take actions. If sensible, implement an exception mechanism to bypass this behavior if the risk of a particular vulnerability has been accepted or mitigated. However, ensure these cases are explicitly approved first and log their occurrence together with a rationale.
If technical limitations prevent the organization from breaking the build automatically, ensure the same effect via other measures, such as a clear policy and regular audit.
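As a sketch, the enforcement step can be a small pipeline gate that parses scanner findings and fails the build above the baseline; the JSON report format and severity names here are hypothetical simplifications to adapt to your scanner's actual output.

```python
# Sketch of a build gate: exit non-zero when findings exceed the
# security baseline, so the CI system fails the build.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
MAX_ACCEPTED = "medium"   # predefined baseline for this application

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh)   # assumed: a list of finding objects
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[MAX_ACCEPTED]]
    for finding in blocking:
        print(f"BLOCKING: {finding['id']} ({finding['severity']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```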
Handle code signing on a separate centralized server which does not expose the certificate to the system executing the build. Where possible, use a deterministic method that outputs byte-for-byte reproducible artifacts.
Do you enforce automated security checks in your build processes?
Builds fail if the application doesn't meet a predefined security baseline
You have a maximum accepted severity for vulnerabilities
You log warnings and failures in a centralized system
You select and configure tools to evaluate each application against its security requirements at least once a year
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/implementation/secure-build/stream-b/index.html b/model/implementation/secure-build/stream-b/index.html index 6d168631..4e931319 100644 --- a/model/implementation/secure-build/stream-b/index.html +++ b/model/implementation/secure-build/stream-b/index.html @@ -1,13 +1,13 @@ -
Available information on known security issues in dependencies
Keep a record of all dependencies used throughout the target production environment. This is sometimes referred to as a Bill of Materials (BOM). Consider that different components of the application may consume entirely different dependencies. For example, if the software package is a web application, cover both the server-side application code and client-side scripts. In building these records, consider the various locations where dependencies might be specified such as configuration files, the project’s directory on disk, a package management tool or the actual code (e.g. via an IDE that supports listing dependencies).
Gather the following information about each dependency:
Check the records to discover any dependencies with known vulnerabilities and update or replace them accordingly.
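A minimal sketch of such a record and check, with hypothetical advisory data standing in for a real vulnerability feed such as the NVD or OSV:

```python
# Sketch: a minimal bill of materials checked against known-vulnerable
# versions. The BOM entries and the advisory data are hypothetical.
bom = [
    {"name": "libexample", "version": "2.1.0", "app": "payments"},
    {"name": "webfmt",     "version": "0.9.4", "app": "portal"},
]
known_vulnerable = {("libexample", "2.1.0"): "CVE-2023-0001"}  # hypothetical

for dep in bom:
    cve = known_vulnerable.get((dep["name"], dep["version"]))
    if cve:
        print(f"{dep['app']}: {dep['name']} {dep['version']} affected by {cve}")
```

Keeping the application name on each entry also answers the reverse question, i.e. which applications are affected by a particular CVE.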
Do you have solid knowledge about dependencies you're relying on?
You have a current bill of materials (BOM) for every application
You can quickly find out which applications are affected by a particular CVE
You have analyzed, addressed, and documented findings from dependencies at least once in the last three months
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Full transparency of known security issues in dependencies
Evaluate used dependencies and establish a list of acceptable ones approved for use within a project, team, or the wider organization according to a defined set of criteria.
Introduce a central repository of dependencies that all software can be built from.
Review used dependencies regularly to ensure that:
React to non-conformities in a timely and appropriate manner by handling them as defects. Consider using an automated tool to scan for vulnerable dependencies and to assign the identified issues to the respective development teams.
Do you handle 3rd party dependency risk by a formal process?
You keep a list of approved dependencies that meet predefined criteria
You automatically evaluate dependencies for new CVEs and alert responsible staff
You automatically detect and alert on license changes with possible impact on legal application usage
You track and alert on usage of unmaintained dependencies
You reliably detect and remove unnecessary dependencies from the software
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Handling of security issues in dependencies comparable to those in your own code
Maintain a whitelist of approved dependencies and versions, and ensure that the build process fails when a dependency that is not on the list is present. Where sensible, include a sign-off process for handling exceptions to this rule.
Perform security verification activities against dependencies on the whitelist in a comparable way to the target applications themselves (esp. using SAST and analyzing transitive dependencies). Ensure that these checks also aim to identify possible backdoors or easter eggs in the dependencies. Establish vulnerability disclosure processes with the dependency authors including SLAs for fixing issues. In case enforcing SLAs is not realistic (e.g. with open source vulnerabilities), ensure that the most probable cases are expected and you are able to implement compensating measures in a timely manner. Implement regression tests for the fixes to identified issues.
Track all identified issues and their state using your defect tracking system. Integrate your build pipeline with this system to enable failing the build whenever the included dependencies contain issues above a defined criticality level.
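A sketch of the whitelist check as a build step; requirement parsing is simplified to exact name==version pins, and the approved entries are illustrative.

```python
# Sketch: fail the build when a dependency/version pair is not on the
# approved whitelist.
import sys

APPROVED = {("requests", "2.31.0"), ("jinja2", "3.1.3")}  # illustrative

def check(requirements_file: str) -> int:
    violations = []
    with open(requirements_file) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, _, version = line.partition("==")
            if (name.lower(), version) not in APPROVED:
                violations.append(line)
    for v in violations:
        print(f"NOT APPROVED: {v}")
    return 1 if violations else 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```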
Do you prevent build of software if it's affected by vulnerabilities in dependencies?
Your build system is connected to a system for tracking 3rd-party dependency risk, causing the build to fail unless the vulnerability is evaluated to be a false positive or the risk is explicitly accepted
You scan your dependencies using a static analysis tool
You report findings back to dependency authors using an established responsible disclosure process
Using a new dependency not evaluated for security risks causes the build to fail
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/implementation/secure-deployment/stream-a/index.html b/model/implementation/secure-deployment/stream-a/index.html index 98acc0a7..e36ec0a6 100644 --- a/model/implementation/secure-deployment/stream-a/index.html +++ b/model/implementation/secure-deployment/stream-a/index.html @@ -1,13 +1,13 @@ -
Limited risk of human error during deployment process minimizing security issues
Define the deployment process over all stages, breaking it down into a set of clear instructions to be followed either by a person or by automated tooling. The deployment process definition should describe the whole process end-to-end so that it can be consistently followed each time to produce the same result. The definition is stored centrally and accessible to all relevant personnel. Do not store or distribute multiple copies, some of which may become outdated.
Deploy applications to production either using an automated process, or manually by personnel other than the developers. Ensure that developers do not need direct access to the production environment for application deployment.
Review any deployment tools, ensuring that they are actively maintained by vendors and up to date with security patches. Harden each tool’s configuration so that it is aligned with vendor guidelines and industry best practices. Given that most of these tools require access to the production environment, their security is extremely critical. Ensure the integrity of the tools themselves and the workflows they follow, and configure access rules to these tools according to the least privilege principle.
Have personnel with access to the production environment go through at least a minimum level of training or certification to ensure their competency in this matter.
Do you use repeatable deployment processes?
You have enough information to run the deployment processes
Your deployment documentation is up to date
Your deployment documentation is accessible to relevant stakeholders
You ensure that only defined, qualified personnel can trigger a deployment
You harden the tools that are used within the deployment process
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Efficient deployment process with integrated security tools
Automate the deployment process to cover various stages, so that no manual configuration steps are needed and the risk of isolated human errors is eliminated. Ensure and verify that the deployment is consistent over all stages.
Integrate automated security checks in your deployment process, e.g. using Dynamic Application Security Testing (DAST) and vulnerability scanning tools. Also, verify the integrity of the deployed artifacts where this makes sense. Log the results from these tests centrally and take any necessary actions. Ensure that relevant personnel are notified automatically if any defects are detected. If any issues exceeding a predefined criticality are identified, stop or reverse the deployment automatically, or introduce a separate manual approval workflow so that the decision is recorded together with an explanation for the exception.
Account for and audit all deployments to all stages. Have a system in place to record each deployment, including information about who conducted it, the software version that was deployed, and any relevant variables specific to the deployment.
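A sketch of such a deployment record, appended to an audit log; the field names and values are illustrative.

```python
# Sketch of an auditable deployment record.
import getpass
import json
from datetime import datetime, timezone

record = {
    "application": "portal",                 # hypothetical application name
    "version": "1.2.3",
    "stage": "production",
    "deployed_by": getpass.getuser(),
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "variables": {"region": "eu-west-1"},    # deploy-specific inputs
}

with open("deploy-audit.log", "a") as log:   # append-only audit trail
    log.write(json.dumps(record) + "\n")
```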
Are deployment processes automated and employing security checks?
Deployment processes are automated on all stages
Deployment includes automated security testing procedures
You alert responsible staff to identified vulnerabilities
You have logs available for your past deployments for a defined period of time
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Assured integrity of artifacts being deployed to production
Take advantage of binaries being signed at build time and include automatic verification of the integrity of software being deployed by checking its signatures against trusted certificates. This may include binaries developed and built in-house, as well as third-party artifacts. Do not deploy artifacts if their signatures cannot be verified, including those with invalid or expired certificates.
If the list of trusted certificates includes third-party developers, check them periodically, and keep them in line with the organization’s wider governance surrounding trusted third-party suppliers.
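Complementing the checksum or signature created at build time, the deployment-side check might look like the sketch below. Hash comparison is shown for brevity; a full implementation would verify signatures against trusted certificates, and the expected value would come from the build record rather than being hard-coded.

```python
# Sketch: refuse to deploy an artifact whose build-time checksum does
# not match. The expected value is hard-coded here only to keep the
# sketch self-contained (it is the SHA-256 of empty input).
import hashlib
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_or_abort(artifact: Path, expected: str) -> None:
    if sha256(artifact) != expected:
        sys.exit(f"Integrity check failed for {artifact}, aborting deploy")

EXPECTED = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
verify_or_abort(Path("app-1.2.3.tar.gz"), EXPECTED)  # hypothetical artifact
```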
Include at least one manual approval step in the otherwise automated deployment process. Whenever a human check is significantly more accurate than an automated one at a given point in the deployment process, prefer the manual option.
Do you consistently validate the integrity of deployed artifacts?
You prevent or roll back deployment if you detect an integrity breach
The verification is done against signatures created at build time
If checking signatures is not possible (e.g. for externally built software), you introduce compensating measures
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
diff --git a/model/implementation/secure-deployment/stream-b/index.html b/model/implementation/secure-deployment/stream-b/index.html index 6ec7d41d..39879e51 100644 --- a/model/implementation/secure-deployment/stream-b/index.html +++ b/model/implementation/secure-deployment/stream-b/index.html @@ -1,13 +1,13 @@ -
Defined and limited access to your production secrets
Developers should not have access to secrets or credentials for production environments. Have a mechanism in place to adequately protect production secrets, for instance by (i) having specific persons add them to relevant configuration files upon deployment (the separation of duties principle) or (ii) encrypting the production secrets contained in the configuration files upfront.
Do not use production secrets in configuration files for development or testing environments, as such environments may have a significantly lower security posture. Similarly, do not keep secrets unprotected in configuration files stored in code repositories.
Store sensitive credentials and secrets for production systems with encryption-at-rest at all times. Consider using a purpose-built tool for this. Handle key management carefully so only personnel with responsibility for production deployments are able to access this data.
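For example, secrets might be fetched at deploy time from a purpose-built store rather than kept in configuration files. This sketch uses the `hvac` client for HashiCorp Vault; the URL, mount, path, and AppRole credentials are illustrative placeholders.

```python
import hvac

client = hvac.Client(url="https://vault.example.internal:8200")
# CI/CD deployers authenticate with machine credentials, not developer ones.
client.auth.approle.login(role_id="<role-id>", secret_id="<secret-id>")

resp = client.secrets.kv.v2.read_secret_version(
    path="prod/payment-service/db")
db_password = resp["data"]["data"]["password"]  # injected into config here
```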
Do you limit access to application secrets according to the least privilege principle?
You store production secrets protected in a secured location |
Developers do not have access to production secrets |
Production secrets are not available in non-production environments |
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Detection of potential leakage of production secrets
Have an automated process to add credentials and secrets to configuration files during the deployment process to respective stages. This way, developers and deployers do not see or handle those sensitive values.
Implement checks that detect the presence of secrets in code repositories and files, and run them periodically. Configure tools to look for known strings and unknown high entropy strings. In systems such as code repositories, where there is a history, include the versions in the checks. Mark potential secrets you discover as sensitive values, and remove them where appropriate. If you cannot remove them from a historic file in a code repository, for example, you may need to refresh the value on the system that consumes the secret. This way, if an attacker discovers the secret, it will not be useful to them.
Make the system used to store and process the secrets and credentials robust from a security perspective. Encrypt all secrets at rest and in transit. Users who configure this system and the secrets it contains are subject to the principle of least privilege. For example, a developer might need to manage the secrets for a development environment, but not a user acceptance test or production environment.
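A toy scanner along these lines is sketched below; the patterns and entropy threshold are assumptions, and dedicated tools (e.g., gitleaks, trufflehog) cover repository history and far more patterns.

```python
# Flags known key patterns and unusually high-entropy tokens in a file.
import math
import re
import sys

KNOWN_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"), # private key blocks
]
ENTROPY_THRESHOLD = 4.5  # bits per character; an assumed cut-off

def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan(path: str) -> None:
    for lineno, line in enumerate(open(path), 1):
        for pat in KNOWN_PATTERNS:
            if pat.search(line):
                print(f"{path}:{lineno}: matches known secret pattern")
        for token in re.findall(r"[A-Za-z0-9+/=_\-]{20,}", line):
            if shannon_entropy(token) > ENTROPY_THRESHOLD:
                print(f"{path}:{lineno}: high-entropy token")

if __name__ == "__main__":
    scan(sys.argv[1])
```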
Do you inject production secrets into configuration files during deployment?
Source code files no longer contain active application secrets |
Under normal circumstances, no humans access secrets during deployment procedures |
You log and alert when abnormal secrets access is attempted |
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Minimized possibility and timely detection of production secret abuse
Implement lifecycle management for production secrets: generate new secrets regularly wherever possible, and use a distinct secret for every application instance. Per-instance secrets ensure that unexpected application behavior can be traced back and properly analyzed. Tools can help update the secrets in all relevant places automatically and seamlessly upon change.
Ensure that all access to secrets (both reading and writing) is logged in a central infrastructure. Review these logs regularly to identify unexpected behavior and perform proper analysis to understand why this happened. Feed issues and root causes into the defect management practice to make sure that the organization will resolve any unacceptable situations.
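A minimal sketch of per-instance rotation with centrally logged writes, assuming a hypothetical `store` object with a `put()` method; in practice a secrets-management tool would update all consumers atomically.

```python
import logging
import secrets

log = logging.getLogger("secret-lifecycle")

def rotate(store, instance_id: str) -> None:
    new_value = secrets.token_urlsafe(32)  # fresh secret for this instance
    store.put(f"app/{instance_id}/api-key", new_value)
    # Write access is logged centrally so unexpected changes stand out.
    log.info("rotated secret for instance %s", instance_id)
```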
Do you practice proper lifecycle management for application secrets?
You generate and synchronize secrets using a vetted solution |
Secrets are different between different application instances |
Secrets are regularly updated |
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
To learn more about Stream guidance for the SAMM model, see the Stream guidance page.
OWASP SAMM is published under the CC BY-SA 4.0 license diff --git a/model/operations/environment-management/stream-a/index.html b/model/operations/environment-management/stream-a/index.html index 30c318e2..d8b831c8 100644 --- a/model/operations/environment-management/stream-a/index.html +++ b/model/operations/environment-management/stream-a/index.html @@ -1,13 +1,13 @@ -
Hardened basic configuration settings of your components
Recognizing the importance of securing the technology stacks you use, apply secure configurations to stack elements based on readily available guidance (e.g., open source projects, vendor documentation, blog articles). When your teams develop configuration guidance for their applications based on trial and error and information gathered by team members, encourage them to share their learnings across the organization.
Identify key elements of common technology stacks, and establish configuration standards for those, based on teams' experiences of what works.
At this level of maturity, you don’t yet have a formal process for managing configuration baselines. Configurations may not be applied consistently across applications and deployments, and monitoring of conformance is likely absent.
Do you harden configurations for key components of your technology stacks?
You have identified the key components in each technology stack used |
You have an established configuration standard for each key component |
No |
Yes, for some components |
Yes, for at least half of the components |
Yes, for most or all of the components |
Consistent hardening of technology stack components in your organization
Establish configuration hardening baselines for all components in each technology stack used. To assist with consistent application of the hardening baselines, develop configuration guides for the components. Require product teams to apply configuration baselines to all new systems, and to existing systems when practicable.
Place hardening baselines and configuration guides under change management, and assign an owner to each. Owners have ongoing responsibility to keep them up-to-date, based on evolving best practices or changes to the relevant components (e.g., version updates, new features).
In larger environments, derive configurations of instances from a locally maintained master, with relevant configuration baselines applied. Employ automated tools for hardening configurations.
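A hardening baseline can be expressed as data with an automated checker. The sshd keys and expected values below are illustrative, not a recommended baseline.

```python
# Reports settings in a sshd_config-style file that deviate from a baseline.
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "X11Forwarding": "no",
}

def check_config(path: str):
    actual = {}
    for line in open(path):
        line = line.strip()
        if line and not line.startswith("#"):
            parts = line.split(None, 1)
            if len(parts) == 2:
                actual[parts[0]] = parts[1].strip()
    return [(k, v, actual.get(k)) for k, v in BASELINE.items()
            if actual.get(k) != v]

if __name__ == "__main__":
    for key, expected, found in check_config("/etc/ssh/sshd_config"):
        print(f"non-conforming: {key} expected {expected!r}, found {found!r}")
```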
Do you have hardening baselines for your components?
You have assigned an owner for each baseline |
The owner keeps their assigned baselines up to date |
You store baselines in an accessible location |
You train employees responsible for configurations in these baselines |
No |
Yes, for some components |
Yes, for at least half of the components |
Yes, for most or all of the components |
Clear view on component configurations to avoid non-conformities
Actively monitor the security configurations of deployed technology stacks, performing regular checks against established baselines. Ensure results of configuration checks are readily available, through published reports and dashboards.
When you detect non-conforming configurations, treat each occurrence as a security finding, and manage corrective actions within your established Defect Management practice.
Further gains may be realized using automated measures, such as “self-healing” configurations and security information and event management (SIEM) alerts.
As part of the process for updating components (e.g., new releases, vendor patches), review corresponding baselines and configuration guides, updating them as needed to maintain their relevance and accuracy. Review other baselines and configuration guides at least annually.
Periodically review your baseline management process, incorporating feedback and lessons learned from teams applying and maintaining configuration baselines and configuration guides.
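A sketch of what a "self-healing" check might look like, assuming hypothetical get_setting/set_setting hooks for the managed component and a reporting hook into Defect Management; real setups would typically use configuration-management tooling and feed a SIEM.

```python
def enforce(baseline: dict, get_setting, set_setting, report_finding) -> None:
    for key, expected in baseline.items():
        found = get_setting(key)
        if found != expected:
            # Every non-conformity is a security finding for Defect Management.
            report_finding(f"{key}: expected {expected!r}, found {found!r}")
            set_setting(key, expected)  # self-heal back to the baseline
```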
Do you monitor and enforce conformity with hardening baselines?
You perform conformity checks regularly, preferably using automation |
You store conformity check results in an accessible location |
You follow an established process to address reported non-conformities |
You review each baseline at least annually, and update it when required |
No |
Yes, for some components |
Yes, for at least half of the components |
Yes, for most or all of the components |
To learn more about Stream guidance for the SAMM model, see the Stream guidance page.
OWASP SAMM is published under the CC BY-SA 4.0 license diff --git a/model/operations/environment-management/stream-b/index.html b/model/operations/environment-management/stream-b/index.html index 4b4b203a..ce36d707 100644 --- a/model/operations/environment-management/stream-b/index.html +++ b/model/operations/environment-management/stream-b/index.html @@ -1,13 +1,13 @@ -
Mitigation of well-known issues in third-party components
Identify applications and third-party components which need to be updated or patched, including underlying operating systems, application servers, and third-party code libraries.
At this level of maturity, your identification and patching activities are best-effort and ad hoc, without a managed process for tracking component versions, available updates, and patch status. However, high-level requirements for patching activities (e.g., testing patches before pushing to production) may exist, and product teams are achieving best-effort compliance with those requirements.
Except for critical security updates (e.g., an exploit for a third-party component has been publicly released), teams leverage maintenance windows established for other purposes to apply component patches. For software developed by the organization, component patches are delivered to customers and organization-managed solutions only as part of feature releases.
Teams share their awareness of available updates, and their experiences with patching, on an ad hoc basis. Ensure teams can determine the versions of all components in use, to evaluate whether their products are affected by a security vulnerability when notified. However, the process for generating and maintaining component lists may require significant analyst effort.
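As one possible automation step, a component inventory can be checked against a public vulnerability source. This sketch queries the OSV.dev API using the `requests` library; the inventory format is an assumption.

```python
import requests

components = [{"name": "requests", "version": "2.19.0", "ecosystem": "PyPI"}]

for c in components:
    resp = requests.post("https://api.osv.dev/v1/query", json={
        "package": {"name": c["name"], "ecosystem": c["ecosystem"]},
        "version": c["version"],
    }, timeout=10)
    vulns = resp.json().get("vulns", [])
    if vulns:
        print(f'{c["name"]} {c["version"]}: {len(vulns)} known vulnerabilities')
```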
Do you identify and patch vulnerable components?
You have an up-to-date list of components, including version information |
You regularly review public sources for vulnerabilities related to your components |
No |
Yes, for some components |
Yes, for at least half of the components |
Yes, for most or all of the components |
Consistent and proactive patching of technology stack components
Develop and follow a well-defined process for managing patches to application components across the technology stacks in use. Ensure processes include regular schedules for applying vendor updates, aligned with vendor update calendars (e.g., Microsoft Patch Tuesday). For software developed by the organization, deliver releases to customers and organization-managed solutions on a regular basis (e.g., monthly), regardless of whether you are including new features.
Create guidance for prioritizing component patching, reflecting your risk tolerance and management objectives. Consider operational factors (e.g., criticality of the application, severity of the vulnerabilities addressed) in determining priorities for testing and applying patches.
In the event you receive a notification of a critical vulnerability in a component for which no patch is yet available, triage and handle the situation as a risk management issue (e.g., implement compensating controls, obtain customer risk acceptance, or disable affected applications/features).
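A toy example of such prioritization guidance expressed as code; the severity weights, application-criticality buckets, and thresholds are assumptions an organization would replace with its own risk tolerance.

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}
APP_CRITICALITY = {"internal-tool": 1, "customer-facing": 2, "payment": 3}

def patch_priority(severity: str, app_class: str) -> str:
    score = SEVERITY[severity] * APP_CRITICALITY[app_class]
    if score >= 9:
        return "patch immediately (emergency window)"
    if score >= 4:
        return "next scheduled maintenance window"
    return "next regular release"
```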
Do you follow an established process for updating components of your technology stacks?
The process includes vendor information for third-party patches |
The process considers external sources to gather information about zero day attacks, and includes appropriate risk mitigation steps |
The process includes guidance for prioritizing component updates |
No |
Yes, for some components |
Yes, for at least half of the components |
Yes, for most or all of the components |
Clear view on component patch state to avoid non-conformities
Develop and use management dashboards/reports to track compliance with patching processes and SLAs, across the portfolio. Ensure dependency management and application packaging processes can support applying component-level patches at any time, to meet required SLAs.
Treat missed updates as security-related product defects, and manage their triage and correction in accordance with your established Defect Management practice.
Don’t rely on routine notifications from component vendors to learn about vulnerabilities and associated patches. Monitor a variety of external threat intelligence sources, to learn about zero day vulnerabilities; handle those affecting your applications as risk management issues.
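A sketch of SLA tracking for missed updates, assuming per-severity SLAs in days and an inventory that records when each patch became available.

```python
from datetime import date

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90}  # assumed SLAs

def overdue(patches: list[dict], today: date) -> list[dict]:
    """patches: [{"component": str, "severity": str, "available": date}]"""
    return [p for p in patches
            if (today - p["available"]).days > SLA_DAYS.get(p["severity"], 180)]
```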
Do you regularly evaluate components and review patch level status?
You update the list with components and versions |
You identify and apply missing updates according to existing SLAs |
You review and update the process based on feedback from the people who perform patching |
No |
Yes, for some components |
Yes, for at least half of the components |
Yes, for most or all of the components |
To learn more about Stream guidance for the SAMM model, see the Stream guidance page.
OWASP SAMM is published under the CC BY-SA 4.0 license diff --git a/model/operations/incident-management/stream-a/index.html b/model/operations/incident-management/stream-a/index.html index 2e62fb06..7f8dca4d 100644 --- a/model/operations/incident-management/stream-a/index.html +++ b/model/operations/incident-management/stream-a/index.html @@ -1,13 +1,13 @@ -
Ability to detect the most obvious security incidents
Analyze available log data (e.g., access logs, application logs, infrastructure logs), to detect possible security incidents in accordance with known log data retention periods.
In small setups, you can do this manually with the help of common command-line tools. With larger log volumes, employ automation techniques. Even a cron job, running a simple script to look for suspicious events, is a step forward! (A minimal example appears below.)
If you send logs from different sources to a dedicated log aggregation system, analyze the logs there and employ basic log correlation principles.
Even if you don’t have a 24/7 incident detection process, ensure that unavailability of the responsible person (e.g., due to vacation or illness) doesn’t significantly impact detection speed or quality.
Establish and share points of contact for formal creation of security incidents.
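The kind of simple scan script referred to above might look like the following; the suspicious patterns and the combined-log-format assumption are illustrative.

```python
# Scan a web access log for a few suspicious patterns.
import re
import sys

SUSPICIOUS = [
    re.compile(r"\.\./"),                 # path traversal attempts
    re.compile(r"(sqlmap|nikto)", re.I),  # common scanner user agents
    re.compile(r'" 401 '),                # failed authentication responses
]

def scan(path: str) -> None:
    for lineno, line in enumerate(open(path), 1):
        if any(p.search(line) for p in SUSPICIOUS):
            print(f"{path}:{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    scan(sys.argv[1])
```

Scheduled via cron (e.g., `0 * * * * python3 scan_logs.py /var/log/nginx/access.log`), even a basic check like this improves detection.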
Do you analyze log data for security incidents periodically?
You have a contact point for the creation of security incidents |
You analyze data in accordance with the log data retention periods |
The frequency of this analysis is aligned with the criticality of your applications |
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Timely and consistent detection of expected security incidents
Establish a dedicated owner for the incident detection process, make clear documentation accessible to all process stakeholders, and ensure it is regularly reviewed and updated as necessary. Ensure employees responsible for incident detection follow this process (e.g., using training).
The process typically relies on a high degree of automation, collecting and correlating log data from different sources, including application logs. You may aggregate logs in a central place, if suitable. Periodically verify the integrity of analyzed data. If you add a new application, ensure the process covers it within a reasonable period of time.
Detect possible security incidents using an available checklist. The checklist should cover expected attack vectors and known or expected kill chains. Evaluate and update it regularly.
When you determine an event is a security incident (with sufficiently high confidence), notify responsible staff immediately, even outside business hours. Perform further analysis, as appropriate, and start the escalation process.
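As an illustration of basic log correlation, this sketch flags a possible brute-force incident when one source IP accumulates failed logins across several systems within a short window; the event format and thresholds are assumptions.

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 20

def correlate(events):
    """events: iterable of (timestamp, source_ip, system), sorted by time."""
    recent = defaultdict(list)  # source_ip -> [(timestamp, system), ...]
    for ts, ip, system in events:
        recent[ip] = [(t, s) for t, s in recent[ip] if ts - t <= WINDOW]
        recent[ip].append((ts, system))
        systems = {s for _, s in recent[ip]}
        if len(recent[ip]) >= THRESHOLD and len(systems) > 1:
            yield ip, sorted(systems)  # candidate incident; dedup in practice
```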
Do you follow a documented process for incident detection?
The process has a dedicated owner |
You store process documentation in an accessible location |
The process considers an escalation path for further analysis |
You train employees responsible for incident detection in this process |
You have a checklist of potential attacks to simplify incident detection |
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
Ability to detect security incidents in a timely manner
Ensure process documentation includes measures for continuous process improvement. Check the continuity of process improvement (e.g., via tracking of changes).
Ensure the checklist for suspicious event detection draws on at least (i) sources and knowledge bases external to the company (e.g., new vulnerability announcements affecting the technologies in use), (ii) past security incidents, and (iii) threat model outcomes.
Use correlation of logs for incident detection for all reasonable incident scenarios. If the log data for incident detection is not available, document its absence as a defect, triage and handle it according to your established Defect Management process.
Ensure the quality of incident detection does not depend on the time or day of the event. If security events are not acknowledged and resolved within a specified time (e.g., 20 minutes), ensure further notifications are generated according to an established escalation path.
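A sketch of such an escalation path, using the 20-minute figure from the example above; `notify` and `acknowledged` are hypothetical hooks into your alerting system.

```python
import time

ACK_TIMEOUT_SECONDS = 20 * 60
ESCALATION_PATH = ["on-call-analyst", "team-lead", "security-manager"]

def escalate(event_id, acknowledged, notify):
    for contact in ESCALATION_PATH:
        notify(contact, event_id)
        deadline = time.monotonic() + ACK_TIMEOUT_SECONDS
        while time.monotonic() < deadline:
            if acknowledged(event_id):
                return contact  # acknowledged at this escalation level
            time.sleep(30)
    raise RuntimeError(f"event {event_id} unacknowledged after full escalation")
```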
Do you review and update the incident detection process regularly?
You perform reviews at least annually |
You update the checklist of potential attacks with external and internal data |
No |
Yes, for some applications |
Yes, for at least half of the applications |
Yes, for most or all of the applications |
To learn more about Stream guidance for the SAMM model, see the Stream guidance page.
OWASP SAMM is published under the CC BY-SA 4.0 license diff --git a/model/operations/incident-management/stream-b/index.html b/model/operations/incident-management/stream-b/index.html index 42e1c895..de1b2b1d 100644 --- a/model/operations/incident-management/stream-b/index.html +++ b/model/operations/incident-management/stream-b/index.html @@ -1,13 +1,13 @@ -
Ability to efficiently solve most common security incidents
The first step is to recognize incident response as a distinct competence, and to define a responsible owner. Provide them with the time and resources they need to keep up with the current state of incident handling best practices and forensic tooling.
At this level of maturity, you may not have established a dedicated incident response team, but you have defined the participants of the process (usually different roles). Assign a single point of contact for the process, known to all relevant stakeholders. Ensure that the point of contact knows how to reach each participant, and define on-call responsibilities for those who have them.
When security incidents happen, document all actions taken. Protect this information from unauthorized access.
Do you respond to detected incidents?
You have a defined person or role for incident handling |
You document security incidents |
No |
Yes, for some incidents |
Yes, for at least half of the incidents |
Yes, for most or all of the incidents |
Understanding and efficient handling of most security incidents
Establish and document the formal security incident response process. Ensure the documentation covers topics such as the agreed incident classification, the roles and responsibilities of process participants, and guidelines for performing Root Cause Analysis on high severity incidents.
Ensure a knowledgeable and properly trained incident response team is available both during and outside of business hours. Define timelines for action, and set up a war room. Keep hardware and software tools up to date and ready for use at any time.
Do you use a repeatable process for incident handling?
You have an agreed upon incident classification |
The process considers Root Cause Analysis for high severity incidents |
Employees responsible for incident response are trained in this process |
Forensic analysis tooling is available |
No |
Yes, for some incident types |
Yes, for at least half of the incident types |
Yes, for most or all of the incident types |
Efficient incident response independent of time, location, or type of incident
Establish a dedicated incident response team that is available around the clock and responsible for continuous process improvement with the help of regular RCAs. For distributed organizations, define and document logistics rules for all relevant locations where sensible.
Document detailed incident response procedures and keep them up to date. Automate procedures where appropriate. Keep all resources necessary for these procedures (e.g., separate communication infrastructure or a reliable external location) ready to use. Detect and correct unavailability of these resources in a timely manner.
Carry out incident and emergency exercises regularly. Use the results for process improvement.
Define, gather, evaluate, and act upon metrics on the incident response process, including its continuous improvement.
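Two commonly used metrics, sketched with assumed incident record fields.

```python
from datetime import timedelta

def mean_hours(deltas: list[timedelta]) -> float:
    if not deltas:
        return 0.0
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

def incident_metrics(incidents: list[dict]) -> dict:
    """incidents: [{"occurred": dt, "detected": dt, "resolved": dt}, ...]"""
    return {
        "mean_time_to_detect_h": mean_hours(
            [i["detected"] - i["occurred"] for i in incidents]),
        "mean_time_to_resolve_h": mean_hours(
            [i["resolved"] - i["detected"] for i in incidents]),
    }
```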
Do you have a dedicated incident response team available?
The team performs Root Cause Analysis for all security incidents unless there is a specific reason not to do so |
You review and update the response process at least annually |
No |
Yes, some of the time |
Yes, at least half of the time |
Yes, most or all of the time |
To learn more about Stream guidance for the SAMM model, see the Stream guidance page.
OWASP SAMM is published under the CC BY-SA 4.0 license diff --git a/model/operations/operational-management/stream-a/index.html b/model/operations/operational-management/stream-a/index.html index 8e79558b..a81b407e 100644 --- a/model/operations/operational-management/stream-a/index.html +++ b/model/operations/operational-management/stream-a/index.html @@ -1,13 +1,13 @@ -