First cut at security audit guidelines #125
Conversation
Closing this PR; content included in #140
security audit => assessment
Figured out how to submit a PR to @JustinCappos's original PR, so re-opened this and closed #140, which is now redundant
simplified readme with high level goals and link to guide
remove SAFE references, add outline
adjust README content for improved navigation
The document looks great and very well articulated! Nothing much to add to it; I do have one or two comments that perhaps you already discussed at some point.
* Vulnerability Response Process. Who is responsible for responding to a report? What is the reporting process? How would you respond?
* Ecosystem. How does your software fit into the cloud native ecosystem? (e.g. Flibber is integrated with both Flocker and Noodles, which covers virtualization for 80% of cloud users. So, our small number of "users" actually represents very wide usage across the ecosystem, since every virtual instance uses Flibber encryption by default.)
Within the ecosystem, is there a concern of delegation of security responsibilities from one project to another and vice versa? Perhaps a project is relying on security mechanisms of another CNCF technology which has not been reviewed yet? How would this affect the security posture of the project? I suppose that this would be a rare case that can be handled whenever it comes up.
Side comment: along the lines of a point mentioned in the earlier section about AES/sha256, I like the idea of a list of technologies and algorithms/libraries/projects whose security assumptions are accepted. This could help get project leads to reduce the assumptions they are making and/or default to more well-established security libraries/components.
+1 on explicitly listing the technologies and assumptions. This would also inform mapping which domains and subject matter experts are needed. Even more important, it might highlight areas where things should NOT have been invented (we rolled our own symmetric encryption algorithm instead of using X because we thought X was too slow... oops, we never considered side-channel attacks).
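To make the side-channel point concrete, here is a minimal sketch (Python standard library only; the key, message, and function names are illustrative, not from any project under review) of leaning on an established primitive instead of hand-rolling one. `hmac.compare_digest` compares in constant time, closing the timing side channel that a naive `==` on attacker-supplied tags would open:

```python
import hashlib
import hmac
import secrets

# Illustrative key; a real deployment would manage this properly.
key = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    # Use the standard library's HMAC rather than a hand-rolled MAC.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest runs in constant time, avoiding the timing
    # side channel that a naive `==` on attacker-supplied tags invites.
    return hmac.compare_digest(sign(message), tag)
```

This is exactly the kind of "use the well-established component" default the comment above is arguing for.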
This is covered by the assessment, but could benefit from more detailed documentation of effective techniques. I've opened an issue on that, if anyone wants to elaborate further on the need: #155
of security audits for a software project they manage. Note that it is
encouraged to have participation (shadowing) from participants that are not
yet qualified to help them gain the necessary skills to be a reviewer
in the future.
Should we add something about managing conflicts of interest between the reviewer and the project?
## Time and effort

The level of effort for the reviewers is expected to be 10 hours.
Should we require (encourage) folks to log their time so we can actually benchmark this, and track and normalize (i.e. # hours per SLOC or feature count, etc.)?
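As a sketch of what such normalization might look like (all project names and numbers below are made up for illustration, not real assessment data):

```python
# Hypothetical logged-time records for completed assessments.
reviews = [
    {"project": "flibber", "hours": 12, "sloc": 48_000},
    {"project": "noodles", "hours": 9,  "sloc": 30_000},
]

def hours_per_ksloc(review: dict) -> float:
    # Normalize review effort by code size (hours per 1,000 SLOC),
    # one of the metrics suggested above.
    return review["hours"] / (review["sloc"] / 1000)

for r in reviews:
    print(f'{r["project"]}: {hours_per_ksloc(r):.2f} h/KSLOC')
```

Even a crude metric like this would let the group sanity-check the 10-hour estimate after a handful of assessments.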
I think this is mostly to set expectations about the level of commitment. Some reviewers will want to spend more time, based on their interest in the project or level of experience. If reviewers find that the time estimate is way off, then I think running an experiment where people track time could be helpful. For now, I think we need a bit more experience with the process first.
organizations. The ideal reviewer should also have been the recipient
of security audits for a software project they manage. Note that it is
encouraged to have participation (shadowing) from participants that are not
yet qualified to help them gain the necessary skills to be a reviewer
Perhaps we should explicitly catalog what those skills are? Not only is that useful for those hoping to participate, but it lends itself to segregating the work and parallel processing. And not all experts will be experts in everything. For example, I have deep crypto review expertise, but very little Linux kernel review expertise.
There's not a clear consensus on what this is, so we decided to do a number of assessments before adding more detail -- which I think we set as 6-8 assessments, but we could revisit earlier if it causes friction in the process or there are specific recommendations that help the process.
@rficcaglia -- thanks for chiming in on #142 -- feel free to add more thoughts / ideas there on this topic, and we'll revisit at a future checkpoint
there are hidden assumptions, underestimated risk, or design issues that
harm the security of the project. It may be useful to reach out to
community members to understand the answers to some questions, especially
involving deployment scenarios and the impact of attacks.
Perhaps there should be a formal "request for comment" process where domain experts are solicited and there is a waiting period to allow those experts to weigh in, e.g. 10 days? Perhaps we should put the burden on the project under review to nominate 2-3 non-affiliated subject matter experts to participate and advise the reviewer team?
This makes sense, but I think it might be a little too heavyweight for the initial iterations of this process. We will grow and add to this over time in the most helpful ways possible, and this certainly could be one of those directions.
That is specified in outline.md: "Project posts their document to the group mailing list, allowing at least one week for review before presenting to the WG"
# Security Reviewers

Reviewers will try to understand the
I recommend we have at least N (N >= 2, 3?) reviewers to ensure diverse and possibly even conflicting opinions, lest there be too much dependence on one or two reviewers. It also encourages expansion of the network of reviewers, so we cultivate a large, diverse experience base.
This is specified in outline.md: "assigned to lead security reviewer who will recruit a second reviewer"
* Internal. How do team members communicate with each other?
* Inbound. How do users or prospective users communicate with the team?
* Outbound. How do you communicate with your users? (e.g. flibble-announce@ mailing list)
* Vulnerability Response Process. Who is responsible for responding to a report? What is the reporting process? How would you respond?
At a minimum, shouldn't this SIG be informed "formally"? I suggest there be a mechanism (GHI, commit to some markdown file, etc.) for explicitly capturing all known vulns over time.
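One hedged sketch of what such an in-repo capture mechanism might look like (the schema, field names, and identifiers below are invented for illustration; this could just as well be a markdown table or a tracked YAML file):

```python
# Hypothetical schema for recording known vulnerabilities over time,
# committed alongside the project (e.g. parsed from a vulnerabilities file).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VulnRecord:
    id: str                  # e.g. a CVE or GHSA identifier (placeholder here)
    reported: date           # when the report was received
    fixed_in: Optional[str]  # release containing the fix; None while open

# Illustrative log entries, not real vulnerabilities.
log = [
    VulnRecord("CVE-0000-0001", date(2019, 5, 1), "v1.2.3"),
    VulnRecord("CVE-0000-0002", date(2019, 6, 2), None),
]

# A reviewer (or this SIG) could then query open items mechanically.
open_vulns = [v for v in log if v.fixed_in is None]
```

The point is less the format than having a single, explicit, versioned record the SIG can be pointed at.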
how keys are likely to be managed and stored.

## Project design
* Design. A description of the system design that discusses how it works.
Is there any interest in promoting/recommending formal design specification, e.g. TLA+ and such?
This seems great if a project has it. I don't think we want to require this at the outset.
## Additional Process Notes

Iteration is expected; however, we expect quick turnaround (at most a week).
Is there a desire to have an explicit follow-up process, i.e. an annual review? I think the notion of a single point-in-time review ignores the reality that threats change over time. Someone using this review process as evidence of "trustworthiness" can be lulled into thinking something is "safe" since it was reviewed N years ago.
Yes! Added as an open issue: #152 (so we can get this PR merged for initial reviews that are currently blocked, then add more detail on the annual review process)
cloud native ecosystem

Due to the nature and timeframe for the analysis, *this review is not meant
to subsume the need for a professional security audit of the code*. Audits
I agree with separating the concerns here, but I think there would be huge benefit in providing open source code audits that do not rely on vendors and that encourage transparent and diverse methodologies. Perhaps an open code audit sub-SIG? Maybe put the burden on the USERS to volunteer to participate, and collectively encourage more active engagement with the user community instead of just passively harvesting value from open source?

I think an open source code audit would also have the side benefit of providing real-world education to a larger group of engineers on how to code securely (and how to review code for security). This is a skill that should really be part of every developer's forte, and the more we can model and inculcate secure coding skills, the better for all of us.
There's a different CNCF audit process -- we should link that here (or open an issue if it's not yet documented).
I believe I have captured all of the actionable comments as issues tagged with assessment
I checked in with @JustinCappos, who originally authored this PR based on prior security reviews that were presented to and accepted by the CNCF TOC. Based on his feedback (and prior agreement in one of the live meetings a few weeks ago), I'm going to merge this PR, and we'll address improvements with additional PRs as needed, or with data from the retrospective checkpoint after 6-8 assessments.