-
-
Notifications
You must be signed in to change notification settings - Fork 58
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Request: Evidence for Vulnerabilities #333
Comments
Excellent suggestion @nickvido. I think we need to clarify a few things...
|
Thanks, @stevespringett RE: Duplication and details of methods
RE: |
I'd like to get feedback from @brianf and @planetlevel if possible. |
Are you thinking this is only for CVEs? Lots of the tools and techniques listed identify vulns that will never be a CVE. Should we enable organizations to use CDX to communicate about these internal vulns? I didn't see a lot about exploitability here. Just binary "presence." Since a huge number of "vulnerabilities" are unexploitable in real systems, I think it's important to capture the evidence to help organizations decide what to do. A lot of overlap with the idea of VEX here. Vulnerabilities don't exist in a single line of code or stack frame. They typically span many files, methods, and libraries. I have a lot more thoughts, but want to be sure I'm thinking about the right use case for this. |
@planetlevel yes, CDX should (and does) allow orgs to communicate about internal vulns. This enhancement proposal would expand the specs existing support for VEX and VDR by providing evidence of how a vulnerability was discovered. The spec currently supports a description of the vulnerability, details, and recommendation, along with proof of concept including reproduction steps, environment information, and supporting material. While all that information is good (and necessary) for vulnerability communication and disclosure, it's mostly textual based. This proposal is to add a bit more machine parsable metadata about how the vulnerability was discovered. When looking at this proposal, think of it as the application as a whole, where 1st party or 3rd party code may be the culprit. |
IMO, I think the |
The follow is the proposed enumeration of techniques and the existing techniques used in component identity evidence.
@nickvido can I get some feedback on this please? Also, I'd like to get feedback from @jkowalleck, @coderpatros, and @christophergates Are there techniques that are missing that can substantiate the presence or status of a vulnerability? I'm also interested in further guidance that could fulfill the following use case:
While it is impossible to prove a negative, using evidence to build a case is an interesting perspective and could give credibility to VEX decisions. |
I'm also hopeful that we can come up with a common set of techniques that can be leveraged across component identity evidence and vulnerability presence evidence - similar to how we have a common set of external references that can be applied to virtually any object type. |
@nickvido with respect to:
I'm wondering if "presence" is the correct noun to use. We may want to choose something else. OR, we may want to have both "presence" and "absence". For example, if we had both, it could be possible to supply evidence in support of, and against the affected state of a vulnerability. If we only have "presence", then we will need to also rely on vulnerability->analysis->state being set, as we will not know if the evidence is in support of the application being affected by a vulnerability, or not affected by a vulnerability. |
Because there exists such a massive range of quality and coverage in tools, I'm not sure it helps to list the generic category of tool that reports (or doesn't report) a vulnerability. For example SAST tools range from simple grep to full dataflow. So I'm afraid people will seek to check a bunch of boxes as "proof" when really they didn't prove jack shit. There are three options that would be useful.
For (2) I'm imagining somehow detailing the contribution that the tool makes to the exploitability argument. A SAST tool reports the presence of a vulnerability, but generally doesn't have enough context to contribute to the exploitability discussion. Same with static SCA. But some tools calculate static reachability. That's one step closer to exploitability. Runtime Security tools (IAST/RASP) capture runtime reachability... that's one more step closer. Runtime Security tools also capture runtime data reachability... one more step. Pentest tools can go even further, actually demonstrating exploitability on the actual system -- that's pretty convincing evidence. For what it's worth, here's my conception of "levels" of exploitability proof... Is there any way we could make this what CDX captures? Like, how far did you get in proving exploitability?
So, a set of evidence that says But I'd be really convinced if the argument said: All you'd really need to report is the last step -- the actual exploit is 100% convincing. But I kind of like reporting the "discovery and analysis history." |
This is great. Thanks everyone! An additional request I would have would be describing the prompt(s) used to attack a Large Language (LLM) or other AI model. Currently, we are using the "properties" field to record prompts because "proofOfConcept/supportingMaterial" does not allow for labeling of what the content is. Thus, it would be difficult to make clear which prompt came first, which entries are responses, etc. As a result, we use the "properties" field to generally describe the approach used to exploit the vulnerability, for example (which is not optimal but how we are doing it now):
Optimally, however, we would have a way to look up prompts and responses in a standardized manner. A potential solution would be to add a "label" or "description" field to the supportingMaterial object. But there may be more elegant solutions. Please let me know what questions or feedback you might have. |
This ticket needs addition discussion. Moving to v1.7 so that we can have ample time to flush this out. |
Request: Evidence for Vulnerabilities
Similar to existing support for evidence for components, and other requests for evidence elsewhere, the request is to support evidence in the
Vulnerability
object. Specifically, what evidence can be provided to substantiate the presence or status of the vulnerability. Evidence can also be used in the "negative" context - to establish that a vulnerability is NOT AFFECTED, for example.identity
on component evidence) “Evidence that substantiates the presence or absence of the vulnerability.”Example
The text was updated successfully, but these errors were encountered: