Prototype descriptors and validation functions #1366
Conversation
These functions are created to be used during model initialization. They are the stepping stone for other validation solutions (like Python descriptors, decorators, and ValidationMixin), which will reuse parts of them. For more context, read ADR 0007. Signed-off-by: Martin Vrachev <[email protected]>
Add validation functions for all Root dictionary keys and values, as described in the spec. Signed-off-by: Martin Vrachev <[email protected]>
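As a rough illustration of what such standalone validation functions might look like, here is a minimal Python sketch; the function names and exact checks are illustrative assumptions, not the PR's actual code:

```python
# A hedged sketch of standalone validation functions in this spirit; the
# names and exact checks are illustrative, not the PR's code.

TOP_LEVEL_ROLE_NAMES = {"root", "timestamp", "snapshot", "targets"}

def validate_role_name(name):
    """Check that a Root 'roles' key is one of the 4 top-level role names."""
    if not isinstance(name, str):
        raise TypeError(f"role name must be a string, got {type(name).__name__}")
    if name not in TOP_LEVEL_ROLE_NAMES:
        raise ValueError(f"unrecognized role name: {name!r}")

def validate_keyids(keyids):
    """Check that a role's keyids are a list of unique strings."""
    if not isinstance(keyids, list):
        raise TypeError("keyids must be a list")
    if not all(isinstance(k, str) for k in keyids):
        raise TypeError("each keyid must be a string")
    if len(set(keyids)) != len(keyids):
        raise ValueError("keyids must be unique")

def validate_roles(roles):
    """Validate a Root 'roles' dictionary: both its keys and its values."""
    if not isinstance(roles, dict):
        raise TypeError("roles must be a dict")
    for name, role in roles.items():
        validate_role_name(name)
        validate_keyids(role.get("keyids", []))
```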
This experiment tries to help us envision what validation will look like if we decide to use validation functions + descriptors. What I found with this experiment is that Python descriptors can't handle validation of dictionary keys or dictionary values. More precisely, we can't validate that each of the "roles" in Root is one of the 4 metadata types, or that Root role keyids are unique. The reason is that running "root.roles = 3" invokes validation for root.roles, but running "root.roles["dwad"] = 3" first retrieves the roles dictionary and then assigns a new value to one of its elements, so the descriptor is never invoked. Signed-off-by: Martin Vrachev <[email protected]>
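A minimal, self-contained sketch of the limitation described above (the ValidatedRoles descriptor and Root class here are illustrative, not the experiment's actual code): attribute assignment goes through the descriptor's __set__, but item assignment calls __get__ followed by dict.__setitem__, which bypasses validation entirely.

```python
# Demonstrates why a data descriptor cannot intercept item assignment
# on a dict-valued attribute.

TOP_LEVEL_ROLE_NAMES = {"root", "timestamp", "snapshot", "targets"}

class ValidatedRoles:
    """Data descriptor that validates the whole dict on attribute assignment."""

    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Item access like root.roles["x"] goes through here first ...
        return getattr(obj, self._name)

    def __set__(self, obj, value):
        # ... but only whole-attribute assignment reaches this validation.
        if not isinstance(value, dict):
            raise ValueError("roles must be a dict")
        for role in value:
            if role not in TOP_LEVEL_ROLE_NAMES:
                raise ValueError(f"unrecognized role name: {role!r}")
        setattr(obj, self._name, value)

class Root:
    roles = ValidatedRoles()

    def __init__(self, roles):
        self.roles = roles  # validated by ValidatedRoles.__set__

root = Root({"root": {}, "targets": {}})

try:
    root.roles = 3  # caught: __set__ runs the validation
except ValueError as e:
    print("caught:", e)

# NOT caught: __get__ returns the dict, then dict.__setitem__ runs,
# so the descriptor never sees the new key or value.
root.roles["dwad"] = 3
print(root.roles)  # {'root': {}, 'targets': {}, 'dwad': 3}
```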
Writing down some thoughts on this (wall of text, sorry). The task at hand is not "write validation code" -- or if it was that, I now want to change it to "1. improve security against malicious JSON, and 2. make the API easy to use correctly and hard to use incorrectly (with the least amount of the most obvious code)". This is abstract, but I do have practical suggestions below. The reason I say this is that I fear we are concentrating too much on adding validation when there are other solutions (like modifying the implementation) that achieve the same goals. How should we proceed with this task? I believe we need to analyze individual items in the API one by one (starting with the small bits like strings and continuing to larger items). For each item we should consider and document the options, and only then implement the chosen improvement, if any. Many of these improvements will include validation, but we should not add validation if we can achieve the goals without it, and validation may be only part of the solution to a specific issue. Also, what exactly is validated should be spelled out somewhere -- otherwise it's impossible to tell whether the validation implementation is correct. This is hard work, but I believe it's the path to better code. Trying to implement a lot of validation at once will not lead to good quality. As examples, I went through the analysis for _type and spec_version:
I would start with individual PRs with that sort of analysis and proposed fixes. I would probably start by just adding the validation calls in constructors and maybe making the attributes properties with setters (see the sketch below), and see how it goes from there -- but I can see the value in trying other integration methods as well (descriptors/decorators)... I'm assuming a single solution is not going to cover all cases, and starting simple is good (so we can change our minds). More detailed comments:
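A hedged sketch of the "validation calls in constructors plus properties with setters" approach suggested above; the Signed class and the SemVer-like check are illustrative assumptions, not actual python-tuf code:

```python
# Validation runs in the constructor and on every later assignment,
# because the constructor assigns through the property setter.

class Signed:
    def __init__(self, spec_version: str):
        self.spec_version = spec_version  # runs the setter below

    @property
    def spec_version(self) -> str:
        return self._spec_version

    @spec_version.setter
    def spec_version(self, value: str) -> None:
        # Hypothetical check: a SemVer-like "X.Y.Z" string.
        parts = value.split(".") if isinstance(value, str) else []
        if len(parts) != 3 or not all(p.isdigit() for p in parts):
            raise ValueError(f"spec_version must look like 'X.Y.Z', got {value!r}")
        self._spec_version = value

signed = Signed("1.0.0")   # OK
signed.spec_version = "1.0"  # raises ValueError via the same setter
```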
I agree with the general idea to rethink how we store and operate on the different metadata fields. I plan to follow this process for each of the metadata attributes:
Closing this one as it's no longer applicable.
A while ago we decided that it's best to research each of the individual attributes one by one and identify what level of validation it needs compared to how we use it: theupdateframework#1366 (comment). This work is ongoing and there are a couple of commits already merged for this:
- theupdateframework@6c5d970
- theupdateframework@f20664d
- theupdateframework@41afb1e
We want to be able to test the attributes' validation against known bad values. The way we want to do that is with the table testing we have added using decorators for our metadata classes defined in the new API: theupdateframework#1416. This gives us an easy way to add new cases for each of the attributes without depending on external files. Signed-off-by: Martin Vrachev <[email protected]>
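A hypothetical sketch of table testing driven by a decorator, in the spirit of the approach referenced above; the `dataset` decorator and `validate_spec_version` are illustrative assumptions, not the actual utilities from #1416:

```python
import unittest

def dataset(cases: dict):
    """Run the decorated test once per (name, value) pair in `cases`."""
    def decorator(test_func):
        def wrapper(self):
            for name, value in cases.items():
                with self.subTest(case=name):
                    test_func(self, value)
        return wrapper
    return decorator

def validate_spec_version(value):
    """Toy validator standing in for a real attribute validator."""
    if not isinstance(value, str):
        raise TypeError("spec_version must be a string")
    parts = value.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError("spec_version must look like 'X.Y.Z'")

class TestInvalidSpecVersion(unittest.TestCase):
    # Adding a new bad case is just a new table entry, no external files.
    invalid_spec_versions = {
        "non-string": 3,
        "empty": "",
        "wrong shape": "1.0",
        "non-numeric": "a.b.c",
    }

    @dataset(invalid_spec_versions)
    def test_rejects(self, bad_value):
        with self.assertRaises((ValueError, TypeError)):
            validate_spec_version(bad_value)

if __name__ == "__main__":
    unittest.main()
```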