Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature.
Repackage and automate the TCK tests provided by the jsonb API project. This ensures we run all applicable pre-existing tests and verifies that the jsonb implementation we ship, Yasson, passes all tests required for certification.
Re-ran the existing com.ibm.ws.jsonb_fat test suite using Jakarta EE 10 features.
Wrote a new test suite, io.openliberty.jakarta.jsonb.internal_fat, to test new Jakarta EE 10 features.
List of FAT projects affected:

| FAT project | Purpose |
| --- | --- |
| io.openliberty.jakarta.jsonb.3.0_fat_tck | Regression and New Function Testing |
| io.openliberty.jakarta.jsonb.internal_fat | New Functionality |
| com.ibm.ws.jsonb_fat | Regression Testing - Container features |
| Package Name | Purpose |
| --- | --- |
| ee.jakarta.tck.json.bind.api.annotation | Tests that annotations such as `@JsonbNillable` and `@JsonbProperty` work |
| ee.jakarta.tck.json.bind.api.builder | Tests the `JsonbBuilder` API methods |
| ee.jakarta.tck.json.bind.api.config | Tests the `JsonbConfig` API methods |
| ee.jakarta.tck.json.bind.api.exception | Negative tests to ensure the correct exceptions are thrown |
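As a concrete illustration of what the api.builder, api.config, and api.exception packages cover, here is a minimal sketch of the Jsonb API surface they exercise (the `Person` class and its values are illustrative, not taken from the TCK):

```java
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.JsonbConfig;
import jakarta.json.bind.JsonbException;

public class JsonbApiSketch {

    // Illustrative POJO; JSON-B binds the public fields by default
    public static class Person {
        public String name;
        public int age;
    }

    public static void main(String[] args) throws Exception {
        // api.builder / api.config: create a Jsonb instance from a JsonbConfig
        JsonbConfig config = new JsonbConfig().withFormatting(true);
        try (Jsonb jsonb = JsonbBuilder.create(config)) {
            Person p = new Person();
            p.name = "Bob";
            p.age = 30;
            System.out.println(jsonb.toJson(p));

            // api.exception: malformed input must fail with a JsonbException
            try {
                jsonb.fromJson("{not valid json", Person.class);
            } catch (JsonbException expected) {
                System.out.println("caught expected JsonbException");
            }
        }
    }
}
```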
@JsonbNillable is allowed on methods and fields and determines whether a JSON null value is written or the property is omitted entirely. The jsonbContainer feature is used to add a fake JSON-B provider that switches the default PROPERTY_NAMING_STRATEGY to LOWER_CASE_WITH_DASHES.
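A sketch of both behaviors follows; the `Contact` class is illustrative, and the naming-strategy override is applied here through `JsonbConfig` rather than through a fake provider registered as a bell, as the FAT does:

```java
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.JsonbConfig;
import jakarta.json.bind.annotation.JsonbNillable;
import jakarta.json.bind.config.PropertyNamingStrategy;

public class NillableExample {

    public static class Contact {
        public String firstName = "Ada";

        // JSON-B 3.0 allows @JsonbNillable directly on a field:
        // a null value is written as JSON null instead of being omitted
        @JsonbNillable
        public String middleName; // stays null
    }

    public static void main(String[] args) {
        // Switch the naming strategy to LOWER_CASE_WITH_DASHES,
        // mirroring what the fake provider in the FAT changes by default
        JsonbConfig config = new JsonbConfig()
                .withPropertyNamingStrategy(PropertyNamingStrategy.LOWER_CASE_WITH_DASHES);
        Jsonb jsonb = JsonbBuilder.create(config);

        // firstName serializes under the key "first-name";
        // middleName appears with an explicit null value
        System.out.println(jsonb.toJson(new Contact()));
    }
}
```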
Uses the jsonbContainer + jsonp features with a bell pointing to a third-party implementation of Jsonb.
Tests jsonb running in a servlet and CDI bean. Ensures the correct jsonb provider is used. Tests previous release features.
JSONBInAppTest
Uses the jsonbContainer + jsonpContainer features with a bell pointing to Yasson (the RI) and a third-party implementation of jsonp.
Tests jsonb running in a servlet and CDI bean. Ensures the correct jsonb provider is used. Tests previous release features.
JSONBTest
Uses jsonb and jsonp features
Tests jsonb running in a servlet and CDI bean. Ensures the correct jsonb provider is used. Tests previous release features.
JSONPContainerTest
Uses the jsonpContainer feature with a bell pointing to a third-party implementation of jsonp.
Tests jsonp running in a servlet. Ensures the correct jsonp provider is used. Tests previous release features.
JsonUserFeatureTest
Uses the jsonbContainer + jsonpContainer features with a bell pointing to Yasson (the RI) and a third-party implementation of jsonp, along with user features that use jsonb and jsonp during their activate methods.
Tests that jsonb and jsonp can be used from an OSGi service's activate method via injection.
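Several of the tests above assert that the expected JSON-B provider is the one actually loaded. The standard lookup those tests exercise is `JsonbProvider.provider()`, sketched here (the class name check is illustrative):

```java
import jakarta.json.bind.spi.JsonbProvider;

public class ProviderCheck {
    public static void main(String[] args) {
        // JsonbProvider.provider() performs the ServiceLoader lookup that the
        // container tests exercise: with the plain jsonb feature this resolves
        // to Yasson, while with jsonbContainer it resolves to whatever
        // implementation the configured bell library supplies.
        JsonbProvider provider = JsonbProvider.provider();
        System.out.println("Loaded JSON-B provider: " + provider.getClass().getName());
    }
}
```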
Test strategy
What functionality is new or modified by this feature?
Allow @JsonbNillable at the field and method level, and deprecate nillable=true/false on @JsonbProperty
Allow @JsonbTypeDeserializer and @JsonbTypeAdapter to be specified at the parameter level
Introduce @JsonbRequired to designate which @JsonbCreator fields are optional versus required
When deserializing a JsonValue-typed attribute, a JSON null value is deserialized to JsonValue.NULL rather than Java null
Polymorphic serialization/deserialization
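Of the items above, polymorphic serialization/deserialization is the largest addition in JSON-B 3.0. A minimal sketch using the new @JsonbTypeInfo and @JsonbSubtype annotations (the type hierarchy is illustrative, not from the test suite):

```java
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.annotation.JsonbSubtype;
import jakarta.json.bind.annotation.JsonbTypeInfo;

public class PolymorphismExample {

    // Declare the type-discriminator key and the permitted
    // subtypes on the parent type
    @JsonbTypeInfo(key = "@type", value = {
            @JsonbSubtype(alias = "dog", type = Dog.class),
            @JsonbSubtype(alias = "cat", type = Cat.class)
    })
    public interface Animal {}

    public static class Dog implements Animal {
        public String name = "Rex";
    }

    public static class Cat implements Animal {
        public String name = "Whiskers";
    }

    public static void main(String[] args) {
        Jsonb jsonb = JsonbBuilder.create();

        // The alias is written under the "@type" key, so the
        // concrete subtype can be recovered on deserialization
        String json = jsonb.toJson(new Dog(), Animal.class);
        Animal roundTrip = jsonb.fromJson(json, Animal.class);
        System.out.println(json + " -> " + roundTrip.getClass().getSimpleName());
    }
}
```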
What are the positive and negative tests for that functionality? (Tell me the specific scenarios you tested. What kind of tests do you have for when everything ends up working (positive tests)? What about tests that verify we fail gracefully when things go wrong (negative tests)?)
Previously outlined in io.openliberty.jakarta.jsonb.internal_fat section
What manual tests are there (if any)? (Note: Automated testing is expected for all features with manual testing considered an exception to the rule.)
There are no manual tests
Confidence Level
Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths.
3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk etc. Please also provide links to the follow on work that is needed to close the gaps (should you deem it needed)
KyleAure changed the title to "Feature Test Summary For JSON Binding (JSONB) 3.0 for Jakarta EE 10" on May 23, 2022.
Voting Results: 5
Voting Log: https://ibm-cloud.slack.com/archives/C31DXH4GJ/p1653420724727189