Test Strategy
Liberty Tools Eclipse incorporates:
- our own UI, building upon the Liberty Maven/Gradle plugins and other Eclipse core components (such as the launching of Run/Debug configurations) and m2e/Buildship (the Maven and Gradle Eclipse tooling, respectively)
- integration of a set of Language Server implementations and extensions for domain-specific code assist: LSP4MP (MicroProfile APIs), LSP4Jakarta (Jakarta EE 9/10 APIs), and LCLS (the bootstrap.properties and server.env server config files, with an XML LS extension for server.xml support)
We have automated integration tests that run on every pull request, via GitHub Actions, on each of macOS, Linux, and Windows. We also have a Jenkins pipeline that runs the same tests on Linux, on a weekly schedule and before each release.
These tests can also be run locally via Maven and from the Eclipse IDE.
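For illustration, a single UI integration test in this setup might look roughly like the SWTBot-style JUnit sketch below. The class name, view title, and assertion are placeholders chosen for the example, not the actual test code in the repository.

```java
import static org.junit.Assert.assertTrue;

import org.eclipse.swtbot.eclipse.finder.SWTWorkbenchBot;
import org.eclipse.swtbot.eclipse.finder.widgets.SWTBotView;
import org.eclipse.swtbot.swt.finder.junit.SWTBotJunit4ClassRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

// Hypothetical sketch of a UI-level integration test; the real tests in the
// repository may use different view names, helpers, and setup.
@RunWith(SWTBotJunit4ClassRunner.class)
public class DashboardSmokeIT {

    private final SWTWorkbenchBot bot = new SWTWorkbenchBot();

    @Test
    public void dashboardListsImportedProject() {
        // "Liberty Dashboard" is an assumed view title, used for illustration only.
        SWTBotView dashboard = bot.viewByTitle("Liberty Dashboard");
        dashboard.show();

        // Expect at least one project to appear in the dashboard tree after the
        // suite's setup has imported a test project into the workspace.
        assertTrue(dashboard.bot().tree().getAllItems().length > 0);
    }
}
```

Tests of this shape drive the real workbench UI inside a provisioned Eclipse instance, which is what allows the same suite to run under GitHub Actions, Jenkins, a local Maven build, or the Eclipse IDE.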
List of FAT projects affected
None. (No runtime FAT projects are affected; this checklist comes from the OL runtime repo.)
Test strategy
What functionality is new or modified by this feature?
All functionality is new.
What are the positive and negative tests for that functionality? (Tell me the specific scenarios you tested. What kind of tests do you have for when everything ends up working (positive tests)? What about tests that verify we fail gracefully when things go wrong (negative tests)?)
The table in this Box note (IBM only, sorry: https://ibm.ent.box.com/notes/1157516815468?s=10zd7i79jhtt1smt4kwu71xnfs3ymizm) contains details.
What manual tests are there (if any)? (Note: Automated testing is expected for all features with manual testing considered an exception to the rule.)
Manual Testing
There are a few key areas where we rely on manual testing:
1. We do NOT exercise the function of the various language servers as they are integrated into the Liberty Tools Eclipse environment. We rely on manual testing of that integration, i.e. manual testing of the ultimate function of the LS components within Liberty Tools Eclipse. Note, however, that each LS component contains its own automated tests; these abstract away the integration aspect of the various IDE environments and exercise the LS operations at a lower, "core" level.
2. We do NOT exercise our automated tests against an installation of our feature installed from the Eclipse Marketplace on top of one of the standard Eclipse packages. In other words, we assume that the Eclipse install (the collection of installed and activated plug-ins and features) that we construct via our test mechanisms (using Tycho, various Eclipse PDE technologies, etc.) is substantially equivalent to the Eclipse install that an end user ends up with.
After spending the release cycle working on this, we believe that item 1 continues to be a priority and a gap we should close. We would like to at least include tests covering a sampling of key functions (snippet support, completion, diagnostics, quick fixes), without needing to cover every, or even many, variations of each; a rough sketch of the kind of check we have in mind follows below.
On the other hand, we have not yet identified a real gap from ignoring item 2. It's possible we'll discover such a gap one day, e.g. some difference between the set of plug-ins and versions an end user ends up with and the set our Tycho configuration provisions, but at the moment it doesn't seem like a priority.
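To make the follow-on work for item 1 more concrete, the kind of check we have in mind, shown here at the "core" LS level rather than driven through the IDE, might look roughly like the following LSP4J sketch. How the server instance and document URI are obtained is glossed over, and none of the names below come from the actual test code in any of these repositories.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

import org.eclipse.lsp4j.CompletionItem;
import org.eclipse.lsp4j.CompletionList;
import org.eclipse.lsp4j.CompletionParams;
import org.eclipse.lsp4j.Position;
import org.eclipse.lsp4j.TextDocumentIdentifier;
import org.eclipse.lsp4j.jsonrpc.messages.Either;
import org.eclipse.lsp4j.services.LanguageServer;

public class CompletionSmokeCheck {

    // 'server' is an already-initialized language server under test
    // (e.g. an LSP4MP or LCLS instance); wiring it up is outside this sketch.
    static void assertCompletionOffered(LanguageServer server, String uri) throws Exception {
        // Request completions at line 0, character 0 of the given document.
        CompletionParams params = new CompletionParams(
                new TextDocumentIdentifier(uri), new Position(0, 0));

        CompletableFuture<Either<List<CompletionItem>, CompletionList>> future =
                server.getTextDocumentService().completion(params);

        // The LSP completion response is either a plain list or a CompletionList.
        Either<List<CompletionItem>, CompletionList> result = future.get();
        List<CompletionItem> items = result.isLeft()
                ? result.getLeft()
                : result.getRight().getItems();

        if (items.isEmpty()) {
            throw new AssertionError("Expected at least one completion item for " + uri);
        }
    }
}
```

The gap we want to close is asserting these same behaviors (completion, diagnostics, quick fixes, snippets) as they surface through the Liberty Tools Eclipse editors, rather than by calling the server directly as above.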
Confidence Level
3 - Ideally we would have automated testing for the LS integration. However, the fact that the LS(s) have their own automated testing mitigates this, and we mark this as a 3 rather than a 2.
Criteria (left from template)
Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:
0 - No automated testing delivered
1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.
2 - We have delivered reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths.
3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths; however, we are confident in the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.
4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.
5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.
Based on your answer above, for any answer other than a 4 or 5, please provide details of what drove your answer. Please be aware that it may be perfectly reasonable in some scenarios to deliver with any of the values above: we may accept that no automated testing is needed for some features, and we may be happy with low levels of testing on samples, for instance, so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk, etc.? Please also provide links to the follow-on work that is needed to close the gaps (should you deem it needed).