Tests for annotations #652
Have you seen the existing output tests? I think those and this are related. Is there value in having both? Is annotations without the output format something that we want to support/encourage?
There's definitely some overlap with output format testing, but I see these as separate things and it makes sense to test them separately. Tests that just cover an output format wouldn't have been sufficient for the tool I wrote.

One problem is that the annotation utilities I wrote don't use any of the output formats. I could have built it as a transformation on an existing output format, but introducing that intermediate step was unnecessary. It would have just made the implementation more complicated and less performant. The annotation behavior is what matters; the rest is implementation details.

The other problem is that although an output format includes annotations, output format tests are not directly testing for annotation behaviors, which means they're not helpful for the kind of tooling I wrote. I could have used one of the output formats as an intermediate representation in my implementation, but testing that intermediate representation isn't the same as testing the annotation behavior in my implementation. The step of getting the annotations from the output format wouldn't be covered, and writing tests to cover that step would pretty much duplicate the cases covered in the output format.
Then I think the output tests need a different focus, and I'm not sure what that would be or how to test it. Currently, they test things like whether the right annotations are emitted and they're reported in the right places, like this.
Could you elaborate on this with some examples please? I'm not sure I understand the distinction.
I disagree. I think annotation tests serve a different purpose than output tests. It's no different than how validation tests serve a different purpose than output tests. Validation tests test the validation behavior (given a schema and an instance, what is the true/false result). That behavior isn't (and shouldn't be) dependent on the output structure. When testing the validation behavior, it doesn't matter what part of the schema failed, it only matters that it failed.

Annotation tests are the same. They test the annotation behavior (what annotations end up attached to which values). That behavior isn't (and shouldn't be) dependent on the output structure. When testing the annotation behavior, it doesn't matter what part of the schema contributed the annotation, only that a value in the instance was annotated.

The value of the output format is that it provides the details of the evaluation process to a third-party application that might want to do something with them. Let's assume a third-party application takes an output format result and transforms it into human-friendly error messages. The result of that transformation is the behavior they would want to have tests for. It's true that all of the data necessary to accomplish that task is in the output format, but that doesn't mean output format tests are helpful in testing their behavior. Output format tests are important because they are needed to ensure that a third party can take the output result from any implementation and their software's behaviors will be consistent. Otherwise the output tests don't help them test their behaviors. Annotation behaviors are no different.
Sure. Here's a schema that, when applied to an instance, annotates the

```json
{
  "type": "object",
  "properties": {
    "name": {
      "$ref": "#/$defs/name",
      "title": "Given Name"
    }
  },
  "$defs": {
    "name": {
      "type": "string",
      "title": "Name"
    }
  }
}
```

Let's say that there's a

```javascript
const output = validate(schema, instance);
const titles = annotations(output, "title", "/name");
```

Output format tests can ensure that
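To make that snippet concrete, here's a self-contained sketch of how such an `annotations` helper could work. The shape of the `output` object and the helper's signature are assumptions made up for illustration, not the spec's or any library's actual API.

```javascript
// Hypothetical stand-in for what validate(schema, instance) might return
// for the schema above applied to {"name": "Ada"}: a flat list of
// annotation units, most recently applied value first.
const output = {
  valid: true,
  annotations: [
    { keyword: "title", instanceLocation: "/name", value: "Given Name" },
    { keyword: "title", instanceLocation: "/name", value: "Name" }
  ]
};

// Collect every value a given keyword attached to one instance location,
// preserving the order in which the annotation units appear.
function annotations(result, keyword, instanceLocation) {
  return result.annotations
    .filter((a) => a.keyword === keyword && a.instanceLocation === instanceLocation)
    .map((a) => a.value);
}

const titles = annotations(output, "title", "/name");
console.log(titles); // ["Given Name", "Name"] with this stand-in data
```

The point of the sketch is that a consumer asks about an instance location and a keyword, and gets back the annotation values; which schema keywords produced them is invisible at this level.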
The examples I linked are testing for annotation behaviors.
This example is heavily influenced by your implementation, and the spec doesn't say anything about extracting annotations from the output or returning them separately. They're part of the output and should be tested accordingly. If you want to have annotations tested for specifically and separately, that's fine, but if we do that, we need to figure out what output tests are checking. Currently (aside from this PR), they're checking for annotations because that's all that the output really contains aside from validation. This is what I mean by refocusing the output tests. Maybe there's overlap between the suites, and that's just something we (I) have to accept.
Okay. Discussed offline a bit, and I think this is a good addition to the suite. I think that output testing approach needs to be reconsidered somewhat to minimize overlap between these annotation tests and the output tests (some overlap is unavoidable). I've made a proposal ☝️. |
I recently published a library for working with annotations and included a set of tests in JSON that could be adopted here if there is interest. These tests are agnostic of any output format. They just test what annotations should appear (or not appear) on which values of an instance.
Here's an example:
Notes/Comments/Rationale:

- `$schema` is only included if a test is specifically covering a draft release.
- The tests don't depend on which part of the schema contributed the annotation (`$defs`, `properties`, etc). I don't think that should matter. I think blackbox outcome testing is the right level to be testing for this kind of thing.
- `"expected"` is an array because there's always a chance that an annotation is applied multiple times to any given instance location. However, the ordering of this array is not important. I wrote the tests such that the most recently encountered value for an annotation during evaluation is first in the array. Generally, when a schema has multiple annotations for the same location, users expect the most recent value to override the previous values. The array is ordered this way so that people who expect the overriding behavior can always just choose the first value, but that ordering is an implementation choice and shouldn't be considered when writing a test harness against these tests.
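As an illustration of the shape these notes describe, a test case might look something like the following. The field names other than `"expected"` are assumptions for illustration, not necessarily the published format; `$schema` is omitted because this case isn't draft-specific.

```json
{
  "title": "title annotations through $ref and the referencing schema",
  "schema": {
    "type": "object",
    "properties": {
      "name": {
        "$ref": "#/$defs/name",
        "title": "Given Name"
      }
    },
    "$defs": {
      "name": {
        "type": "string",
        "title": "Name"
      }
    }
  },
  "instance": { "name": "Ada" },
  "location": "/name",
  "keyword": "title",
  "expected": ["Given Name", "Name"]
}
```

A harness would evaluate `schema` against `instance` and check that the `title` annotations collected at `/name` match `expected` as a set, without relying on the array's order.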