
if/then/else validation sequence tests #436

Closed
gregsdennis opened this issue Sep 26, 2020 · 45 comments

@gregsdennis
Member

I had an issue raised in my new implementation that highlighted a bug around if/then/else appearing in the "wrong" order in a schema. Would be a good few tests to have.

I'm on mobile, so I'm just making a note, really. I'll try to work up a PR later.

@Relequestual
Member

This is confusing. A JSON object, and therefore a JSON Schema, has no order. Arrays have order, but objects do not.

A JSON file may have a textual order in which keys appear in an object, but I've never seen a JSON parser include that information once JSON is parsed into something native.

Could you clarify what you mean here?

@ssilverman
Member

ssilverman commented Sep 27, 2020

If a parser has a bug, then it’s always a good idea to have a test for it, even if we think “that shouldn’t happen because of assumption A, B, or C.” Just because your implementation assumes something doesn’t mean other implementations do as well.

To wit: JSON parsers aren’t required to have non-ordered objects, nor are they required to have non-duplicated keys in objects. It just so happens that it’s very common to implement them as an unordered “map” with unique keys.

I’m with Greg on this one. If an implementation breaks on some schema, then a test for why it broke is a good idea.

@ssilverman
Member

Ok, I know y’all are clamoring for a link: https://tools.ietf.org/html/rfc8259#section-4 (and kinda section 9, but that’s a distant relationship). :)

@gregsdennis
Member Author

Regardless of how keys exist in a file, I still have to iterate over them, meaning I get them in some order.

I must process the if before the then, or it doesn't work. If I don't take special care and just process them as I iterate over them, the then may be given to me first, but at that point, I don't have any annotations from an if, so it gets skipped.

This is wrong, and it deserves a test to ensure that implementations aren't making that mistake.
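Greg's failure mode can be sketched like this (a minimal illustration in Python rather than .NET; `check` is a toy keyword checker invented for this example, not any real implementation):

```python
import json

# Key order in the parsed dict follows the document text: "then" before "if".
SCHEMA = json.loads('{"then": {"minimum": 10}, "if": {"type": "integer"}}')

def check(subschema, instance):
    """Toy checker covering only the keywords used in this example."""
    if subschema.get("type") == "integer" and not isinstance(instance, int):
        return False
    if "minimum" in subschema and instance < subschema["minimum"]:
        return False
    return True

def validate_buggy(schema, instance):
    """Order-dependent: keywords are handled as they are iterated.

    When "then" is seen before "if", there is no "if" outcome yet,
    so "then" is silently skipped -- the bug described above."""
    if_outcome = None
    for keyword, subschema in schema.items():
        if keyword == "if":
            if_outcome = check(subschema, instance)
        elif keyword == "then" and if_outcome is True:
            if not check(subschema, instance):
                return False
        elif keyword == "else" and if_outcome is False:
            if not check(subschema, instance):
                return False
    return True

def validate_correct(schema, instance):
    """Order-independent: resolve "if" before consulting "then"/"else"."""
    if "if" not in schema:
        return True
    if check(schema["if"], instance):
        return "then" not in schema or check(schema["then"], instance)
    return "else" not in schema or check(schema["else"], instance)
```

With the keywords serialized as then, if, the buggy validator accepts 5 even though the schema should reject it; the order-independent version rejects it regardless of key order.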

@Relequestual
Member

So when you deserialise JSON and then get the keys of the object, you get them in the order they were in the file?

In all the languages I've worked in, getting the keys from an object is never ordered. Therefore I'd never considered that anyone would simply iterate over them to process a schema.

Given that, unless the deserialised JSON keys are in the same order as they were in the file, you can't put such a test in JSON.

@gregsdennis
Member Author

They're not "ordered," but there is some sequence in which the implementation processes them. No implementation runs them in perfect parallel.

@gregsdennis
Member Author

Manatee.Json, Newtonsoft.Json, and System.Text.Json are all capable of outputting the exact text they read in, which means they process them in the order of the file.

If any implementation reads a file at all, it must read them in the order of the file simply by virtue of reading the bytes.
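This is easy to observe with Python's `json` module, for instance: object members are parsed into a `dict`, and dicts preserve insertion order (guaranteed since Python 3.7), so the keys come back in document order:

```python
import json

text = '{"then": {}, "if": {}, "else": {}}'
parsed = json.loads(text)

# The parsed dict's keys follow the order of the bytes in the file.
print(list(parsed))  # ['then', 'if', 'else']
```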

@Relequestual
Member

What I'm trying to say here is: I understand the problem, but because of the nature of the problem, it cannot be reliably tested in this test suite, because the order, once parsed, won't be the same as it was in the JSON.

@Relequestual
Member

If you did, you'd sometimes get false positives, especially if the order of keys is deterministic, which it is in some languages.

@Relequestual
Member

@ssilverman While we test that order in arrays doesn't affect applicators that have to happen in some order, and that's fine, we can't test the JSON parser in the test suite.

There are MANY tests.
This is my go-to link on this issue:
http://seriot.ch/parsing_json.php

@notEthan
Contributor

a fair number of json parsers preserve key order, including the one built into ruby. a test that key order does not affect if/then/else validation sounds like a good addition to me. I don't really understand the opposition to it - of course it won't make any difference in parsers where key order is not preserved, but in the not-particularly-rare case that it is preserved, order independence is worth affirming.

it's probably worthwhile to do this for other order-dependent keys as well.

@Julian
Member

Julian commented Sep 27, 2020

There are other tests like this in the suite. It seems like a good idea to me too.

You can test this by having two different tests with different serialized orders. It's true that as JSON they're supposed to be equal but as you say in languages with ordered maps you still can get bugs.

I'd just say in my typical style though that these kinds of tests are exactly one more reason I am a stickler about running automated scripts over the suite :), since a test like this will likely get totally screwed up if the parser used, e.g., sorts the keys upon reserializing.

But yeah +1 from me on adding these.
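A pair of cases differing only in serialized order might look like this in the suite's file format (the schemas and descriptions below are made up for illustration, not the tests that were eventually merged):

```json
[
    {
        "description": "if/then with if serialized first",
        "schema": { "if": { "exclusiveMaximum": 0 }, "then": { "minimum": -10 } },
        "tests": [
            { "description": "valid through then", "data": -1, "valid": true },
            { "description": "invalid through then", "data": -100, "valid": false }
        ]
    },
    {
        "description": "if/then with then serialized first",
        "schema": { "then": { "minimum": -10 }, "if": { "exclusiveMaximum": 0 } },
        "tests": [
            { "description": "valid through then", "data": -1, "valid": true },
            { "description": "invalid through then", "data": -100, "valid": false }
        ]
    }
]
```

As JSON values the two schemas are equal; only the byte order differs, which is exactly what an order-dependent implementation would trip over.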

@Relequestual
Member

I'm not yet convinced. I'm going to ask some more knowledgeable colleagues.

@handrews
Contributor

handrews commented Sep 27, 2020

[EDIT: I read this issue first thing in the morning, which you'd think I'd know not to do by now, and got it turned around in my head- the following is left up for amusement value only 😅 ]

OK absolutely flat-out no. JSON keys are NOT ordered. The fact that many parsers do preserve order (and some preserve order at least most of the time- I've encountered some that only do so up to some size, although don't ask me for a reference now, it was years ago) is completely irrelevant. The spec is 100% clear on this (from the section @ssilverman referenced):

JSON parsing libraries have been observed to differ as to whether or
not they make the ordering of object members visible to calling
software. Implementations whose behavior does not depend on member
ordering will be interoperable in the sense that they will not be
affected by these differences.

That means that the ONLY interoperable behavior is not depending on member ordering. From the perspective of JSON Schema, "interoperability" is important across all languages/libraries/environments, so just having one library/language/environment that is consistently ordered within itself is irrelevant. There exist valid RFC 8259-compliant JSON parsers that produce unpredictable ordering, therefore that is the behavior that MUST be supported. JSON Schema authors can and have always been able to write keywords in any order, and as the person who added if/then/else, I can say there is not and never has been any requirement that they appear in order, nor was it ever even briefly considered.

The JSON Schema data model is also intentionally divorced from the JSON text representation (and JSON Schemas, which can be JSON schema instances, are therefore handled according to the JSON Schema data model), for numerous reasons including the unpredictability of object parsing order.


json-schema-org/json-schema-spec#996 provides a machine-readable solution for noting these and other dependencies. Given the information in that file format, it is trivial to note that when you see then or else, you MUST first check for if and, if it is present, evaluate it. And then evaluate either then or else (but if if is not present, neither then nor else is evaluated). Likewise for dependencies such as additionalProperties on properties and patternProperties, and for unevaluated* on all in-place applicators regardless of the specific keyword.
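The dependency map described above could be sketched as data plus a small ordering routine (the table format and keyword lists here are illustrative guesses at the idea, not the actual contents of json-schema-spec#996):

```python
# Hypothetical keyword-dependency table: each keyword lists the keywords
# whose outcomes it needs before it can itself be evaluated.
KEYWORD_DEPENDENCIES = {
    "then": ["if"],
    "else": ["if"],
    "additionalProperties": ["properties", "patternProperties"],
}

def evaluation_order(present_keywords):
    """Order a schema's keywords so each one comes after all of its
    dependencies, regardless of their textual order in the document."""
    ordered, seen = [], set()

    def visit(keyword):
        # Skip keywords already placed, or dependencies not in this schema.
        if keyword in seen or keyword not in present_keywords:
            return
        seen.add(keyword)
        for dep in KEYWORD_DEPENDENCIES.get(keyword, []):
            visit(dep)
        ordered.append(keyword)

    for keyword in present_keywords:
        visit(keyword)
    return ordered
```

For example, `evaluation_order(["then", "if"])` yields `["if", "then"]`; a `then` with no `if` present has no dependency to wait on, though per the rule above the evaluator must still skip it.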

Ordering. Is. Not. Significant. Ever.

@Julian
Member

Julian commented Sep 27, 2020

I'm not sure what we're arguing about here. Is someone actually saying these tests (to ensure in fact ordering is not significant) shouldn't go in?

If so, please elaborate on why.

@Relequestual
Member

I'm not sure what we're arguing about here. Is someone actually saying these tests (to ensure in fact ordering is not significant) shouldn't go in?

If so, please elaborate on why.

I think @handrews got confused here.

I am arguing that they should not go in.

My argument is that, for implementations which are doing it wrong, they could, by chance, get a false positive, and not realise there's a problem. I feel that a test which may provide a false positive is worse than no test.

@handrews
Contributor

My argument is that, for implementations which are doing it wrong, they could, by chance, get a false positive, and not realise there's a problem. I feel that a test which may provide a false positive is worse than no test.

This I think is why I was confused, idk what we're testing here. Nothing in this test suite should have anything to do with parsing JSON.

@notEthan
Contributor

Ordering. Is. Not. Significant. Ever.

this is the exact thing that the proposed tests would help to ensure.

@handrews
Contributor

@notEthan

this is the exact thing that the proposed tests would help to ensure.

Why does JSON Schema need to test this?

@Relequestual
Member

Parsing JSON is a misnomer here. It's not actually about parsing JSON. It's about: if you do JSON Schema wrong, and the parser keeps an object's keys ordered, and you then iterate over those keys in order to do your JSON Schema processing, you're doing it wrong.

Such a test would be fine if it could be reliably tested explicitly.

I have no problem modifying an existing if/then/else test, but not specifying that key ordering is being tested. Because you CANNOT reliably test that from a common test suite. It's dangerous to potentially suggest you can. We must not.

@notEthan
Contributor

Why does JSON Schema need to test this?

because it helps ensure that implementations behave consistently and correctly?

I don't know what criteria define "need" in this test suite. this seems like a thing that provides value.

@Relequestual
Member

Relequestual commented Sep 27, 2020

Follow up...

https://twitter.com/clidus/status/1310221854309744640?s=21

"Unreliable test is worse than no test, as people lose faith in the tests and assume they can ignore failures.

Then you deploy to production, something critical goes down and you have to have a post-mortem on why people are ignoring test results.

True story."

@Julian
Member

Julian commented Sep 27, 2020

A JSON Schema implementer can have a bug here.

I don't follow any way this could cause a flaky test, @notEthan has the right response above to match my opinion.

But I'm on the road, so I can't really follow the speed this issue is moving at.

This to me is an obvious good additional test. If that's not how we net out I'll try and read the back and forth later.

@Julian
Member

Julian commented Sep 27, 2020

(properties and required are by the way another good example here. I bet all the existing tests have those with properties preceding required but it's easy to misimplement those in this same situation)

@Relequestual
Member

(properties and required are by the way another good example here. I bet all the existing tests have those with properties preceding required but it's easy to misimplement those in this same situation)

Not sure I follow. Properties and additionalProperties would be a good example of a comparable test, if additionalProperties were first.

@Relequestual
Member

A JSON Schema implementer can have a bug here.

I don't follow any way this could cause a flaky test, @notEthan has the right response above to match my opinion.

But I'm on the road, so I can't really follow the speed this issue is moving at.

This to me is an obvious good additional test. If that's not how we net out I'll try and read the back and forth later.

In some languages, you can never persist the order of an object's keys, and it's always random.

If the order of keys is always random, then this proposed test could sometimes pass and sometimes fail, without any changes, if the implementation incorrectly relies on the order of an object's keys (like Greg's does).

It so happens that Greg's keys are consistently ordered. If they weren't, the new test would intermittently pass and fail.

We do not want to say we are specifically testing something, which in a good number of languages, will frequently provide a false positive. That would be bad.

Such a test doesn't belong here, because it's not reliable.

We can highlight it as something you could test for (although it's a bad way to do it in the first place...), and you could write a language-specific test which returns a specific set of ordered keys from a function which gets an object's keys, but that's not expressible in this test suite.

@Julian
Member

Julian commented Sep 27, 2020

The point is to test languages which load serialized JSON objects into ordered maps in the order of the serialized object. That test is deterministic.

If you wanted to go further you could add a test in every permuted order, and then you'd cover even languages which use some arbitrary but deterministic (nonrandomized) order, but I bet even @gregsdennis wasn't suggesting such a thing (not that it's a bad idea to me but certainly it's less likely to uncover a bug than explicitly inverting the common order).
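Generating every permuted order mechanically is straightforward; a sketch (the keyword subschemas are arbitrary examples):

```python
import itertools
import json

# One serialized schema per permutation of the three keywords. This relies
# on Python's json module writing dict keys in insertion order.
keywords = {
    "if": {"exclusiveMaximum": 0},
    "then": {"minimum": -10},
    "else": {"multipleOf": 2},
}

schemas = [
    json.dumps({name: keywords[name] for name in order})
    for order in itertools.permutations(keywords)
]
```

All six strings parse to the same JSON value; only the member order differs.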

It's true you can still have a bug if your language uses a random or arbitrary ordered map and you unluckily randomly load all existing tests in an order that hides an order-dependent logic bug in your implementation. I don't see how intentionally not having a test that covers a common set of cases helps that.

(And if I had to make a prediction, the above hypothetical scenario would never happen.)

@notEthan
Contributor

notEthan commented Sep 27, 2020

I have no problem modifying an existing if/then/else test, but not specifying that key ordering is being tested. Because you CANNOT reliably test that from a common test suite. It's dangerous to potentially suggest you can. We must not.

a test with if/then/else in another order wouldn't purport to be testing the entire notion that key order is not significant. it's just a test that a valid schema containing the keywords in a different order produces the correct validation result.

I am having trouble following the logic in your last comment. you highlight a case where an incorrect implementation with random key order could unpredictably pass or fail this test, but a failure there still positively indicates an implementation error. that you can imagine an implementation that has a false pass does not mean the test is bad.

@Julian
Member

Julian commented Sep 27, 2020

(said another way on the last part of @notEthan's comment: flaky tests are only bad if they fail when things are correct. A test that catches a bug only 60% of the time, but never fails when the implementation is correct, is still better than no test at all)

@ssilverman
Member

ssilverman commented Sep 27, 2020

"Unreliable test is worse than no test, as people lose faith in the tests and assume they can ignore failures.

This is not an unreliable test. Here’s why:

Test 1: if, then, else. Expected Pass.
Test 2: then, if, else. Expected Pass.

Parser 1 (unordered objects):
Test 1: Pass
Test 2: Pass

Parser 2 (ordered objects, but with correct if/then/else):
Test 1: Pass
Test 2: Pass

@gregsdennis’s parser:
Test 1: Pass
Test 2: Fail

@Relequestual: can you show why this test is unreliable? Am I missing something? Of course, if there’s an “expected fail” test (for the same structure, that is), that would not be correct, but there isn’t here. Key ordering is not completely being tested, only that different orderings behave identically.

Someone might look at this and say, “this is dumb, because it’s testing the exact same thing.” However, we could add to the descriptions that this might affect order-dependent parsers.

@handrews
Contributor

@ssilverman as someone who spent a good 15 years as a professional QA engineer / QA architect specializing in test automation, I am very comfortable agreeing with @Relequestual that this test is worse than useless.

Your Parser 1 scenario is inaccurate. Regardless of the order in the text, the JSON Schema implementation may quite likely always receive the keywords in the order if, then, else, in which case an implementation that only works if the keys are in that order will always pass when it should in fact fail a test based on the stated condition.

Do not attempt to convince me that no such parser exists- the fact that one could exist is sufficient.

The only way for this to be a valid test case is if the test framework can control the internal iteration order of the keys. The test suite is designed to be language/library/implementation-agnostic, so this is not possible.

Therefore the actual requirement is untestable with this test design. The choice of JSON Parser is not under our control, and the iteration order of the underlying system is often not even under the implementation's control.

The test is invalid as a black-box systems test. A unit test could cover it, and if we want to make recommendations as to unit tests that implementations should cover, we can certainly do that.

But there is no valid test case here for our test suite's architecture.

@handrews
Contributor

The only way for this to be a valid test case is if the test framework can control the internal iteration order of the keys. The test suite is designed to be language/library/implementation-agnostic, so this is not possible.

This is, in fact, one of the reasons unit tests are a thing: a good test automation architecture is layered, with clearly understood scope for each layer.

Implementation authors have a responsibility to correctly map the output of the JSON Parser into the JSON Schema data model. They may do that trivially and rely on code that sorts out dependencies. They may impose an order on it as they read it in. We don't know or care, and we don't have access to the appropriate control point to verify it.

Yes, it's possible that a faulty implementation could "pass" the test suite because of the limits of black box testing. That's quite common. Perhaps we could narrow the false negatives, but we can't eliminate them.

If you want to have the regular if/then/else tests that attempt to make it more likely to catch ordering problems by writing the schemas with if last in the object, that's fine, but I don't think we should provide a suite that alleges it can catch ordering problems.

@jdesrosiers
Member

@Relequestual

If the order of keys is always random, then this proposed test could sometimes pass and sometimes fail, without any changes, if the implementation incorrectly relies on the order of an object's keys

I can't tell if this is a poor choice of words or a misunderstanding. The ordering of properties in a JSON object is not random, it's undefined. The parser has some rule, you just can't count on another parser to have the same rule. It's highly unlikely that given the same input, the rule would change from one execution to another. I can run my tests a million times and the properties will be ordered the same way. Yes, flaky tests are very very bad, but that's not what we should expect from adding tests like this.

However, let's consider that you have a JSON parser that randomizes properties or is otherwise not completely deterministic. If you have this bug, your tests will already be flaky. Adding another test that changes the order doesn't make the problem worse.

Therefore, adding these tests won't cause any problems that weren't already there, but might catch some bugs. The results will either be deterministic, or they will be flaky in a way that has nothing to do with the test suite. It won't necessarily catch all bugs, but that's always been the nature of testing.

@gregsdennis
Member Author

gregsdennis commented Sep 27, 2020

@handrews - The test suite is designed to be language/library/implementation-agnostic...

No, the spec is supposed to be language-agnostic. The test suite carries many examples of tests that came out of bugs in specific implementations. There are tests that I don't have to concern myself with simply because I'm in .Net, and they don't apply.

Regarding false positives, there are any number of tests that still pass without any keyword processing at all. This happened when I was developing a new validator, ground-up. I can't be expected to check that the passes are valid passes. That it passes every time should be sufficient. If it does so incorrectly but consistently, why should I look into it? It's like a student's view of algebra class in school: if you get the right answer, it doesn't matter how you got it, even though the teacher wants to validate how you got there.

This is actually a good argument for why we need output in the tests. It ensures the "how." But until we have that, we can't check how implementations arrive at their answers, only that they get the right one.

If I have a flaky test, that's grounds to investigate that test, at which point I find a bug. When I fix the bug, the test consistently passes. Then I'm happy the test exists because it helped make my implementation better.

@Julian
Member

Julian commented Sep 27, 2020

I think there's been enough discussion here :).

I'm going to make a maintainer's decision that these can go in if @gregsdennis submits them.

I'll put some rationale later for why that is, and some guidelines for which tests are reasonable for inclusion in the suite (ones to be honest I didn't think needed writing down until now but I guess now's as good a time as any).

But let's save each other the long back and forth.

@ssilverman
Member

ssilverman commented Sep 27, 2020

@handrews I hear your point about the layering.

Here’s how I see it: at worst, there are duplicate tests, because ordering shouldn’t matter for if/then/else. At best, it catches some errors. As it is, certain keyword combinations depend on the presence of others (if/then/else, contains/minContains, etc.), which definitely brings “ordering concepts” to my mind.

We have a case where @gregsdennis wrote something that failed, which raises the possibility, however small, that others may encounter that too. So what seem like duplicate tests to most implementations, causing no harm, are actually different tests for Greg’s implementation.

I’ll also add that I agree with @gregsdennis’s point about the test suite already containing lots of implementation-dependent tests.

@handrews
Contributor

@Julian fine with me if you put it in, but y'all don't understand proper allocation of test responsibilities and are trying to compensate for people who aren't doing their own part and I find that irritating.

Y'all can argue what you want all you want, but I built an actual career on this so don't lecture me about test design theory.

@Relequestual
Member

The parser has some rule, you just can't count on another parser to have the same rule. It's highly unlikely that given the same input, the rule would change from one execution to another. I can run my tests a million times and the properties will be ordered the same way. Yes, flaky tests are very very bad, but that's not what we should expect from adding tests like this. - @jdesrosiers

Actually, this is the case with the standard Perl parser. The order in which an object's keys are printed out depends (I believe) on the available memory (and its shape) at the time of execution. We can run identical code and get different results each time. There is a SPECIFIC module for serialising to a JSON string in a deterministic manner (which is useful for tests) for exactly this reason.

If you have this bug, your tests will already be flaky. Adding another test that changes the order doesn't make the problem worse.

That's a good argument.

Therefore, adding these tests won't cause any problems that weren't already there, but might catch some bugs. The results will either be deterministic, or they will be flaky in a way that has nothing to do with the test suite. It won't necessarily catch all bugs, but that's always been the nature of testing.

Sure, I agree, but my point still stands: there's still a chance of a false positive where someone believes their implementation works FINE but it actually does not.

I appreciate we cannot test everything, but an explicit test for something which we know can flakily pass is something I don't feel we should include in the test suite.

I would be FINE with a page which details additional tests an implementer should consider.

@Relequestual
Member

@Julian OK.
I still think there are unconsidered implications, but I'm less convinced they pose such a great problem as I was initially (thanks @jdesrosiers). Consensus is go for PR; I'll get behind it.

@Julian
Member

Julian commented Sep 27, 2020

OK, now that I'm back at a computer. Some quick rationale, and some guidelines (likely ones that will/should go in the README, which I can try to do as well). I want to preface that I don't think anything below is new, so don't take it as some sort of policy change. These are again just things I unfortunately never wrote down and just implicitly assumed were obvious. It's OK that some are not, or that some are even controversial.

First and foremost the reasoning -- something I think I've said a few times recently: this repository is not the spec. The specification is a careful document, with people pulling in different directions all wanting new features, or clarifications, or fixes for things that don't work a certain way or other -- the bar is high to convince that a change is worthwhile and advances the specification. We all want JSON Schema existing for a long long while (something folks on this thread have done great jobs at regardless of its toll on our collective energy).

In this repository, the bar is low for additions. (I'll talk about what that bar is in a minute.) The reason the bar is low is that generally speaking, the addition of new tests is low risk for implementations. As long as the test is correct, if it only benefits some small subset of implementers, it's still a net win and typically does not harm others. It does of course incur additional maintenance burden on the test suite. That is a tradeoff I have, until now, been happy to make. But a contributor or anyone proposing additions to the suite does not and has not needed to defend heavily against counterarguments.

Here is an attempt to make explicit what I have been using as the bar for as long as I can remember:

  • additional tests MUST be correct behavior as far as is dictated by the specification (there are some nuances here around optional behavior, or behavior the spec says is OK in languages where other behavior is just not possible or convenient which we can ignore for the minute.)

  • additional tests SHOULD NOT be directly covered by existing tests

  • additional tests SHOULD be in "minimal form" -- by which we call a test "minimal" if it does not include additional behavior beyond what is required for the behavior the test covers

  • additional tests SHOULD be added to all drafts they apply to (or to the subset of those drafts deemed "maintained drafts" when we move to freezing older versions)

  • additional tests MUST NOT ever fail for an implementation that is correct under the specification (this would be reiterating the above, but I want to stress if we ever had a somehow flaky test that would fail even for an implementation with no bugs, we would kill that test and find something else to do).

  • additional tests SHOULD be applicable in at least 2 "environments" -- with environments being intentionally slightly vague, but approximating "languages" -- in other words, tests that are extremely specific for a particular language and no other create lots of burden (I personally do not know every language under the sun, and if I get asked a question about some test that is about some very niche environment, I just won't know what to do with it. So ideally, tests we add should be possible to apply in multiple different environments. E.g., there are multiple languages that lack bool types. Or that only have access to float and no fixed precision type. Or that have ordered map JSON parsers :) In particular, this was slight motivation for e.g. Add basic tests for prototype safety #414 being semi-rejected in favor of only a small number of tests + a script for Javascript implementers.

  • additional tests MUST NOT break API guarantees of the test suite itself without a specific discussion and migration plan (this needs further elaboration to clarify exactly what the current guarantees of the suite are, but the point here as an example is "you can't add tests in which the object in the schema property of the test is not a valid JSON Schema. This is a guarantee that all tests have followed, and which is intended to exist forever. If we want to add tests for scenarios in which that does not hold, it must be after discussion.")

  • additional tests SHOULD NOT be merged so long as the sanity checks ("CI for the suite itself") are failing (which generally aims only to test some of the above API guarantees of the suite)

  • additional tests MUST NOT attempt to clarify the specification itself independently. In the case of questionable intent in the specification, either its author should be asked for the implied intent of the specification or the test MUST be deferred (i.e. rejected) until a specification with explicit decision on its behavior is published.

(I'm sure I may have forgotten one or two things, including maybe even an important one. If I have, I'll edit this comment and the likely follow-up PR adding it to the README.) But assuming I haven't -- that's it. If the criteria are met, we should be merging the tests. We can tidy and describe them as clearly as we can so that we know what they are there for. But we err on the side of allowance. If an implementer is saying "this is useful to me and otherwise I would have to include the test locally in my implementation", and another implementer agrees, we should accept the test and save the duplication that may result.

I hope at least that reasoning is understandable, even if we disagree on this specific case. I also see there's room to clarify that this test suite is indeed just a test suite, not formal verification. Someone writing an implementation may certainly still have bugs. If we wish to have supplementary guidance for implementers on where those bugs may lie that sounds fantastic. But it doesn't exist yet.

Now, slightly more broadly --

I hope no one's feelings are/were/will be hurt. I wanted to cut this off where I did (and perhaps even earlier) -- these long back-and-forth threads do us no benefit. I take this one specifically as a good moment because I'm personally somewhat detached from it (even though I guess I did voice a strong opinion, ultimately to be honest I don't "care" much in isolation about this specifically).

But because of that I want to indeed cut this off -- I would encourage all of the wonderful people on this thread to instead invest the energy that goes into these long quote-ing back-and-forth kinds of conversations into helping me review PRs! There are like 8 of us who all jumped in on this small niche issue with really minor consequence (even if you disagree, at least trust me when I say there are already tests of this general form in the suite, so if the battle is "lost", it was "lost" ages ago).

If you are disappointed or angry, please! Let's make some sort of additional forward progress. The time for these long discussions can be later, when we all feel better. I know I don't feel good about where we are -- which is again a reason to cut this off -- this kind of conversation is draining both for the contributor and for maintainers (I'll say me). It takes away energy from other, more productive paths.

In conclusion -- I like and respect all of you :). Honestly.

Let's drop this topic though, and move forward -- if you strongly disagree with the way I phrased the reasoning above, I suspect you'll be able to voice it on a PR adding it. Until then, be well folks...

@handrews
Contributor

handrews commented Sep 27, 2020

@Julian

But a contributor or anyone proposing additions to the suite does not and has not needed to defend heavily against counterarguments.

This is a good rule for this repository and I will do my best to remember it. Please feel free to quote this comment at me :P

@Relequestual
Member

@Julian there are a few specific places you can put things like a code of conduct if you so wish: https://github.com/json-schema-org/JSON-Schema-Test-Suite/community

In terms of THIS issue, I think we'd all agree it's pretty unusual to disagree like this over ADDING tests. I was concerned it would be detrimental, but no longer SO much that I'm uncomfortable with it.

Thanks again for all YOUR hard work @Julian
I like to think the core of us (including relative newcomers) are like a big family, and we can sometimes disagree, but once the issue is laid to rest, move on and continue to work together. (Although I recognise that not everyone's family is like that, but you get my point.)

gregsdennis added a commit to gregsdennis/JSON-Schema-Test-Suite that referenced this issue Sep 27, 2020
@Julian
Member

Julian commented Sep 27, 2020

@Julian there are a few specific places you can put things like a code of conduct if you so wish: https://github.com/json-schema-org/JSON-Schema-Test-Suite/community

Hmmmm this is likely a very good suggestion. I'm not sure whether the above belongs in part there, in the README, in both, or what exactly yet. Advice more than welcome (as are PRs, or lemme know if we already have a CoC in any other repo I should crib).

And yeah again the feeling is quite mutual!

@karenetheridge
Member

karenetheridge commented Nov 10, 2020

The ordering of properties in a JSON object is not random, it's undefined. The parser has some rule, you just can't count on another parser to have the same rule. It's highly unlikely that given the same input, the rule would change from one execution to another.

FWIW, I know of an architecture/implementation that does exactly this: when a JSON object is read from text into the internal data format, not only is the order of the keys indeterminate, but that ordering changes from one execution to the next, even within the same runtime instance.

However, let's consider that for some reason you have a JSON parser that randomizes properties or is not completely deterministic. If you have this bug, your tests will already be flaky.

That's not a bug; that's just the way it is. What would be a bug is if the JSON Schema implementation relied on this indeterminate ordering in a place where a specific ordering is mandated. There are lots of places, such as the order in which the keys under the properties keyword are processed, where ordering is not mandated and the evaluation outcome would be unchanged if the ordering were different.
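The order-independence point can be made concrete. As an illustrative sketch (not code from this thread), here is a tiny evaluator covering just `const`, `type`, and `if`/`then`/`else`: once the JSON text is parsed into a map, keywords are looked up by name, so the textual order of `if`/`then`/`else` cannot change the outcome.

```python
import json

def evaluate(schema: dict, instance) -> bool:
    """Minimal, illustrative evaluator for const/type/if/then/else.
    Enough to show that keyword lookup, not textual order, drives results."""
    if "const" in schema and instance != schema["const"]:
        return False
    if "type" in schema:
        expected = {"integer": int, "string": str}[schema["type"]]
        # JSON booleans are not integers, even though bool subclasses int in Python.
        if not isinstance(instance, expected) or isinstance(instance, bool):
            return False
    if "if" in schema:
        # then/else are only consulted after the "if" result is known,
        # regardless of where they appeared in the JSON text.
        branch = "then" if evaluate(schema["if"], instance) else "else"
        if branch in schema and not evaluate(schema[branch], instance):
            return False
    return True

# The same keywords in two textual orders; once parsed, they behave identically.
forward = json.loads('{"if": {"const": 1}, "then": {"type": "integer"}, "else": {"type": "string"}}')
reverse = json.loads('{"else": {"type": "string"}, "then": {"type": "integer"}, "if": {"const": 1}}')

for schema in (forward, reverse):
    assert evaluate(schema, 1) is True        # "if" passes, "then" applies
    assert evaluate(schema, "hi") is True     # "if" fails, "else" applies
    assert evaluate(schema, 2.5) is False     # "else" applies, 2.5 is not a string
```

An implementation that processes keywords in encounter order and applies `then`/`else` before it has computed the `if` result would fail on the `reverse` schema -- which is exactly the class of bug the proposed tests would catch.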

@ssilverman
Member

Fun fact: Go randomizes its map ordering.
