[JSON-API] Make key_hash idx non unique #11102
Conversation
Force-pushed from 2af3fc1 to fb89094.
@@ -720,7 +720,7 @@ private final class PostgresQueries(tablePrefix: String, tpIdCacheMaxEntries: Lo
       s"""
       INSERT INTO $contractTableNameRaw
       VALUES (?, ?, ?::jsonb, ?, ?::jsonb, ?, ?, ?)
-      ON CONFLICT (contract_id) DO NOTHING
+      ON CONFLICT DO NOTHING
this might be too relaxed. If we insert a different contract with the same key, this will also ignore it but we should probably throw an error instead.
Not sure. The current implementation would let through the case in which we insert a different contract with the same contract identifier. In both cases, this is a ledger error, and I'm not sure we should fail. The failure should be caught earlier on. At this point, the worst thing that happens is that you corrupt a cache that you can throw away. Makes sense?
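For context on the behaviour being debated, here is a minimal, self-contained Postgres sketch using a toy table (not the actual query-store schema, which is built with doobie fragments as in the diff above): a bare `ON CONFLICT DO NOTHING` swallows a conflict on *any* unique constraint, whereas `ON CONFLICT (contract_id) DO NOTHING` only ignores duplicate contract ids and still raises on a duplicate `key_hash`.

```sql
-- Toy schema for illustration only (not the real JSON API query-store tables).
CREATE TABLE contracts_demo (
  contract_id text PRIMARY KEY,
  key_hash    text UNIQUE,
  payload     jsonb
);

INSERT INTO contracts_demo VALUES ('cid-1', 'kh-1', '{}');

-- Same contract seen again: both variants skip the row without error.
INSERT INTO contracts_demo VALUES ('cid-1', 'kh-1', '{}')
  ON CONFLICT (contract_id) DO NOTHING;

-- Different contract, same key: the targeted variant still fails ...
INSERT INTO contracts_demo VALUES ('cid-2', 'kh-1', '{}')
  ON CONFLICT (contract_id) DO NOTHING;  -- ERROR: duplicate key value violates unique constraint

-- ... while the untargeted variant silently drops the row.
INSERT INTO contracts_demo VALUES ('cid-2', 'kh-1', '{}')
  ON CONFLICT DO NOTHING;                -- 0 rows inserted, no error
```

(Each statement is assumed to run in its own transaction, as in an interactive psql session.)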
details.party.asInstanceOf[String],
userCreateCommand(domain.Party(details.party.asInstanceOf[String])),
Do not cast.
val arg = v.Record(
  fields = List(
    v.RecordField("username", Some(v.Value(v.Value.Sum.Party(username.unwrap)))),
    v.RecordField(
      "following",
      some(v.Value(v.Value.Sum.List(v.List.of(followingList)))),
    ),
  )
)
`recordFromFields`/`ShRecord` should work just as well here.
Oracle is missing.
If you run your ledger without contract key uniqueness, it's not a ledger bug. We don't have to support that yet in the JSON API, but I'd much rather blow up with an error for this than silently throw away contracts.
changelog_begin
- [JSON-API] Fixed a bug where an update of the internal contracts table failed due to a duplicate key conflict. Now it will ignore the conflicts and go on.
changelog_end
Force-pushed from fb89094 to 4e36210.
- [JSON-API] Make key_hash idx non unique #11102 (comment) is not addressed.
- Please fix the PR title, which becomes the commit summary; "batch json api bug" is not specific, and the PR fixes the bug too.
- The changelog entry is misleading, given the fix that was ultimately chosen. If you manually merge, you can edit in an improved changelog entry at merge time; otherwise, to use automerge, please edit out the old one and add a new commit with an updated one.
This PR has been created by a script, which is not very smart and does not have all the context. Please do double-check that the version prefix is correct before merging. @adriaanm-da is in charge of this release. Commit log: ``` cfcdc13 Add deprecation output for sandbox classic (#11119) ac5c52d DPP-609 Add StorageBackendTestsDeduplication (#11088) 55af7ad HttpServiceTestFixture provides a sandbox-classic ledger used for performance tests (#11128) 63ab3f3 Fix change ID references in services.rst (#11130) 31db15d Port error code changes (#11113) 9fd8182 move reusable functions from ContractsFetch to new fetch-contracts library (#11057) c2084f6 Try and improve the explanation of compatibility test exclusions (#11114) 4df9b7c Rename command completion dedup time to duration in participant code [KVL-1057] (#10901) 11bc22d LF: builtins to create, signatory, and obersver on interface payload. (#11120) d4cb1f9 KV ledgers: disable participant side command de-duplication [KVL-1083] (#11095) b9a5a83 Minor: fix trace logging in `TimeBasedWriteSetSelector` (#11117) 018e908 [JSON-API] Make key_hash idx non unique (#11102) 429f437 Remove HA indexer integration test on H2 (#11104) 9cffa1f LF: check LF transaction protobuf aginst local snapshots (#11064) d7ee278 Optional table prefix for trigger service (#11047) 268b2d3 Command deduplication - test case that uses CommandService and CommandSubmissionService [KVL-1106] (#11098) fac05f6 bypass value enrichment for the performance test (#11112) 4860271 Port file was not written to... where? (#11108) a8d703b Increase akka server request-timeout for HttpServiceIntegrationTests (#11109) 37d94aa Add unit for daily performance tests (#11107) eb4ac8a Increase timeouts for resetting database (#11103) db75f7d interface methods: Scala Typechecker (#11097) d3e6f16 [JSON-API] Migrating tests to use sandbox next (#11034) ffc8d68 Data dependencies deserialize imports (#11037) 4b31bf0 update compat versions for 1.17.0 (#11075) 9fd6326 interfaces: consuming/non-consuming iface choices (#11009) 229ce47 Data dependencies serialize imports (#11038) 5d9ec65 Add test case for parallel command deduplication using mixed clients [KVL-1090] (#11093) d64d965 kvutils: Print state updates before and after normalization. 
(#11096) be216aa update NOTICES file (#11089) 517e866 Extract common code for command dedup conformance tests [KVL-1090] (#11092) 7a4963b add `--max-inbound-message-size` flag to `daml ledger export` (#11087) 26d10b2 Drop NodeId type parameter (#10921) 3cbd922 More NonEmpty functions (#10930) bcd4686 interface methods: Speedy (#11076) 85adaab [DPP-417][DPP-595] Error codes switching - follow-up (#11074) d7b280b Remove debug print in daml-lf/interpreter build (#11091) 1ed6428 LF: move archive snapshots in a separate directory (#11081) 5e424f8 Command dedup conformance suites readme (#11051) 8290347 DPP-496 HA indexer integration test (#11033) 735c309 Switch from InputStream to Byte Array at the binary content JDBC transport (#11072) 6ae3afa fix perf reporting (#11073) bf801a6 Fix SimpleDeduplicationBasic when participant deduplication is disabled (#11083) 62234dc [Short] Remove unused code (#11079) c1d1521 interface methods: Scala AST (#11070) 7dd9c2d Remove expectations for internal failures from parallel command dedup tests (#11061) 5112599 Explicit discard in daml-lf/interpreter (#11067) 554b36c rotate release duty after 1.18.0-snapshot.20210928.7948.0.b4d00317 (#11059) d156964 Release SDK 1.17.0 (#11062) 6d8cf70 Handle interface methods in LF conversion (#11054) 87ecf1f daml-ledger: better errors for non-json responses. (#11055) ec2d26f [Divulgence pruning] Fixes divulgence pruning offset update query (#11046) f13c6d6 release 1.18.0-snapshot.20210928.7948.0.b4d00317 (#11058) d1805a3 More dependabot fun (#11063) 07b273a update NOTICES file (#11060) ``` Changelog: ``` - [JSON-API] make key_hash indexes non-unique, this fixes a bug where a duplicate key conflict was raised on the query store when the same contract was being witnessed twice by two separate parties * [Trigger Service] Enable the new `tablePrefix` setting in the `--jdbc` flag to add a prefix to all tables used by the trigger service to avoid collisions with other components using the same db-schema. * [Trigger Service] Enable the new ``--allow-existing-schema`` flag to initialize the trigger service on a database with a pre-existing schema. - [Daml Assistant] Add support for `--max-inbound-message-size` flag to the Ledger Export tool. Add cons, find, delete and deleteBy functions to NonEmpty module. ``` CHANGELOG_BEGIN CHANGELOG_END
@cocreature We landed on "neither blow up nor throw away contracts", so the perfect has vanquished the good.
See [original PR] for details. [original PR]: #11102 Note that there was a conflict, which I tried to resolve sensibly: ledger-service/http-json/src/itlib/scala/http/AbstractHttpServiceIntegrationTest.scala -- * Add failing test that covers the bug * Fix on conflict error for inserts into the contracts table changelog_begin - [JSON-API] make key_hash indexes non-unique, this fixes a bug where a duplicate key conflict was raised on the query store when the same contract was being witnessed twice by two separate parties changelog_end * move test to parent so as to test oracle query store * make key_hash indexes non-unique * use recordFromFields Co-authored-by: Akshay <[email protected]> * fix broken backport of tests Co-authored-by: Victor Peter Rouven Müller <[email protected]> Co-authored-by: Akshay <[email protected]>
RC1 did not include #11102. CHANGELOG_BEGIN CHANGELOG_END
@S11001001, that sounds suboptimal. Could you elaborate? I want to ensure we get to the bottom of this, as the original regression caused issues for customers.
Since I understand this index was originally tweaked as part of performance work, I wonder if the benchmarks were redone after this PR?
@adriaanm-da We have daily numbers from the perf tests that we post on Slack, and we haven't seen any slowdown there. Given we don't support non-unique keys in the JSON API right now, this should be fine.
@adriaanm-da I've mixed too many exotic ingredients in my metaphor. The two choices that @cocreature mentioned are both bad semantically in some way (hence the "which bad behavior is less bad" tone), but the solution implemented in this PR preserves good semantics without either of those or any other bad implications.
Perfect :-)
Yeah, but did it vanquish the good?
  }: Future[ContractId]
  fut.map(cid => (party, cid))
}
(alice, aliceUserId) = users(0)
I think we could simplify this test a fair bit. As far as I understand, the only things that matter are:
- There is a contract with a key (no need to add a new template for that)
- We see it twice. An easy way to trigger this is changing the parties with a contract that is visible to both.
Happy to simplify that test. Back then I just did a direct transpilation of the Daml Script code reproducing the bug; I couldn't simplify the test further because I lacked the knowledge of what all the query store is doing.
@@ -684,7 +684,7 @@ private final class PostgresQueries(tablePrefix: String, tpIdCacheMaxEntries: Lo

   private[this] val contractKeyHashIndexName = Fragment.const0(s"${tablePrefix}ckey_hash_idx")
   private[this] val contractKeyHashIndex = CreateIndex(
-    sql"""CREATE UNIQUE INDEX $contractKeyHashIndexName ON $contractTableName (key_hash)"""
+    sql"""CREATE INDEX $contractKeyHashIndexName ON $contractTableName (key_hash)"""
What happens for non-unique keys on a fetch-by-key now? Do we fail the request or return an arbitrary contract with the key?
Right now, the request fails (the request is made using a call that assumes only one row is returned).
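To illustrate the answer above, here is a self-contained Postgres sketch with toy table and index names (not the actual query-store DDL): once the `key_hash` index is no longer unique, several rows can carry the same key hash, so a fetch-by-key lookup that assumes at most one row must either reject a multi-row result (as the JSON API currently does) or pick one arbitrarily.

```sql
-- Toy illustration of the index change (not the real query-store schema).
CREATE TABLE contracts_demo2 (
  contract_id text PRIMARY KEY,
  key_hash    text,
  payload     jsonb
);

-- Before this PR: CREATE UNIQUE INDEX ... ON contracts_demo2 (key_hash);
-- After this PR the index is non-unique, so duplicate key hashes are allowed:
CREATE INDEX contracts_demo2_key_hash_idx ON contracts_demo2 (key_hash);

INSERT INTO contracts_demo2 VALUES ('cid-1', 'kh-1', '{}');
INSERT INTO contracts_demo2 VALUES ('cid-2', 'kh-1', '{}');  -- now accepted

-- A fetch-by-key that expects exactly one row per key now has to handle this:
SELECT contract_id FROM contracts_demo2 WHERE key_hash = 'kh-1';  -- returns 2 rows
```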
Closes #11251. Schema changed as part of #11102. Also backported to 1.17.1 in #11143.
changelog_begin
[JSON API] Solving a bug that could cause the JSON API to return an incorrect result if a contract with the same key is observed twice required a schema change. The JSON API data needs to be dropped and the query store needs to be reset. If you are migrating from a previous version, either reset your database manually or start the HTTP JSON API with one of the options that regenerate the schema (`create-only`, `create-if-needed-and-start`, `create-and-start`).
changelog_end
Pull Request Checklist
Resolves a bug with a duplicate key conflict on the query store, raised when the same contract is witnessed twice by two separate parties.
changelog_begin
changelog_end
Changelog entries are wrapped in CHANGELOG_BEGIN and CHANGELOG_END tags.
NOTE: CI is not automatically run on non-members pull-requests for security reasons. The reviewer will have to comment with /AzurePipelines run to trigger the build.