Merged all remaining work from the 1.3.x to master #1127
Conversation
* Added some documentation about configuring the registry UI * renamed user interface to web console * address feedback
…es (#950) * added some documentation about artifact meta-data and custom properties * address review feedback
* Added avro and json schema serde documentation, mostly to have a place to document the new Avro encoding option * updated docs based on review feedback
* Add registry client documentation * Improve custom header docs * Update con-registry-client.adoc * Update ref-registry-client.adoc * Update proc-writing-registry-client.adoc * Improve rest client docs * Address comments Co-authored-by: Eric Wittmann <[email protected]>
* re-organized the maven plugin docs a bit and added some info about "test-update" and file extensions * incorporate feedback from review
* documented how to override the topic names for kafka storage * changes based on review feedback
* Added docs for configuring default global rules * changes based on review feedback * added feedback from review
* Add artifact state docs section * Improve state docs * Address comments * Address comments
* Add Datum and Protobuf docs. * Update con-registry-serdes-types.adoc Co-authored-by: Eric Wittmann <[email protected]>
* Add check-period to globalId strategy. * Add check period test and docs.
* fix upstream branding and tidy up xref attributes * restructure and tidy up schema look up strategies * minor tidy up
@@ -74,6 +91,7 @@ public AbstractKafkaSerializer(
        T schema = toSchema(data);
        String artifactId = getArtifactIdStrategy().artifactId(topic, isKey(), schema);
        long id = getGlobalIdStrategy().findId(getClient(), artifactId, artifactType(), schema);
+       schema = getCache().getSchema(id); // use registry's schema!
@alesj @EricWittmann why is it required to fetch the just-created schema? This is causing the tests to fail in the Streams storage variant. Because the storage is asynchronous, fetching the schema here can fail (and it does fail in the tests).
> why is it required to fetch the just-created schema?
Because you can end up with the wrong schema -- the data is actually invalid, i.e. not schema compliant.
And it would all still work without problems -- on the producer side.
It would fail on the consumer side, but I guess you want to catch this earlier -- so as not to waste the network and the Kafka brokers, when it could be prevented.
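To make that concrete, here is a small, self-contained illustration (not code from this PR; the `Greeting` schema and the record are invented) of how serializing against the schema the registry actually holds rejects non-compliant data on the producer side, before anything reaches Kafka:

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class ProducerSideCheck {
    public static void main(String[] args) throws Exception {
        // The schema as the registry knows it: two required fields.
        Schema registrySchema = SchemaBuilder.record("Greeting").fields()
                .requiredString("message")
                .requiredLong("time")
                .endRecord();

        // A record that is missing the required "time" field, i.e. not schema compliant.
        GenericRecord record = new GenericData.Record(registrySchema);
        record.put("message", "hello");

        GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(registrySchema);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        try {
            // Serializing against the registry's schema fails here, on the producer,
            // instead of later on the consumer side.
            writer.write(record, encoder);
            encoder.flush();
        } catch (RuntimeException e) {
            System.out.println("Rejected before hitting Kafka: " + e);
        }
    }
}
```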
So what do you suggest? An active wait (to ensure the artifact gets created in the registry)? Or "manually" loading the cache with the schema we already have in memory? Or what else is possible?
It seems like we need to manually load the cache with the schema, but only in the case where the global ID strategy stored the schema in the registry. This is the only use-case where the fetch will fail when it should not (due to the async nature of the Streams impl). @alesj and @famartinrh agreed?
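For illustration only, a minimal sketch of that idea, assuming a simple globalId-to-schema cache (this `SchemaCache` class and its `preload` method are hypothetical stand-ins, not the actual serde cache API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical stand-in for the serializer's globalId -> schema cache.
class SchemaCache<T> {
    private final Map<Long, T> cache = new ConcurrentHashMap<>();
    private final Function<Long, T> remoteLookup;

    SchemaCache(Function<Long, T> remoteLookup) {
        this.remoteLookup = remoteLookup;
    }

    // Normal path: look the schema up in the registry by global ID.
    T getSchema(long globalId) {
        return cache.computeIfAbsent(globalId, remoteLookup);
    }

    // Pre-population path: when the globalId strategy has just created the
    // artifact itself, it already holds the schema, so it can seed the cache
    // and avoid a lookup that async storage may not be able to serve yet.
    void preload(long globalId, T schema) {
        cache.putIfAbsent(globalId, schema);
    }
}
```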
Agreed on loading the cache under those conditions; now it comes down to how we are going to implement that :)
> This is the only use case where the fetch will fail when it should not (due to the async nature of the Streams impl)

Ah yes ... that can be async, and there's not much that can be done about it, other than a retry?
Or ... actually, in some cases you already know that it's going to create a new globalId, so you could pre-populate the cache ... so there's no need for another globalId->schema lookup ... but that depends on the strategy ...
My suggestion is that we add a retry here for now so that all the tests pass. And then we add a GitHub issue to properly fix the issue by pre-caching the schema in the case where the strategy actually uploads it to the registry.
Any objections?
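As a rough illustration of the retry approach (this helper is not the code that was merged, just a sketch of the idea applied to the line under review):

```java
import java.util.function.Supplier;

final class Retry {
    // Retries the lookup a few times to give the async (Streams) storage
    // time to make the newly registered schema readable.
    static <T> T withRetry(Supplier<T> lookup, int attempts, long delayMillis) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return lookup.get();
            } catch (RuntimeException e) {
                last = e;
                try {
                    Thread.sleep(delayMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new IllegalStateException(ie);
                }
            }
        }
        if (last == null) {
            throw new IllegalArgumentException("attempts must be >= 1");
        }
        throw last;
    }
}

// Usage (hypothetical, mirroring the line under review):
//   schema = Retry.withRetry(() -> getCache().getSchema(id), 5, 100L);
```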
Hearing no objections, I have taken this approach. Note that I also had to update the CheckPeriod test so that it worked against async storages.
…ed merge effects).
…registry into merges/from-1.3.x
I merged all the changes from 1.3.x into master.