[Merged by Bors] - Feature/s3 extension defaults #192
Conversation
I'm gonna review this after lunch
Hi, I just went through the Review Checklist; the only thing that's missing is an update to the Changelog. I'll test the functionality now.
I think the "Integration tests added" checklist item is a good point here. The current integration test has a Druid cluster definition with the s3 section defined, so we should have a second test without the s3 section to cover both cases. I think that test is part of the ticket here and would need to be created. Edit: at the moment our ingestion test uses a dummy s3 config. We should remove that, since it should not be needed anymore with this PR. And I think it would be good to have two ingestion tests: one with the s3 section in the cluster definition and one without (see the sketch after this comment).
It would also be good to split out the smoke test again; I think it's good to have that as a separate test. Sorry for all my change requests!
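To make the two ingestion-test variants concrete, here is a rough sketch of how the two Druid cluster definitions could differ. Only the `s3`/`endpoint` field is taken from this PR's description; the apiVersion, kind, and every other field name and value are assumptions for illustration, and required settings such as role groups, deep storage, and metadata storage are omitted.

```yaml
# Sketch only: field names other than spec.s3.endpoint are illustrative.
---
# Variant 1: no s3 section -> druid-s3-extensions should not be loaded
apiVersion: druid.stackable.tech/v1alpha1   # assumed group/version
kind: DruidCluster
metadata:
  name: ingest-no-s3
spec:
  version: 0.22.1
  zookeeperConfigMapName: druid-znode
  # deep storage, metadata storage and role groups omitted for brevity
---
# Variant 2: s3 section with an endpoint -> druid-s3-extensions is loaded
# and druid.s3.endpoint.url is set as a runtime property
apiVersion: druid.stackable.tech/v1alpha1
kind: DruidCluster
metadata:
  name: ingest-with-s3
spec:
  version: 0.22.1
  zookeeperConfigMapName: druid-znode
  s3:
    endpoint: http://minio:9000   # example endpoint
  # deep storage, metadata storage and role groups omitted for brevity
```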
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM now!
bors merge
Pull request successfully merged into main. Build succeeded:
Description
It is not immediately evident from the Apache Druid documentation that only one s3 endpoint can be used per Druid cluster: the endpoint cannot be entered in the console UI and must be supplied as a runtime property (only buckets/baseKeys can be entered in the console). The credentials can be passed as environment variables, as runtime properties, or directly via the UI, so they do not necessarily need to exist as runtime properties.
The upshot of this is that the druid-s3-extension requires an s3 endpoint in order to be initialized: making the loading of the s3 extension conditional on the provision of an `s3:endpoint` in our CR means that s3 functionality will only be activated in the console if it can actually be used (i.e. an endpoint is also available).
The following have been tested:
- quickstart: include the s3 extension in the load list and set a value for `druid.s3.endpoint.url`
- ingest sample and s3 data (adding credentials/bucket in the UI)
- operator + hdfs deep storage, without s3 definition in the CR: ingestion of sample data
- operator + hdfs deep storage, with s3 definition in the CR: ingestion of s3 data
- operator + s3 deep storage: ingestion of s3 and non-s3 data
- existing integration tests
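As an illustration of the intended effect (not the operator's actual output), the generated Druid configuration could be pictured roughly like this: with `s3.endpoint` present in the CR, `druid-s3-extensions` appears in the extensions load list and `druid.s3.endpoint.url` is rendered; without it, neither entry is emitted. The ConfigMap name, property-file layout, and the other load-list entries below are assumptions.

```yaml
# Hypothetical excerpt of a ConfigMap carrying Druid's runtime.properties,
# shown only to illustrate the conditional behaviour described above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: druid-broker-config   # illustrative name
data:
  runtime.properties: |
    # rendered when spec.s3.endpoint is set in the DruidCluster CR:
    druid.extensions.loadList=["druid-kafka-indexing-service","druid-s3-extensions"]
    druid.s3.endpoint.url=http://minio:9000
    # when spec.s3.endpoint is absent, druid-s3-extensions is left out of the
    # load list and druid.s3.endpoint.url is not rendered at all
```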
Review Checklist
Once the review is done, comment `bors r+` (or `bors merge`) to merge.