[DRAFT] 30.0.0 release notes #16505
Comments
#16371: this PR added a feature to the Router page that makes it easier to search DataSource using keywords, which also changed user behavior, so I think it should be mentioned in the release notes 😄
@asdf2014 That PR is not present in the Druid 30 branch, as it was merged after the branch cut (and cannot be backported as it is not a bugfix/regression).
@adarshsanjeev Oh, I see 😅
@adarshsanjeev Could the following in-progress bugfix #16550 be considered for inclusion?
@frankgrimes97 Currently, Druid 30 is at a stage where only regressions can be backported.
Testing 30.0.0-rc3, there are a number of regressions I'm noticing in the UI.
Apache Druid 30.0.0 contains over 407 new features, bug fixes, performance enhancements, documentation improvements, and additional test coverage from 50 contributors.
See the complete set of changes for additional details, including bug fixes.
Review the upgrade notes and incompatible changes before you upgrade to Druid 30.0.0.
If you are upgrading across multiple versions, see the Upgrade notes page, which lists upgrade notes for the most recent Druid versions.
# Upcoming removals
As part of the continued improvements to Druid, we are deprecating certain features and behaviors in favor of newer iterations that offer more robust features and are more aligned with standard ANSI SQL. Many of these new features have been the default for new deployments for several releases.
The following features are deprecated, and we currently plan to remove support in Druid 32.0.0:
- Non-SQL compliant null handling, in which null values are treated as equivalent to empty strings and `0`. For more information, see NULL values. For a tutorial on the SQL-compliant logic, see the Null handling tutorial.
- Non-strict Boolean handling: Druid now strictly uses `1` (true) or `0` (false). Previously, true and false could be represented either as `true` and `false` or as `1` and `0`, respectively. In addition, Druid now returns a null value for Boolean comparisons like `True && NULL`. For more information, see Boolean logic. For examples of filters that use the SQL-compliant logic, see Query filters.
# Important features, changes, and deprecations
This section contains important information about new and existing features.
# Concurrent append and replace improvements
Streaming ingestion supervisors now support concurrent append: streaming tasks can run concurrently with a replace task (compaction or re-indexing) if the replace task also uses concurrent locks. Set the context parameter `useConcurrentLocks` to true to enable concurrent append.
Once you update the supervisor to have `"useConcurrentLocks": true`, the transition to concurrent append happens seamlessly without causing any ingestion lag or task failures.
#16369
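A minimal sketch of a supervisor spec fragment with the context parameter set. The supervisor type and the use of a top-level `context` map are assumptions here, so check them against your supervisor spec layout:

```json
{
  "type": "kafka",
  "spec": { "...": "..." },
  "context": {
    "useConcurrentLocks": true
  }
}
```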
Druid now performs active cleanup of stale pending segments by tracking the set of tasks using such pending segments.
This allows concurrent append and replace to upgrade only a minimal set of pending segments and thus improve performance and eliminate errors.
Additionally, it helps in reducing load on the metadata store.
#16144
# Grouping on complex columns
Druid now supports grouping on complex columns and nested arrays.
This means that both native queries and the MSQ task engine can group on complex columns and nested arrays while returning results.
Additionally, the MSQ task engine can roll up and sort on the supported complex columns, such as JSON columns, during ingestion.
#16068
#16322
# Removed ZooKeeper-based segment loading
ZooKeeper-based segment loading is being removed due to known issues.
It has been deprecated for several releases.
Recent improvements to the Druid Coordinator have significantly enhanced performance with HTTP-based segment loading.
#15705
# Improved groupBy queries
Before Druid pushes realtime segments to deep storage, the segments consist of spill files.
Segment metrics such as `query/segment/time` now report on each spill file for a realtime segment, rather than for the entire segment.
This change eliminates the need to materialize results on the heap, which improves the performance of groupBy queries.
#15757
# Improved AND filter performance
Druid query processing now adaptively determines when children of AND filters should compute indexes and when to simply match rows during the scan based on selectivity of other filters.
Known as filter partitioning, it can result in dramatic performance increases, depending on the order of filters in the query.
For example, take a query like `SELECT SUM(longColumn) FROM druid.table WHERE stringColumn1 = '1000' AND stringColumn2 LIKE '%1%'`. Previously, Druid used indexes when processing filters if they were available.
That's not always ideal; imagine if `stringColumn1 = '1000'` matches 100 rows. With indexes, Druid has to find every value of `stringColumn2 LIKE '%1%'` that is true to compute the indexes for the filter. If `stringColumn2` has more than 100 values, this ends up being worse than simply checking for a match in those 100 remaining rows.
With the new logic, Druid now checks the selectivity of indexes as it processes each clause of the AND filter.
If it determines it would take more work to compute the index than to match the remaining rows, Druid skips computing the index.
The order you write filters in a WHERE clause of a query can improve the performance of your query.
More improvements are coming, but you can try out the existing improvements by reordering a query.
Put filters whose indexes are less intensive to compute, such as `IS NULL`, `=`, and comparisons (`>`, `>=`, `<`, and `<=`), near the start of AND filters so that Druid processes your queries more efficiently.
Not ordering your filters this way won't degrade performance relative to previous releases, since the fallback behavior is what Druid did previously.
#15838
# Centralized datasource schema (alpha)
You can now configure Druid to manage datasource schema centrally on the Coordinator.
Previously, Brokers needed to query data nodes and tasks for segment schemas.
Centralizing datasource schemas can improve startup time for Brokers and the efficiency of your deployment.
To enable this feature, set the following configs:
- `druid.centralizedDatasourceSchema.enabled` to true.
- `druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled` to true in your MiddleManager runtime properties.
#15817
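A minimal runtime properties sketch. Placing the first property on the Coordinator is an assumption based on the feature description:

```properties
# Coordinator runtime.properties (assumed placement)
druid.centralizedDatasourceSchema.enabled=true

# MiddleManager runtime.properties
druid.indexer.fork.property.druid.centralizedDatasourceSchema.enabled=true
```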
# MSQ support for window functions
You can now run window functions in the MSQ task engine using the context flag `enableWindowing: true`.
In the native engine, you must use a GROUP BY clause to enable window functions. This requirement is removed in the MSQ task engine.
#15470
#16229
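A hedged sketch of a window function query to run with the MSQ task engine. The datasource and columns are hypothetical, and the flag is assumed to be supplied through the query context:

```sql
-- Assumed query context: {"enableWindowing": true}
SELECT
  channel,
  __time,
  SUM(added) OVER (PARTITION BY channel ORDER BY __time) AS running_added
FROM wikipedia
```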
# MSQ support for Google Cloud Storage
You can now export MSQ results to a Google Cloud Storage (GCS) path by passing the function `google()` as an argument to the `EXTERN` function.
#16051
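A hedged sketch of an export statement. The `bucket` and `prefix` parameter names and the surrounding `EXTERN` export syntax are assumptions to verify against the export documentation; the bucket, prefix, and datasource names are hypothetical:

```sql
INSERT INTO
  EXTERN(google(bucket => 'my-bucket', prefix => 'druid/exports'))
AS CSV
SELECT __time, channel, added
FROM wikipedia
```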
# RabbitMQ extension
A new RabbitMQ extension is available as a community contribution.
The RabbitMQ extension (`druid-rabbit-indexing-service`) lets you manage the creation and lifetime of RabbitMQ indexing tasks. These indexing tasks read events from RabbitMQ through super streams.
Since super streams allow exactly-once delivery with full support for partitioning, they are compatible with Druid's modern ingestion algorithm, without the downsides of the prior RabbitMQ firehose.
Note that this uses the RabbitMQ streams feature and not a conventional exchange. You need to make sure that your messages are in a super stream before consumption. For more information, see the RabbitMQ documentation.
#14137
# Functional area and related changes
This section contains detailed release notes separated by areas.
# Web console
# Improved the Supervisors view
You can now use the Supervisors view to dynamically query supervisors and display additional information on newly added columns.
#16318
# Search in tables and columns
You can now use the Query view to search in tables and columns.
#15990
# Kafka input format
Improved how the web console determines the input format for a Kafka source.
Instead of defaulting to the Kafka input format for a Kafka source, the web console now only picks the Kafka input format if it detects any of the following in the Kafka sample: a key, headers, or more than one topic.
#16180
# Improved handling of lookups during sampling
Rather than sending a transform expression containing lookups to the sampler, Druid now substitutes the transform expression with a placeholder.
This prevents the expression from blocking the flow.
#16234
# Other web console improvements
- Added the fields Avro bytes decoder and Proto bytes decoder for their input formats #15950
- Fixed an issue with the Tasks view returning incorrect values for Created time and Duration fields after the Overlord restarts #16228
- Fixed the Azure icon not rendering in the web console #16173
- Fixed the supervisor offset reset dialog in the web console #16298
- Improved the user experience when the web console is operating in manual capabilities mode #16191
- Improved the query timer #16235
- The web console now suggests the `azureStorage` input type instead of the `azure` storage type #15820
- The download query detail archive option is now more resilient when the detail archive is incomplete #16071
- You can now set `maxCompactionTaskSlots` to zero to stop compaction tasks #15877
# General ingestion
# Improved Azure input source
You can now ingest data from multiple storage accounts using the new `azureStorage` input source schema. For example, see the sketch below.
#15630
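A hedged sketch of an `azureStorage` input source reading from two storage accounts. The account, container, and object names are hypothetical:

```json
{
  "type": "azureStorage",
  "objectGlob": "**.json",
  "uris": [
    "azureStorage://storageAccount1/container/prefix/file1.json",
    "azureStorage://storageAccount2/container/prefix/file2.json"
  ]
}
```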
# Added a new config to `AzureAccountConfig`
The new config `storageAccountEndpointSuffix` lets you configure the endpoint suffix so that you can override the default and connect to other endpoints, such as Azure Government.
#16016
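A hedged sketch of the setting as a runtime property. The `druid.azure.` prefix and the Azure Government suffix shown are assumptions, not confirmed values:

```properties
# Assumed placement under the druid.azure.* namespace; value shown is illustrative
druid.azure.storageAccountEndpointSuffix=blob.core.usgovcloudapi.net
```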
# Data management API improvements
Improved the Data management API as follows:
- Fixed the `markUsed` and `markUnused` APIs, where an empty set of segment IDs would be inconsistently treated as null or non-null in different scenarios #16145
- Updated the `markUnused` API endpoint to handle an empty list of segment versions #16198
- The `segmentIds` filter in the Data management API payload is now parameterized in the database query #16174
- The mark used and mark unused APIs now accept segment versions, for example `(interval, [versions])`. When `versions` is unspecified, all versions of segments in the `interval` are marked as used or unused, preserving the old behavior #16141
# Nested columns performance improvement
Nested column serialization now releases nested field compression buffers as soon as the nested field serialization is complete, which requires significantly less direct memory during segment serialization when many nested fields are present.
#16076
# Improved task context reporting
Added a new field `taskContext` in the task reports of non-MSQ tasks. The change is backward compatible. The payload of this field contains the entire context used by the task during its runtime.
Added a new experimental interface `TaskContextEnricher` to enrich the context with use-case-specific logic.
#16041
# Other ingestion improvements
- `RetryableS3OutputStream` changes that can help to determine whether to adjust chunk size #16117
- `InternalServerError` #16186
- `task_allocator_id` columns #16355
- Updated the `MarkOvershadowedSegmentsAsUnused` Coordinator duty to also consider segments that are overshadowed by a segment that requires zero replicas #16181
- Fixed an issue where `numSegmentsKilled` is reported incorrectly #16103
- `index_parallel` tasks #16042
- `TaskReportMap` #16217
- `isOvershadowed` when there is a unique minor version for an interval #15952
- Replaced the `EntryExistsException` thrown when trying to insert a duplicate task in the metadata store; Druid now throws a `DruidException` with error code `entryAlreadyExists` #14448
# SQL-based ingestion
# Manifest files for MSQ task engine exports
Export queries that use the MSQ task engine now also create a manifest file at the destination, which lists the files created by the query.
During a rolling update, older versions of workers don't return a list of exported files, and older Controllers don't create a manifest file.
Therefore, export queries run during this time might have incomplete manifests.
#15953
# `SortMerge` join support
Druid now supports `SortMerge` join for `IS NOT DISTINCT FROM` operations.
#16003
# State of compaction context parameter
Added a new context parameter `storeCompactionState`.
When set to `true`, Druid records the state of compaction for each segment in the `lastCompactionState` segment field.
#15965
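A minimal sketch of supplying the parameter through a task or query context; the surrounding spec is omitted:

```json
{
  "context": {
    "storeCompactionState": true
  }
}
```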
# Selective loading of lookups
We have built the foundation of selective lookup loading. As part of this improvement, `KillUnusedSegmentsTask` no longer loads lookups.
#16328
# MSQ task report improvements
Improved the task report for the MSQ task engine as follows:
- `actualWorkTimeMS = durationMs - pendingMs` #15966
- `segmentReport` logs the type of the segment created and the reason behind the selection #16175
# Other SQL-based ingestion improvements
# Streaming ingestion
# Streaming completion reports
Streaming task completion reports now have an extra field `recordsProcessed`, which lists all the partitions processed by that task and a count of records for each partition.
Use this field to see the actual throughput of tasks and decide whether you should scale your workers vertically or horizontally.
#15930
# Improved memory management for Kinesis
Kinesis ingestion memory tuning config is now simpler:
- `recordsPerFetch` and `deaggregate` are no longer used.
- `fetchThreads` can no longer exceed the budgeted amount of heap (100 MB or 5%).
- Use `recordBufferSizeBytes` to set a byte-based limit rather than a records-based limit for the Kinesis fetch threads and main ingestion threads. We recommend setting this to 100 MB or 10% of heap, whichever is smaller.
- Use `maxBytesPerPoll` to set a byte-based limit for how much data Druid polls from the shared buffer at a time. The default is 1,000,000 bytes.
As part of this change, the following properties have been deprecated:
- `recordBufferSize`, use `recordBufferSizeBytes` instead
- `maxRecordsPerPoll`, use `maxBytesPerPoll` instead
#15360
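A hedged sketch of the new byte-based settings in a Kinesis supervisor `tuningConfig`; the values shown only illustrate the recommendations above, and the rest of the spec is omitted:

```json
{
  "tuningConfig": {
    "type": "kinesis",
    "recordBufferSizeBytes": 100000000,
    "maxBytesPerPoll": 1000000
  }
}
```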
# Improved autoscaling for Kinesis streams
The Kinesis autoscaler now considers max lag in minutes instead of total lag.
To maintain backwards compatibility, this change is opt-in for existing Kinesis connections.
To opt in, set `lagBased.lagAggregate` in your supervisor spec to `MAX`.
New connections use max lag by default.
#16284
#16314
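A hedged sketch of opting in through a lag-based autoscaler config in the supervisor spec; the exact nesting of `lagAggregate` under the autoscaler config is an assumption to verify against the autoscaler documentation:

```json
{
  "ioConfig": {
    "autoScalerConfig": {
      "enableTaskAutoScaler": true,
      "autoScalerStrategy": "lagBased",
      "lagAggregate": "MAX"
    }
  }
}
```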
# Parallelized incremental segment creation
You can now configure the number of threads used to create and persist incremental segments on the disk using the `numPersistThreads` property.
Use additional threads to parallelize segment creation and prevent ingestion from stalling or pausing frequently, as long as sufficient CPU resources are available.
#13982
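A minimal sketch of the property in a streaming `tuningConfig`; the supervisor type and value are illustrative assumptions:

```json
{
  "tuningConfig": {
    "type": "kafka",
    "numPersistThreads": 2
  }
}
```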
# Kafka streaming supervisor topic improvement
Druid now properly handles previously found partition offsets.
Prior to this change, updating a Kafka streaming supervisor topic from single to multi-topic (pattern), or vice versa, could cause old offsets to be ignored spuriously.
#16190
# Querying
# Dynamic table append
You can now use the `TABLE(APPEND(...))` function to implicitly create unions based on table schemas.
For example, the two queries sketched below are equivalent.
Note that if the same columns are defined with different input types, Druid uses the least restrictive column type.
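Under the assumption of two hypothetical tables with compatible schemas, the implicit union looks like this:

```sql
SELECT * FROM TABLE(APPEND('events_2023', 'events_2024'))
```

and is roughly equivalent to an explicit union such as:

```sql
SELECT * FROM events_2023
UNION ALL
SELECT * FROM events_2024
```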
#15897
# Added SCALAR_IN_ARRAY function
Added the `SCALAR_IN_ARRAY` function for checking whether a scalar expression appears in an array: `SCALAR_IN_ARRAY(expr, arr)`.
#16306
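A hedged usage sketch with a hypothetical datasource and column:

```sql
SELECT COUNT(*) AS matched
FROM wikipedia
WHERE SCALAR_IN_ARRAY(countryName, ARRAY['France', 'Japan', 'Brazil'])
```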
# Improved PARTITIONED BY
If you use the MSQ task engine to run queries, you can now use the following strings in addition to the supported ISO 8601 periods:
- `HOUR` - Same as `'PT1H'`
- `DAY` - Same as `'P1D'`
- `MONTH` - Same as `'P1M'`
- `YEAR` - Same as `'P1Y'`
- `ALL TIME`
- `ALL` - Alias for `ALL TIME`
#15836
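A hedged sketch of an MSQ statement using one of the new strings; the table names are hypothetical:

```sql
REPLACE INTO events_rollup
OVERWRITE ALL
SELECT __time, channel, COUNT(*) AS cnt
FROM events_raw
GROUP BY 1, 2
PARTITIONED BY DAY
```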
# Improved catalog tables
You can validate complex target column types against source input expressions during DML INSERT/REPLACE operations.
#16223
You can now define catalog tables without explicit segment granularities.
DML queries on such tables need to have the PARTITIONED BY clause specified.
Alternatively, you can update the table to include a defined segment granularity for DML queries to be validated properly.
#16278
# Double and null values in SQL type ARRAY
You can now pass double and null values in SQL type ARRAY through dynamic parameters.
For example:
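The following is a hedged sketch of a SQL API request body passing a double array with a null element as a dynamic parameter; the parameter object shape (`type`/`value`) and the table and column names are assumptions:

```json
{
  "query": "SELECT * FROM my_table WHERE SCALAR_IN_ARRAY(doubleCol, ?)",
  "parameters": [
    { "type": "ARRAY", "value": [104.3, 55.8, null] }
  ]
}
```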
#16274
# `TypedInFilter` filter
Added a new `TypedInFilter` filter to replace `InDimFilter` and improve performance when matching numeric columns.
#16039
`TypedInFilter` can run in replace-with-default mode.
#16233
# Heap dictionaries clear out
Improved object handling to reduce the chances of running out of memory with Group By queries on high cardinality data.
#16114
# Other querying improvements
- `radiusUnit` element to the `radius` bound #16029
- `Unable to vectorize expression` exception #16128
- `ColumnType` to `RelDataType` conversion for nested arrays #16138
- Windowing `scanAndSort` query issues on top of joins #15996
- `REGEXP_LIKE`, `CONTAINS_STRING`, and `ICONTAINS_STRING` now correctly return null for null value inputs in ANSI SQL compatible null handling mode (the default configuration). Previously, they returned false #15963
- `ARRAY_CONTAINS` and `ARRAY_OVERLAP` with null left side arguments, as well as `MV_CONTAINS` and `MV_OVERLAP` #15974
- Queries like `bit_xor() is null` no longer return the wrong result #16237
- `select array[true, false] from datasource` #16093
- `java.util.regex.Pattern` to match `%` #16153
- Updated `IndexedTable` to reject building the index on complex types to prevent joining on complex types #16349
- `enableWindowing` context parameter for window functions #16229
# Cluster management
# Improved retrieving active task status
Improved performance of the Overlord API `/indexer/v1/taskStatus` by serving the status of active tasks from memory rather than querying the metadata store.
#15724
# Other cluster management improvements
- `Pac4jSessionStore` to 128 bits, which is FIPS compliant #15758
# Data management
# Changes to Coordinator default values
Changed the default values for the Coordinator service as follows:
- `druid.coordinator.kill.period` has been changed from `P1D` to the runtime value of `druid.coordinator.period.indexingPeriod`. This default value can be overridden by explicitly specifying `druid.coordinator.kill.period` in the Coordinator runtime properties.
- `killTaskSlotRatio` has been updated from `1.0` to `0.1`. This ensures that kill tasks take up at least one task slot and at most 10% of all available task slots by default.
#16247
# Compaction completion reports
Parallel compaction task completion reports now have `segmentsRead` and `segmentsPublished` fields to show how effective a compaction task is.
#15947
# `GoogleTaskLogs` upload buffer size
Changed the upload buffer size in `GoogleTaskLogs` from 15 MB to 1 MB to allow more uploads in parallel and prevent the MiddleManager service from running out of memory.
#16236
# Other data management improvements
- `created_date` entry, which can help in troubleshooting ingestion issues #15977
- `charsetFix` #16212
# Metrics and monitoring
# New unused segment metric
You can now use the `kill/eligibleUnusedSegments/count` metric to find the number of unused segments of a datasource that are identified as eligible for deletion from the metadata store by the Coordinator.
#15941 #15977
# Kafka emitter improvements
You can now set custom dimensions for events emitted by the Kafka emitter as a JSON map for the `druid.emitter.kafka.extra.dimensions` property.
For example, `druid.emitter.kafka.extra.dimensions={"region":"us-east-1","environment":"preProd"}`.
#15845
# Prometheus emitter improvements
The Prometheus emitter extension now emits `service/heartbeat` and `zk-connected` metrics.
#16209
Also added the following missing metrics to the default Prometheus emitter mapping:
`query/timeout/count`, `mergeBuffer/pendingRequests`, `ingest/events/processedWithError`, `ingest/notices/queueSize`, and `segment/count`.
#16329
# StatsD emitter improvements
You can now configure the `queueSize`, `poolSize`, `processorWorkers`, and `senderWorkers` parameters for the StatsD emitter.
Use these parameters to increase the capacity of the StatsD client when its queue size is full.
#16283
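A hedged sketch as runtime properties; the `druid.emitter.statsd.` prefix is assumed from the emitter's existing configuration style, and the values are only illustrative:

```properties
druid.emitter.statsd.queueSize=4096
druid.emitter.statsd.poolSize=1024
druid.emitter.statsd.processorWorkers=2
druid.emitter.statsd.senderWorkers=2
```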
# Improved `segment/unavailable/count` metric
The `segment/unavailable/count` metric now accounts for segments that can be queried from deep storage (`replicaCount=0`).
#16020
Added a new metric, `segment/deepStorage/count`, to support the query from deep storage feature.
#16072
# Other metrics and monitoring improvements
- Added a new `task/autoScaler/requiredCount` metric that provides a count of required tasks based on the calculations of the `lagBased` autoscaler. Compare that value to `task/running/count` to discover the difference between the current and desired task counts #16199
- Added a `jvmVersion` dimension to the `JvmMonitor` module #16262
# Extensions
# Microsoft Azure improvements
You can now use ingestion payloads larger than 1 MB for Azure.
#15695
# Kubernetes improvements
You can now configure the CPU cores for Peons (Kubernetes jobs) using the Overlord property `druid.indexer.runner.cpuCoreInMicro`.
#16008
# Delta Lake improvements
The Delta Lake input source now supports filters that you can use to filter out data files from a snapshot, reducing the number of files Druid has to ingest from a Delta table.
For more information, see Delta filter object.
#16288
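A hedged sketch of a Delta input source with a filter; the field names follow the Delta filter object described in the documentation, and the table path, column, and value are hypothetical:

```json
{
  "type": "delta",
  "tablePath": "/path/to/delta-table",
  "filter": {
    "type": "=",
    "column": "country",
    "value": "FR"
  }
}
```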
Also added a text box for the Delta Lake filter to the web console.
The text box accepts an optional JSON object that is passed down as the `filter` to the Delta input source.
#16379
# Improve performance of LDAP credentials validator
Improved performance of LDAP credentials validator by keeping password hashes in an in-memory cache. This helps avoid re-computation of password hashes, thus speeding up the process of LDAP-based Druid authentication.
#15993
# Upgrade notes and incompatible changes
# Upgrade notes
# Append JsonPath function
The `append` function for JsonPath for the ORC format now fails with an exception. Previously, it would run but not append anything.
#15772
# Kinesis ingestion tuning
The following properties have been deprecated as part of simplifying the memory tuning for Kinesis ingestion:
- `recordBufferSize`, use `recordBufferSizeBytes` instead
- `maxRecordsPerPoll`, use `maxBytesPerPoll` instead
#15360
# Improved Supervisor rolling restarts
The `stopTaskCount` config now prioritizes stopping older tasks first. As part of this change, you must also explicitly set a value for `stopTaskCount`. It no longer defaults to the same value as `taskCount`.
#15859
# Changes to Coordinator default values
Changed the following default values for the Coordinator service:
- `druid.coordinator.kill.period` (if unspecified) has changed from `P1D` to the value of `druid.coordinator.period.indexingPeriod`. Operators can choose to override `druid.coordinator.kill.period`, and that takes precedence over the default behavior.
- `killTaskSlotRatio` has been updated from `1.0` to `0.1`. This ensures that kill tasks take up only one task slot by default instead of consuming all available task slots.
#16247
# `GoogleTaskLogs` upload buffer size
Changed the upload buffer size in `GoogleTaskLogs` from 15 MB to 1 MB to allow more uploads in parallel and prevent the MiddleManager service from running out of memory.
#16236
# Incompatible changes
# Changes to `targetDataSource` in EXPLAIN queries
Druid 30.0.0 includes a breaking change that restores the behavior of `targetDataSource` to its 28.0.0 and earlier state, which differs from Druid 29.0.0 and only 29.0.0. In 29.0.0, `targetDataSource` returns a JSON object that includes the datasource name. In all other versions, `targetDataSource` returns a string containing the name of the datasource.
If you're upgrading from any version other than 29.0.0, there is no change in behavior.
If you are upgrading from 29.0.0, this is an incompatible change.
#16004
# Removed ZooKeeper-based segment loading
ZooKeeper-based segment loading is being removed due to known issues.
It has been deprecated for several releases.
Recent improvements to the Druid Coordinator have significantly enhanced performance with HTTP-based segment loading.
#15705
# Removed Coordinator configs
Removed the following Coordinator configs:
- `druid.coordinator.load.timeout`: Not needed as the default value of this parameter (15 minutes) is known to work well for all clusters.
- `druid.coordinator.loadqueuepeon.type`: Not needed as this value is always `http`.
- `druid.coordinator.curator.loadqueuepeon.numCallbackThreads`: Not needed as ZooKeeper (Curator)-based segment loading isn't an option anymore.
Auto-cleanup of compaction configs of inactive datasources is now enabled by default.
#15705
# Changed `useMaxMemoryEstimates` for Hadoop jobs
The default value of the `useMaxMemoryEstimates` parameter for Hadoop jobs is now `false`.
#16280
# Developer notes
# Dependency updates
The following dependencies have had their versions bumped:
- `nimbus-jose-jwt` to address `CVE-2023-52428` #16374
- `commons-configuration2` from 2.8.0 to 2.10.1 to address `CVE-2024-29131` and `CVE-2024-29133` #16374
- `bcpkix-jdk18on` from 1.76 to 1.78.1 to address `CVE-2024-30172`, `CVE-2024-30171`, and `CVE-2024-29857` #16374
- `nimbus-jose-jwt` from 8.22.1 to 9.37.2 #16320
- `rewrite-maven-plugin` from 5.23.1 to 5.27.0 #16238
- `rewrite-testing-frameworks` from 2.4.1 to 2.6.0 #16238
- `json-path` from 2.3.0 to 2.9.0
- `4.1.108.Final` to address `CVE-2024-29025` #16267
- `CVE-2024-23944` #16267
- `log4j.version` from 2.18.0 to 2.22.1 #15934
- `org.apache.commons.commons-compress` from 1.24.0 to 1.26.0 #16009
- `org.apache.commons.commons-codec` from 1.16.0 to 1.16.1 #16009
- `org.bitbucket.b_c:jose4j` from 0.9.3 to 0.9.6 #16078
- `redis.clients:jedis` from 5.0.2 to 5.1.2 #16074
- `9.4.53.v20231009` to `9.4.54.v20240208` #16000
- `webpackdevmiddleware` from 5.3.3 to 5.3.4 in the web console #16195
- `express` from 4.18.2 to 4.19.2 in the web console #16204
- `druid-toolkit/query` from 0.21.9 to 0.22.11 in the web console #16213
- `follow-redirects` from 1.15.1 to 1.15.4 in the web console #16134
- `aws-sdk` transitive dependency to reduce the size of the compiled Ranger extension #16011
- `log4j v1` dependencies #15984
- `CVE-2023-52428(7.5)`, `CVE-2023-50291(7.5)`, `CVE-2023-50298(7.5)`, `CVE-2023-50386(8.8)`, and `CVE-2023-50292(7.5)` #16147