This section summarizes the changes in each release.
Also see:
- {kibana-ref}/release-notes.html[{kib} release notes]
- {beats-ref}/release-notes.html[{beats} release notes]
Review important information about {fleet-server} and {agent} for the 8.12.2 release.
- {fleet}
  - Fix the popover about inactive agents not being dismissible. (#176929)
  - Fix assets being unintentionally moved to the default space during Fleet setup. (#176173)
  - Fix category labels on the integration overview page. (#176141)
  - Fix the ability to delete agent policies with inactive agents from the UI; the inactive agents need to be unenrolled first. (#175815)
- {fleet-server}
  - Fix a bug where agents were stuck in a non-upgradeable state after upgrade. This resolves the known issue that affected versions 8.12.0 and 8.12.1. #3264 #3263
  - Fix chunked file delivery so that files are delivered in order. #3283
- {agent}
Review important information about {fleet-server} and {agent} for the 8.12.1 release.
Breaking changes can prevent your application from optimal operation and performance. Before you upgrade, review the breaking changes, then mitigate the impact to your application.
Naming collisions with {fleet} custom ingest pipelines
Summary
If you were relying on an ingest pipeline of the form `${type}-${integration}@custom` introduced in version 8.12.0 (for example, `traces-apm@custom`, `logs-nginx@custom`, or `metrics-system@custom`), you need to update your pipeline's name to include an `.integration` suffix (for example, `logs-nginx.integration@custom`) to preserve your expected ingestion behavior.
Details
In version 8.12.0, {fleet} added new custom ingest pipeline names for adding custom processing to integration data streams. These pipeline names used patterns as follows:

- `global@custom`
- `${type}@custom` (for example, `traces@custom`)
- `${type}-${integration}@custom` (for example, `traces-apm@custom`)
- `${type}-${integration}-${dataset}@custom`, pre-existing (for example, `traces-apm.rum@custom`)
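For instance, a minimal sketch of attaching custom processing at the `${type}@custom` level might look like the following (the `set` processor is a hypothetical placeholder, not part of any integration):

PUT _ingest/pipeline/traces@custom
{
  "description": "Custom processing applied to all traces-* integration data streams",
  "processors": [
    {
      "set": {
        "field": "labels.custom_hook",
        "value": "traces@custom"
      }
    }
  ]
}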
However, it was discovered in #175254 that the `${type}-${integration}@custom` pattern can collide in cases where the integration name is also a dataset name. The clearest case of these collisions was in the APM integration's data streams, for example:

- `traces-apm`
- `traces-apm.rum`
- `traces-apm.sampled`

Because `traces-apm` is a legitimate data stream defined by the APM integration (see the relevant manifest.yml file), it incurred a collision of these custom pipeline names on version 8.12.0. For example:
// traces-apm
{
  "pipeline": {
    "name": "traces-apm@custom", // <---
    "ignore_missing_pipeline": true
  }
}
// traces-apm.rum
{
  "pipeline": {
    "name": "traces-apm@custom", // <---
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm.rum@custom",
    "ignore_missing_pipeline": true
  }
}
Prior to version 8.12.0, the `traces-apm@custom` custom pipeline name was already supported. So, if you had already defined and were using the supported `traces-apm@custom` pipeline, and then upgraded to 8.12.0, you would observe that documents ingested to `traces-apm.rum` and `traces-apm.sampled` would also be processed by your pre-existing `traces-apm@custom` ingest pipeline. This could cause breakages and unexpected pipeline processing errors.
To correct this in version 8.12.1, {fleet} now appends a suffix to the "integration level" custom ingest pipeline name. The new suffix prevents collisions between datasets and integration names moving forward. For example:
// traces-apm
{
  "pipeline": {
    "name": "traces-apm.integration@custom", // <--- Integration level pipeline
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm@custom", // <--- Dataset level pipeline
    "ignore_missing_pipeline": true
  }
}
// traces-apm.rum
{
  "pipeline": {
    "name": "traces-apm.integration@custom", // <--- Integration level pipeline
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm.rum@custom", // <--- Dataset level pipeline
    "ignore_missing_pipeline": true
  }
}
So, if you are relying on an integration-level custom ingest pipeline introduced in version 8.12.0, you need to update its name to include the new `.integration` suffix to preserve your existing ingestion behavior.
Refer to the Ingest pipelines documentation for details and examples.
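For example, if you had created `logs-nginx@custom` in 8.12.0 for integration-level processing, a minimal sketch of the migration using the ingest APIs might look like this (the `set` processor is a hypothetical placeholder; copy your own pipeline's processors instead):

# Inspect the existing custom pipeline
GET _ingest/pipeline/logs-nginx@custom

# Re-create it under the new integration-level name
PUT _ingest/pipeline/logs-nginx.integration@custom
{
  "processors": [
    {
      "set": {
        "field": "labels.custom_hook",
        "value": "logs-nginx.integration@custom"
      }
    }
  ]
}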
Agents upgraded to 8.12.0 are stuck in a non-upgradeable state
Details
An issue discovered in {fleet-server} prevents {agents} that have been upgraded to version 8.12.0 from being upgraded again, using the {fleet} UI, to version 8.12.1 or higher.
Impact
As a workaround, we recommend using the {es} API requests shown below to update any documents in which `upgrade_details` is either `null` or not defined. Note that these steps must be run as a superuser.
POST _security/role/fleet_superuser
{
  "indices": [
    {
      "names": [".fleet*",".kibana*"],
      "privileges": ["all"],
      "allow_restricted_indices": true
    }
  ]
}
POST _security/user/fleet_superuser
{
  "password": "password",
  "roles": ["superuser", "fleet_superuser"]
}
curl -sk -XPOST --user fleet_superuser:password -H 'content-type:application/json' \
  -H 'x-elastic-product-origin:fleet' \
  http://localhost:9200/.fleet-agents/_update_by_query \
  -d '{
    "script": {
      "source": "ctx._source.remove(\"upgrade_details\")",
      "lang": "painless"
    },
    "query": {
      "bool": {
        "must_not": {
          "exists": {
            "field": "upgrade_details"
          }
        }
      }
    }
  }'
DELETE _security/user/fleet_superuser
DELETE _security/role/fleet_superuser
After running these API requests, wait at least 10 minutes, and then the agents should be upgradeable again.
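To spot-check which agents {fleet} then reports as upgradeable, you can also query the {kib} {fleet} API; a sketch, with placeholder credentials and {kib} host:

curl -sk -u elastic:password \
  "https://localhost:5601/api/fleet/agents?showUpgradeable=true&perPage=20"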
- {fleet}
  - Fix the display of category labels on the Integration overview page. (#176141)
  - Fix conflicting dynamic template mappings for intermediate objects. (#175970)
  - Fix reserved keys for the Elasticsearch output YAML box. (#175901)
  - Prevent deletion of agent policies with inactive agents from the UI. (#175815)
  - Fix an incorrect count of agents in bulk actions. (#175318)
  - Fix custom integrations not displaying on the Installed integrations page. (#174804)
- {agent}
Review important information about {fleet-server} and {agent} for the 8.12.0 release.
- {agent}
  - Update Go version to 1.20.12. #3885
Breaking changes can prevent your application from optimal operation and performance. Before you upgrade, review the breaking changes, then mitigate the impact to your application.
Possible naming collisions with {fleet} custom ingest pipelines
Details
Starting in this release, {fleet} ingest pipelines can be configured to process events at various levels of customization. If you have a custom pipeline already defined that matches the name of a {fleet} custom ingest pipeline, it may be unexpectedly called for other data streams in other integrations. For details and investigation about the issue, refer to #175254. A fix is planned for delivery in the next 8.12 patch release.
Affected ingest pipelines
APM
- `traces-apm`
- `traces-apm.rum`
- `traces-apm.sampled`
For APM, if you had previously defined an ingest pipeline of the form `traces-apm@custom` to customize the ingestion of documents ingested to the `traces-apm` data stream, then by nature of the new `@custom` hooks introduced in issue #168019, the `traces-apm@custom` pipeline will be called as a pipeline processor in both the `traces-apm.rum` and `traces-apm.sampled` ingest pipelines. See the following for a comparison of the relevant `processors` blocks for each of these pipelines before and after upgrading to 8.12.0:
// traces-apm-8.x.x
{
  "pipeline": {
    "name": "traces-apm@custom",
    "ignore_missing_pipeline": true
  }
}
// traces-apm-8.12.0
{
  "pipeline": {
    "name": "global@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm@custom", // <--- Duplicate pipeline entry
    "ignore_missing_pipeline": true
  }
}
// traces-apm.rum-8.x.x
{
  "pipeline": {
    "name": "traces-apm.rum@custom",
    "ignore_missing_pipeline": true
  }
}
// traces-apm.rum-8.12.0
{
  "pipeline": {
    "name": "global@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm@custom", // <--- Collides with `traces-apm@custom` that may be preexisting
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm.rum@custom",
    "ignore_missing_pipeline": true
  }
}
// traces-apm.sampled-8.x.x
{
  "pipeline": {
    "name": "traces-apm.sampled@custom",
    "ignore_missing_pipeline": true
  }
}
// traces-apm.sampled-8.12.0
{
  "pipeline": {
    "name": "global@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces@custom",
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm@custom", // <--- Collides with `traces-apm@custom` that may be preexisting
    "ignore_missing_pipeline": true
  }
},
{
  "pipeline": {
    "name": "traces-apm.sampled@custom",
    "ignore_missing_pipeline": true
  }
}
The immediate workaround to avoid this unwanted behavior is to edit both the `traces-apm.rum` and `traces-apm.sampled` ingest pipelines to no longer include the `traces-apm@custom` pipeline processor.
Please note that this is a temporary workaround, and this change will be undone if the APM integration is upgraded or reinstalled.
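For example, assuming the managed pipelines carry the installed package version in their names as shown in the snippets above, you can retrieve each definition with the ingest APIs and then re-submit it with the processor that calls `traces-apm@custom` removed (only the GET requests are sketched here, because the PUT body must contain your pipeline's complete remaining processor list):

GET _ingest/pipeline/traces-apm.rum-8.12.0
GET _ingest/pipeline/traces-apm.sampled-8.12.0

The same edit can also be made in {kib} under Stack Management > Ingest Pipelines.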
{agent}
The `elastic_agent` integration is subject to the same type of breaking change as described for APM, above. The following ingest pipelines are impacted:
- `logs-elastic_agent`
- `logs-elastic_agent.apm_server`
- `logs-elastic_agent.auditbeat`
- `logs-elastic_agent.cloud_defend`
- `logs-elastic_agent.cloudbeat`
- `logs-elastic_agent.endpoint_security`
- `logs-elastic_agent.filebeat`
- `logs-elastic_agent.filebeat_input`
- `logs-elastic_agent.fleet_server`
- `logs-elastic_agent.heartbeat`
- `logs-elastic_agent.metricbeat`
- `logs-elastic_agent.osquerybeat`
- `logs-elastic_agent.packetbeat`
- `logs-elastic_agent.pf_elastic_collector`
- `logs-elastic_agent.pf_elastic_symbolizer`
- `logs-elastic_agent.pf_host_agent`
The behavior is similar to what's described for APM above: pipelines such as `logs-elastic_agent.filebeat` will include a `pipeline` processor that calls `logs-elastic_agent@custom`. If you have custom processing logic defined in a `logs-elastic_agent@custom` ingest pipeline, it will be called by all of the pipelines listed above.
The workaround is the same: remove the `logs-elastic_agent@custom` pipeline processor from all of the ingest pipelines listed above.
For new DEB and RPM installations, the `elastic-agent enroll` command incorrectly reports failure
Details
When you run the `elastic-agent enroll` command for an RPM or DEB {agent} package, a `Retarting agent daemon` message appears in the command output, followed by a `Restart attempt failed` error.
Impact
The error does not mean that the enrollment failed. The enrollment actually succeeded. You can ignore the `Restart attempt failed` error and continue by running the following commands, after which {agent} should successfully connect to {fleet}:
sudo systemctl enable elastic-agent
sudo systemctl start elastic-agent
Performance regression in AWS S3 inputs using SQS notification
Details
In 8.12 the default memory queue flush interval was raised from 1 second to 10 seconds. In many configurations this improves performance because it allows the output to batch more events per round trip, which improves efficiency. However, the SQS input has an extra bottleneck that interacts badly with the new value.
For more details see #37754.
Impact
If you are using the Elasticsearch output and your configuration uses a performance preset, switch it to `preset: latency`. If you use no preset or use `preset: custom`, then set `queue.mem.flush.timeout: 1s` in your output configuration.
If you are not using the Elasticsearch output, set `queue.mem.flush.timeout: 1s` in your output configuration.
To configure the output parameters for a {fleet}-managed agent, see [es-output-settings-yaml-config]. For a standalone agent, see [elastic-agent-output-configuration].
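As an illustration, here is a minimal sketch of the relevant portion of a standalone {agent} {es} output (host and credentials are placeholders; use either the preset or the explicit flush timeout described above):

outputs:
  default:
    type: elasticsearch
    hosts: ["https://elasticsearch.example.com:9200"]  # placeholder host
    api_key: "placeholder-api-key"                     # placeholder credential
    preset: latency
    # Alternatively, with no preset (or preset: custom), set the flush timeout directly:
    # queue.mem.flush.timeout: 1s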
{fleet} setup can fail when there are more than one thousand {agent} policies
Details
When you set up {fleet} with a very high volume of {agent} policies, one thousand or more, you may encounter an error similar to the following:
[ERROR][plugins.fleet] Unknown error happened while checking Uninstall Tokens validity: 'ResponseError: all shards failed: search_phase_execution_exception
Caused by:
too_many_nested_clauses: Query contains too many nested clauses; maxClauseCount is set to 5173
The exact number of {agent} policies required to cause the error depends in part on the size of the {es} cluster, but generally it can happen with volumes above approximately one thousand policies.
Impact
Currently there is no workaround for the issue, but a fix is planned to be included in the next 8.12 patch release.
Note that according to our policy scaling recommendations, the current recommended maximum number of {agent} policies supported by {fleet} is 500.
Agents upgraded to 8.12.0 are stuck in a non-upgradeable state
Details
An issue discovered in {fleet-server} prevents {agents} that have been upgraded to version 8.12.0 from being upgraded again, using the {fleet} UI, to version 8.12.1 or higher.
This issue is planned to be fixed in versions 8.12.2 and 8.13.0.
Impact
As a workaround, we recommend using the {es} API requests shown below to update any documents in which `upgrade_details` is either `null` or not defined. Note that these steps must be run as a superuser.
POST _security/role/fleet_superuser
{
  "indices": [
    {
      "names": [".fleet*",".kibana*"],
      "privileges": ["all"],
      "allow_restricted_indices": true
    }
  ]
}
POST _security/user/fleet_superuser
{
  "password": "password",
  "roles": ["superuser", "fleet_superuser"]
}
curl -sk -XPOST --user fleet_superuser:password -H 'content-type:application/json' \
  -H 'x-elastic-product-origin:fleet' \
  http://localhost:9200/.fleet-agents/_update_by_query \
  -d '{
    "script": {
      "source": "ctx._source.remove(\"upgrade_details\")",
      "lang": "painless"
    },
    "query": {
      "bool": {
        "must_not": {
          "exists": {
            "field": "upgrade_details"
          }
        }
      }
    }
  }'
DELETE _security/user/fleet_superuser
DELETE _security/role/fleet_superuser
After running these API requests, wait at least 10 minutes, and then the agents should be upgradeable again.
Remote {es} output does not support {elastic-defend} response actions
Details
Support for a remote {es} output was introduced in this release to enable {agents} to send integration or monitoring data to a remote {es} cluster. A bug has been found that causes {elastic-defend} response actions to stop working when a remote {es} output is configured for an agent.
Impact
This bug is currently being investigated and is expected to be resolved in an upcoming release.
The 8.12.0 release adds the following new and notable features.
- {fleet}
  - Add {agent} upgrade states and display each agent's progress through the upgrade process. See [view-upgrade-status] for details. (#167539)
  - Add support for preconfigured output secrets. (#172041)
  - Add support for pipelines to process events at various levels of customization. (#170270)
  - Add UI components to create and edit output secrets. (#169429)
  - Add support for remote ES output. (#169252)
  - Add the ability to specify secrets in outputs. (#169221)
  - Add an integrations configs tab to display input templates. (#168827)
  - Add a {kib} task to publish Agent metrics. (#168435)
- {agent}
  - Add a "preset" field to {es} output configurations that applies a set of configuration overrides based on a desired performance priority. #37259 #3879 #3797
  - Send the current agent upgrade details to {fleet-server} as part of the check-in API's request body. #3528 #3119
  - Add new fields for retryable upgrade steps to upgrade details metadata. #3845 #3818
  - Improve the upgrade watcher to no longer require root access. #3622
  - Enable hints autodiscovery for {agent} so that the host for a container in a Kubernetes pod no longer needs to be specified manually. #3575 #1453
  - Enable hints autodiscovery for {agent} so that a configuration can be defined through annotations for specific containers inside a pod. #3416 #3161
  - Support flattened `data_stream.*` fields in an {agent} input configuration (see the sketch after this list). #3465 #3191
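As an illustration of the flattened `data_stream.*` support, here is a minimal sketch of a standalone {agent} input (the IDs, dataset, and paths are placeholders chosen for this example):

inputs:
  - type: filestream
    id: example-log-input
    data_stream.namespace: default        # flattened form of a nested data_stream object
    streams:
      - id: example-log-stream
        data_stream.type: logs
        data_stream.dataset: example.access
        paths:
          - /var/log/example/access.log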
- {fleet}
- {agent}
  - Use shorter timeouts for diagnostic requests unless CPU diagnostics are requested. #3794 #3197
  - Add configuration parameters for the Kubernetes `leader_election` provider. #3625
  - Remove duplicated tags that may be specified during an agent enrollment. #3740 #858
  - Include upgrade details in an agent diagnostics bundle #3624 and in the `elastic-agent status` command output. #3615 #3119
  - Start and stop the monitoring server based on the monitoring configuration. #3584 #2734
  - Copy files concurrently to reduce the time taken to install and upgrade {agent} on systems running SSDs. #3212
  - Update `elastic-agent-libs` from version 0.7.2 to 0.7.3. #4000
- {fleet}
  - Allow agent upgrades if the patch version is higher than {kib}. (#173167)
  - Fix secrets with dot-separated variable names. (#173115)
  - Fix endpoint privilege management endpoints returning errors. (#171722)
  - Fix the expiration time for immediate bulk upgrades being too short. (#170879)
  - Fix incorrect overwrite of `logs-` and `metrics-` data views on every integration install. (#170188)
  - Create intermediate objects when using dynamic mappings. (#169981)
- {agent}
  - Preserve build metadata in upgrade version strings. #3824 #3813
  - Create a custom `MarshalYAML()` method to properly handle error fields in agent diagnostics. #3835 #2940
  - Fix the {agent} ignoring the `agent.download.proxy_url` setting during a policy update. #3803 #3560
  - Only try to download an upgrade locally if the `file://` prefix is specified for the source URI. #3682
  - Fix logging calls that have missing arguments. #3679
  - Update NodeJS version bundled with Heartbeat to v18.18.2. #3655
  - Use a third-party library to track progress during install and uninstall operations. #3623 #3607
  - Enable the {agent} container to run on Azure Container Instances. #3778 #3711
  - When a scheduled upgrade expires, set the upgrade state to failed. #3902 #3817
  - Update `elastic-agent-autodiscover` to version 0.6.6 and fix default metadata configuration. #3938