[7.14] Add known issue docs for #79371 #79485

Merged 1 commit on Oct 19, 2021
15 changes: 15 additions & 0 deletions docs/reference/release-notes/7.12.asciidoc
@@ -7,6 +7,8 @@ Also see <<breaking-changes-7.12,Breaking changes in 7.12>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

* Snapshot and restore: If an index is deleted while the cluster is
concurrently taking more than one snapshot, then there is a risk that one of the
snapshots may never complete and that some shard data may be lost from the
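
As a precaution on affected versions, you can check whether any snapshots are still
running before deleting an index. A minimal sketch using the snapshot status API
(this is generic use of the API, not a documented mitigation for this specific issue):

[source,console]
----
# Lists snapshots that are currently running; an empty "snapshots" array
# means no snapshot is in progress at that moment.
GET _snapshot/_status
----
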
@@ -139,6 +141,19 @@ https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22137[CVE-2021-22137]
[discrete]
=== Known issues

// tag::frozen-tier-79371-known-issue[]
* Frozen tier: (Windows only) The frozen data tier relies on multiple caching mechanisms
to speed up access to searchable snapshot files. One of these caches uses
https://en.wikipedia.org/wiki/Sparse_file[sparse files] to avoid creating large
files on disk when it is not strictly required. A bug prevented files from being
created with the right options to enable sparse support on Windows, leading {es} to
create potentially large files that can end up consuming all the disk space.
+
This issue is fixed in {es} versions 7.15.2 and later. There is no known workaround
for earlier versions. Filesystems that enable sparse files by default are not affected.
For more details, see {es-issue}79371[#79371].
// end::frozen-tier-79371-known-issue[]
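
Since there is no workaround on affected versions, Windows deployments using the frozen
tier may want to watch disk usage on the affected nodes. A minimal sketch using the cat
allocation API (the column selection is illustrative, not specific to this issue):

[source,console]
----
# Per-node disk usage; unexpected growth on frozen-tier nodes can be a
# symptom of cache files being created without sparse support.
GET _cat/allocation?v=true&h=node,disk.used,disk.avail,disk.percent
----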

* If autoscaling is enabled for machine learning, the administrator of the cluster
should increase the cluster setting `xpack.ml.max_open_jobs`. This allows autoscaling
to run reliably as it relies on assigning jobs only via memory. Having
20 changes: 15 additions & 5 deletions docs/reference/release-notes/7.13.asciidoc
@@ -20,6 +20,8 @@ https://cve.mitre.org/cgi-bin/cvename.cgi?name=2021-22145[CVE-2021-22145]
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

include::7.13.asciidoc[tag=snapshot-repo-corruption-75598-known-issue]

[[bug-7.13.4]]
@@ -41,18 +43,20 @@ Also see <<breaking-changes-7.13,Breaking changes in 7.13>>.
[[security-updates-7.13.3]]
=== Security updates

* An uncontrolled recursion vulnerability that could lead to a
denial of service attack was identified in the {es} Grok parser.
A user with the ability to submit arbitrary queries to {es} could create
a malicious Grok query that will crash the {es} node.
All versions of {es} prior to 7.13.3 are affected by this flaw.
You must upgrade to {es} version 7.13.3 to obtain the fix.
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22144[CVE-2021-22144]
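
To confirm that an upgrade has taken effect, the version a node is running can be
checked from the root endpoint. A minimal sketch (verify that `version.number`
reports 7.13.3 or later):

[source,console]
----
# The response includes version.number for the node that answers.
GET /
----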

[[known-issues-7.13.3]]
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

include::7.13.asciidoc[tag=snapshot-repo-corruption-75598-known-issue]

[[bug-7.13.3]]
@@ -106,6 +110,8 @@ Also see <<breaking-changes-7.13,Breaking changes in 7.13>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

include::7.13.asciidoc[tag=snapshot-repo-corruption-75598-known-issue]

[[bug-7.13.2]]
@@ -146,6 +152,8 @@ Also see <<breaking-changes-7.13,Breaking changes in 7.13>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

// tag::snapshot-repo-corruption-75598-known-issue[]
* Snapshot and restore: If a running snapshot is cancelled while a
previously-started snapshot is still ongoing and a later snapshot is enqueued
@@ -204,6 +212,8 @@ Also see <<breaking-changes-7.13,Breaking changes in 7.13>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

* If autoscaling is enabled for machine learning, the administrator of the
cluster should increase the cluster setting `xpack.ml.max_open_jobs` to the
maximum value of `512`. This allows autoscaling to run reliably as it relies on
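
A sketch of raising `xpack.ml.max_open_jobs` to `512` via the cluster settings API,
assuming the setting is dynamically updatable on your version (on versions where it is
a static node setting, it must instead be set in `elasticsearch.yml` on each ML node):

[source,console]
----
# Raises the ML job limit cluster-wide as a persistent setting.
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.max_open_jobs": 512
  }
}
----
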
18 changes: 12 additions & 6 deletions docs/reference/release-notes/7.14.asciidoc
@@ -7,16 +7,18 @@ Also see <<breaking-changes-7.14,Breaking changes in 7.14>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

// tag::ccs-agg-mem-known-issue[]
* Aggregations: In {es} 7.14.0–7.15.0, when a {ccs} ({ccs-init}) request is proxied, the memory for the aggregations on the
proxy node will not be freed. The trigger is {ccs} using aggregations where minimize
roundtrips is not effective (for example, when minimize roundtrips is explicitly disabled, or implicitly disabled
when using scroll, async and point-in-time searches).
+
This affects {kib} {ccs-init} aggregations because {kib}
uses async search by default. This issue can also happen in all modes of remote connections
configured for {ccs} (sniff and proxy). In sniff mode, we only connect to a subset of the
remote nodes (by default 3). So if the remote node we want to send a request to is not one of those 3,
we must send the request as a proxy request. The workaround is to periodically restart nodes that are under heap pressure.
+
We have fixed this issue in {es} 7.15.1 and later versions. For more details,
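
Because the only workaround on affected versions is to periodically restart nodes under
heap pressure, it can help to track heap usage on the nodes that proxy {ccs} requests.
A minimal sketch using the cat nodes API (the columns shown are standard cat nodes
columns; which nodes act as proxies depends on your remote cluster configuration):

[source,console]
----
# Per-node heap usage; nodes with persistently high heap.percent are
# candidates for the restart workaround described above.
GET _cat/nodes?v=true&h=name,heap.percent,heap.max
----
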
@@ -58,6 +60,8 @@ Also see <<breaking-changes-7.14,Breaking changes in 7.14>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

include::7.14.asciidoc[tag=ccs-agg-mem-known-issue]

[[enhancement-7.14.1]]
@@ -159,6 +163,8 @@ Also see <<breaking-changes-7.14,Breaking changes in 7.14>>.
[discrete]
=== Known issues

include::7.12.asciidoc[tag=frozen-tier-79371-known-issue]

include::7.14.asciidoc[tag=ccs-agg-mem-known-issue]

include::7.13.asciidoc[tag=snapshot-repo-corruption-75598-known-issue]