Deduplicate RepositoryData on a Best Effort Basis #67947
Conversation
Enhance the transport request deduplicator to allow for more general deduplication. Make use of that to deduplicate RepositoryData under concurrent request load (which so far has been the only situation where RepositoryData has created unmanageable memory pressure).
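As a rough illustration of the pattern (a hedged sketch only, not the actual Elasticsearch `TransportRequestDeduplicator`; the class name and the `Listener` interface are invented stand-ins for the real `ActionListener`-based API), concurrent callers that pass an equal key share a single in-flight computation, and all of their listeners are completed from that one result:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;
import java.util.function.Consumer;

// Hypothetical sketch of a generic result deduplicator: callers that pass an
// equal key while a computation for that key is in flight do not trigger a new
// computation; their listeners are queued and completed from the single result.
public final class ResultDeduplicatorSketch<T, R> {

    // Stand-in for Elasticsearch's ActionListener (assumption for this sketch).
    public interface Listener<R> {
        void onResponse(R response);
        void onFailure(Exception e);
    }

    private final ConcurrentHashMap<T, CompositeListener> inFlight = new ConcurrentHashMap<>();

    public void executeOnce(T key, Listener<R> listener, BiConsumer<T, Listener<R>> callback) {
        final CompositeListener fresh = new CompositeListener(listener);
        final CompositeListener existing = inFlight.putIfAbsent(key, fresh);
        if (existing == null) {
            // First caller for this key: run the callback once; completing it fans
            // out the result to every listener that queued up in the meantime.
            callback.accept(key, new Listener<R>() {
                @Override
                public void onResponse(R response) {
                    inFlight.remove(key).complete(l -> l.onResponse(response));
                }

                @Override
                public void onFailure(Exception e) {
                    inFlight.remove(key).complete(l -> l.onFailure(e));
                }
            });
        } else {
            // Deduplicated: just wait for the in-flight computation.
            existing.add(listener);
        }
    }

    private final class CompositeListener {
        private final List<Listener<R>> listeners = new ArrayList<>();
        private Consumer<Listener<R>> outcome;

        CompositeListener(Listener<R> first) {
            listeners.add(first);
        }

        synchronized void add(Listener<R> listener) {
            if (outcome != null) {
                outcome.accept(listener); // raced with completion, deliver the result directly
            } else {
                listeners.add(listener);
            }
        }

        synchronized void complete(Consumer<Listener<R>> result) {
            outcome = result;
            for (Listener<R> l : listeners) {
                result.accept(l);
            }
        }
    }
}
```

The property that matters for the RepositoryData case is that only the first caller pays the cost of loading and deserializing, while every later concurrent caller just queues a listener.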
Pinging @elastic/es-distributed (Team:Distributed)
} catch (Exception e) {
    listener.onFailure(e);
}
repoDataDeduplicator.executeOnce(metadata, listener, (metadata, l) -> l.onResponse(cached.repositoryData()));
This is a little less significant in savings relative to the other spot, but I think it's worth it:
- Deserializing the cached RepositoryData is still expensive and it's nice to save the work.
- For status APIs this gives a nice natural rate limit by only fanning out after fetching RepositoryData in case of massive concurrency of requests (e.g. the snapshotter fetching all snapshots in a 1k-snapshots repo).
- For other operations like create, delete, and clone we want those to run serially anyway (which we generally do via the master service), so it's fine if we just run all the listeners in series here to begin with.
> For status APIs this gives a nice natural rate limit by only fanning out after fetching RepositoryData in case of massive concurrency of requests (e.g. the snapshotter fetching all snapshots in a 1k-snapshots repo).

I fail to see how this change could act as a rate limit for that scenario; it's a good improvement, but the requests would accumulate anyway, right?
> but the requests would accumulate anyway, right?

Yea, I guess "rate-limiter" was a poor choice of wording. We "rate limit" the rate at which we dispatch the actions after loading the repo data to a single thread at a time, but requests still pile up. In practice that probably means things effectively run much quicker than before, but with requests queuing and waiting for the first one to finish instead of executing in parallel it's more of a "thread-limiter", I suppose :)
That makes sense, thanks for the clarification!
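To make the queuing behaviour discussed in this thread concrete, here is a hypothetical usage of the `ResultDeduplicatorSketch` shown earlier (illustrative names only): a hundred concurrent callers for the same key trigger the load callback once, and the remaining 99 listeners simply wait for that single in-flight load to complete.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

public class DeduplicationDemo {
    public static void main(String[] args) {
        ResultDeduplicatorSketch<String, String> dedup = new ResultDeduplicatorSketch<>();
        AtomicInteger loads = new AtomicInteger();
        AtomicReference<ResultDeduplicatorSketch.Listener<String>> pending = new AtomicReference<>();

        for (int i = 0; i < 100; i++) {
            dedup.executeOnce(
                "repo-data",
                new ResultDeduplicatorSketch.Listener<String>() {
                    @Override
                    public void onResponse(String response) {
                        // every one of the 100 listeners is completed with the single result
                    }

                    @Override
                    public void onFailure(Exception e) {
                    }
                },
                (key, l) -> {
                    loads.incrementAndGet();
                    pending.set(l); // simulate a load that is still in flight
                });
        }

        System.out.println(loads.get());            // 1: only the first caller triggered the load
        pending.get().onResponse("repository data"); // fans out to all 100 queued listeners
    }
}
```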
if (bestEffortConsistency) {
    threadPool.generic().execute(ActionRunnable.wrap(listener, this::doGetRepositoryData));
} else {
    repoDataDeduplicator.executeOnce(metadata, listener, (metadata, l) ->
This would completely resolve the broken situations we observed where there were 50+ concurrent GENERIC threads fetching RepositoryData and deserializing it over and over, eventually running the system OOM.
@@ -110,6 +110,8 @@ default Repository create(RepositoryMetadata metadata, Function<String, Reposito
 * Returns a {@link RepositoryData} to describe the data in the repository, including the snapshots
 * and the indices across all snapshots found in the repository. Throws a {@link RepositoryException}
 * if there was an error in reading the data.
 * @param listener listener that may be resolved on different kinds of threads including transport and cluster state applier threads
I audited all usages of this method and I couldn't find any remaining spot where this is a problem (at least in master; might need some adjustments in 7.x).
Nice change, I've left some questions and small suggestions. 👍
 */
-public final class TransportRequestDeduplicator<T> {
+public final class AbstractResultDeduplicator<T, R> {
nit: Maybe we should name it just ResultDeduplicator, as we don't expect to extend this class?
Also, now that we're using it outside of the transport package, maybe it makes sense to move it to another package?
++ to both, let's rename it and move it to the .action package :)
Thanks Francisco, all addressed I think :)
LGTM, thanks Armin!
Thanks Francisco!
There is a small chance here that #67947 would cause the callback for the repository data to run on a transport or cluster state applier thread and do a lot of IO to fetch `SnapshotInfo`. Fixed in #68023 (backported in #68092) by always forking to the generic pool for the callback. Added a test that triggers lots of concurrent deserialization of the cached repository data on the transport thread, which reproduces this bug relatively reliably (more than half the runs) while still being reasonably fast (under 5s).
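As a rough sketch of the shape of that follow-up fix (assuming the variables from the diff above and `ActionRunnable.supply`; the exact change in #68023 may differ), the deduplicated callback forks to the GENERIC pool instead of completing listeners on the calling thread:

```java
// Sketch only: fork the potentially expensive completion to the generic pool so it
// never runs on a transport or cluster state applier thread.
repoDataDeduplicator.executeOnce(metadata, listener, (metadata, l) ->
    threadPool.generic().execute(ActionRunnable.supply(l, cached::repositoryData)));
```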
Enhance the transport request deduplicator to allow for more general deduplication.
Make use of that to deduplicate RepositoryData under concurrent request load
(which so far has been the only situation where RepositoryData has created unmanageable
memory pressure).
relates #66042 (improves loading snapshot info for many snapshots in parallel, as done by the Cloud snapshotter for example)
relates #55153 (pushes back at least a little by creating a bottleneck on the repository data loading step, and saves significant memory by reducing the number of RepositoryData instances on heap during concurrent snapshot get requests)