Is your feature request related to a problem? Please describe.
The major problem of HCAD is scalability. HCAD regularly issues a query to fetch feature data for each entity. HCAD v1 limits the number of entity values returned per query to 1,000, both for efficiency (fetching all entity values is expensive and may not finish before the next scheduled query run starts) and for ease of load shedding (the top-1,000 limit, in turn, caps the memory used to host models, the disk access needed to read/write model checkpoints and anomaly results, the CPU used for entity metadata maintenance and model training/inference, and the garbage collection for deleted models and metadata). Users can raise the maximum-entity setting at the cost of linear growth in resource usage, which opens the door to cluster instability. Moreover, such a heuristic, best-effort limit caps the scalability of entity monitoring: even if a user scales out the cluster, the number of monitored entities does not increase.
Describe the solution you'd like
We increase the default maximum number of entities returned by the feature query from one thousand to one million. The large number of entities is handled via pagination.
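A minimal sketch of the pagination idea: each query returns one page of entity keys plus a cursor (in the style of a composite aggregation's `after_key`), and the caller loops until the pages are exhausted. The `run_query` callable and bucket shape below are assumptions for illustration, not HCAD's actual API.

```python
def iterate_entities(run_query, page_size=1000):
    """Yield every entity key, fetching one bounded page per query.

    `run_query(page_size, after)` is a hypothetical stand-in for the
    real search call; it returns a dict with a "buckets" list and an
    "after_key" cursor, mirroring a composite-aggregation response.
    """
    after_key = None
    while True:
        page = run_query(page_size=page_size, after=after_key)
        buckets = page["buckets"]
        if not buckets:
            return  # no more pages
        for bucket in buckets:
            yield bucket["key"]
        after_key = page.get("after_key")
        if after_key is None:
            return


# Usage example with an in-memory stand-in for the search backend.
def make_fake_backend(keys):
    def run_query(page_size, after):
        start = 0 if after is None else keys.index(after) + 1
        chunk = keys[start:start + page_size]
        return {
            "buckets": [{"key": k} for k in chunk],
            "after_key": chunk[-1] if chunk else None,
        }
    return run_query


backend = make_fake_backend([f"host-{i}" for i in range(2500)])
entities = list(iterate_entities(backend, page_size=1000))
```

Each individual query stays bounded at `page_size` results, so memory and query cost per page remain fixed regardless of the total entity count.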