69238: server: create api to query persisted stats by date range r=xinhaoz a=xinhaoz

This commit creates a new API endpoint, /_status/combinedstmts, to fetch combined in-memory and persisted statements and transactions from crdb_internal.statement_statistics and crdb_internal.transaction_statistics. The request supports optional start and end parameters, which represent the unix time at which the data was aggregated. The parameters start, end, and combined have also been added to the StatementsRequest message. Setting combined to true forwards the request to the new combined API, using the start and end parameters provided. (A small client sketch follows these PR notes.)

Release justification: Category 2: low-risk updates to new functionality

Release note (api change): New endpoint /_status/combinedstmts to retrieve persisted and in-memory statements from crdb_internal.statement_statistics and crdb_internal.transaction_statistics by aggregated_ts range. The request supports optional query string parameters start and end, which are the date range in unix time. The response returned is currently the response expected from /_status/statements. /_status/statements has also been updated to support the parameters combined, start, and end. If combined is true, the statements endpoint will use /_status/combinedstmts with the optional parameters start and end.

69395: opt: support locality optimized search for scans with more than 1 row r=yuzefovich a=rytaft

This commit updates the logic for planning locality optimized search to allow the optimization to be planned if fewer than 100,000 keys are selected. The optimization is not yet supported for scans with a hard limit.

Informs #64862

Release justification: Low risk, high benefit change to existing functionality.

Release note (performance improvement): Locality optimized search is now supported for scans that are guaranteed to return 100,000 keys or fewer. This optimization allows the execution engine to avoid visiting remote regions if all requested keys are found in the local region, thus reducing the latency of the query.

69469: ts: include histogram quantiles in tsdump r=dhartunian a=tbg

`cockroach debug tsdump` previously silently did not return metrics backed by histograms. This is for technical reasons related to the bookkeeping of metric names, and it is rectified here by requiring some extra tagging of metrics that are histograms so that they can be picked up by tsdump. It's not pretty, but pragmatic: it works, and it'll be clear to anyone adding a histogram in the future how to proceed, even if they may wonder why things work in such a roundabout manner (and if they're curious about that, the relevant issues are linked in comments as well). I also renamed AllMetricsNames to AllInternalTimeseriesMetricsNames to make clear what is being returned.

Demo:

```
killall -9 cockroach; rm -rf cockroach-data; ./cockroach start-single-node --insecure --background && \
  ./cockroach workload run kv --init \
    'postgres://[email protected]:26257?sslmode=disable' --duration=300s && \
  ./cockroach debug tsdump --format=raw --insecure > tsdump.gob && \
  killall -9 cockroach && rm -rf cockroach-data && \
  COCKROACH_DEBUG_TS_IMPORT_FILE=tsdump.gob ./cockroach start-single-node --insecure
```

![image](https://user-images.githubusercontent.com/5076964/131134624-b5471621-d23b-4ce7-9026-e8aeb3613231.png)

Release justification: low-risk observability fix

Release note (ops change): The ./cockroach debug tsdump command now downloads histogram timeseries that it previously silently omitted.
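As an illustration of the bookkeeping problem described in the tsdump PR above, the following minimal Go sketch shows how a histogram-backed metric fans out into several per-quantile internal timeseries, so a registry that only tracks base metric names would silently drop them. The suffix list, helper name, and `isHistogram` tagging here are assumptions made for the sketch; they are not the actual CockroachDB metric or ts package API.

```go
// Illustrative sketch (not the crdb metric API) of why histogram-backed
// metrics need explicit tagging: each histogram fans out into several
// internal timeseries, one per quantile, so a plain list of base metric
// names would omit them from tsdump.
package main

import "fmt"

// quantileSuffixes is an assumed per-quantile suffix set for the sketch;
// the real set lives in the internal timeseries catalog.
var quantileSuffixes = []string{"-p50", "-p75", "-p90", "-p99", "-max"}

// allInternalTimeseriesMetricNames expands histogram metrics into their
// per-quantile series names and passes other metrics through unchanged.
func allInternalTimeseriesMetricNames(names []string, isHistogram map[string]bool) []string {
	var out []string
	for _, name := range names {
		if !isHistogram[name] {
			out = append(out, name)
			continue
		}
		for _, s := range quantileSuffixes {
			out = append(out, name+s)
		}
	}
	return out
}

func main() {
	names := []string{"sql.conns", "sql.exec.latency"}
	isHistogram := map[string]bool{"sql.exec.latency": true}
	fmt.Println(allInternalTimeseriesMetricNames(names, isHistogram))
}
```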
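And as a rough usage illustration of the new endpoint from PR 69238, the Go sketch below issues a GET against /_status/combinedstmts with start and end given as unix seconds. The host, port, and insecure setup are placeholder assumptions, and the response is printed as raw JSON rather than decoded into the actual StatementsResponse proto.

```go
// Minimal sketch of querying the new combined statements endpoint.
// Assumes an insecure local node serving HTTP on the default port 8080.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Ask for the last 24 hours, expressed as unix seconds, matching the
	// optional `start` and `end` query string parameters.
	end := time.Now().Unix()
	start := end - 24*60*60

	q := url.Values{}
	q.Set("start", fmt.Sprint(start))
	q.Set("end", fmt.Sprint(end))

	u := "http://localhost:8080/_status/combinedstmts?" + q.Encode()

	resp, err := http.Get(u)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print the raw JSON body; the real response has the same shape as
	// the /_status/statements response.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```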
Co-authored-by: Xin Hao Zhang <[email protected]>
Co-authored-by: Rebecca Taft <[email protected]>
Co-authored-by: Tobias Grieger <[email protected]>