
sql: infrastructure to get visibility into the sql memory budget #35097

Closed
RaduBerinde opened this issue Feb 20, 2019 · 9 comments
Labels
A-sql-memmon SQL memory monitoring C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) T-sql-queries SQL Queries Team

Comments

@RaduBerinde
Member

RaduBerinde commented Feb 20, 2019

The sql memory budgeting mechanism doesn't give us much visibility into what is going on in a running cluster. There is nothing we can monitor in the UI (at least not by default) to see if we're running close to the limit, and when we start erroring out, there's not much information about what was going on.

We should design infrastructure that lets us figure out what the budget is being used for at a given time. This will have some overhead, so it should be something that is only enabled temporarily, for short periods of time. It could be triggered by a debug endpoint, or it could be turned on automatically whenever we reach, e.g., 80% of the budget. Perhaps it could even be triggered on already-running operations when an out-of-budget error occurs.

This is something that should be on our radar for the next release, so that we can avoid spending engineering time debugging out-of-budget errors after they happen. I would like to help out with the design/planning and perhaps even the implementation.

CC @jordanlewis @knz

Jira issue: CRDB-4608

@RaduBerinde RaduBerinde self-assigned this Feb 20, 2019
@knz
Contributor

knz commented Feb 21, 2019

I'm sorry I don't have ideas on how to do this with our current codebase and a standard Go runtime.

My opinion on this has always been that we need a custom Go runtime to track consumption per "request" (e.g. SQL connection/query). I don't know how to make things work otherwise.

@petermattis
Collaborator

Can we use a custom profile here?
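
(For reference, Go's `runtime/pprof` does support user-defined profiles, which are served by the standard `/debug/pprof` endpoints. A minimal sketch of how a budget profile could be backed by it -- the profile name and handle type here are illustrative, not CockroachDB code:)

```
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // serves registered profiles under /debug/pprof/
	"runtime/pprof"
)

// budgetProfile is a hypothetical custom profile; each Add records the call
// stack of one live budget reservation, and Remove drops it on release.
var budgetProfile = pprof.NewProfile("sql.budgetalloc")

type reservation struct{ bytes int64 }

func reserve(bytes int64) *reservation {
	r := &reservation{bytes: bytes}
	budgetProfile.Add(r, 1) // skip=1 omits reserve itself from the stack
	return r
}

func (r *reservation) release() { budgetProfile.Remove(r) }

func main() {
	r := reserve(1 << 20)
	defer r.release()
	// Inspect with: go tool pprof http://localhost:8888/debug/pprof/sql.budgetalloc
	log.Fatal(http.ListenAndServe("localhost:8888", nil))
}
```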

@jordanlewis
Member

jordanlewis commented Feb 21, 2019 via email

@tbg
Member

tbg commented Feb 21, 2019

I started writing some infra for this on a whim a while back, found here.
This came out of convos with @jordanlewis and his PR #30930.

The hope is that we could do stuff like

go tool pprof http://localhost:8888/debug/pprof/sql.budgetalloc
go tool pprof http://localhost:8888/debug/pprof/sql.tablereader.bytesread
go tool pprof http://localhost:8888/debug/pprof/kv.num_keys_iterated
...

and get something like a heap profile, perhaps even with labels (per anonymized statement, or at least statement tag). We could also profile individual operations (EXPLAIN (PROFILE) SELECT ...), though it'll be a bit challenging to get that working cross-node; it definitely verges into the territory of tracing and would need to be suitably integrated with it. Either of these options also intersects with metrics collection (i.e. any metric should also have an associated profile that can be turned on, showing the call stacks that contribute to the metric).

For high CPU usage though, I think just extending #30930 so that it contains the anonymized statement and can be turned on on demand and for short periods of time (I was thinking SET CLUSTER SETTING server.pprof.label_until = NOW() + INTERVAL '30 seconds'; or integrated into the debug page ui profile link) would do a good job in practice.
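
(The labeling mechanism here is Go's `runtime/pprof` label API, which attaches key/value pairs to the goroutine so they show up against CPU samples. A minimal sketch, with a hypothetical label key that is not necessarily what #30930 uses:)

```
package main

import (
	"context"
	"runtime/pprof"
)

// withStmtLabel runs exec with a pprof label attached, so CPU profile
// samples taken while it runs are tagged with the anonymized statement.
func withStmtLabel(ctx context.Context, anonymizedStmt string, exec func(context.Context)) {
	pprof.Do(ctx, pprof.Labels("stmt", anonymizedStmt), exec)
}

func main() {
	withStmtLabel(context.Background(), "SELECT * FROM t WHERE x = _", func(ctx context.Context) {
		// ... execute the statement ...
	})
}
```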

There's no such counterpart for the heap profile, though that would be fantastic. The way heap profiles are collected is extremely different though (done during GC), and doesn't lend itself well to labeling at allocation time. Or maybe it's possible, not sure. But it doesn't seem to exist today.

@awoods187 awoods187 added A-sql-memmon SQL memory monitoring C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) labels Mar 6, 2019
@jordanlewis
Member

We added the sessions page, which shows the number of allocated bytes per active session, and a UI graph that shows the total number of allocated bytes.

craig bot pushed a commit that referenced this issue Mar 1, 2023
95141: storage: Add support for TargetBytes for EndTxn r=nvanbenschoten a=KaiSun314

Fixes: #77228

Intent resolution batches are sequenced on raft, and each batch can
consist of 100-200 intents. If an intent key (or, in some cases, a value)
is large, it is possible that resolving all intents in the batch would
result in a raft command size exceeding the max raft command size
`kv.raft.command.max_size`.

To address this, we add support for TargetBytes in resolve intent and
resolve intent range commands, allowing us to stop resolving intents in
the batch as soon as we exceed the TargetBytes max bytes limit.

This PR adds byte pagination for synchronous intent resolution (i.e.
EndTxn / End Transaction).

Release note: None
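
(The pagination itself boils down to tracking a running byte count and returning a resume position once the limit is hit. A rough sketch with hypothetical types -- not the actual storage code, which operates on roachpb intents and MVCC state:)

```
package main

// intent is a stand-in for a lock to resolve.
type intent struct{ key, value []byte }

// resolveUpToTargetBytes resolves intents in order, stopping as soon as the
// byte budget is exceeded. It returns the index to resume from
// (len(intents) if everything was resolved).
func resolveUpToTargetBytes(intents []intent, targetBytes int64) int {
	var used int64
	for i, in := range intents {
		// ... resolve intents[i] ...
		used += int64(len(in.key) + len(in.value))
		if used >= targetBytes {
			// Limit reached: the caller re-issues the request from the
			// next intent onward.
			return i + 1
		}
	}
	return len(intents)
}

func main() {}
```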

97511: status: set codec from context in table stats requests r=knz a=dhartunian

Replaced usages of `TODOSQLCodec` with the codec from `sqlServer.execCfg`. This enables the DB and Table stats endpoints to work from tenants.

Resolves: #82879

Relates to: #90261, #90267, #90268, #90264, #89429

Epic: CRDB-12100

Release note: None

97657: sql,mon: expose the memory monitors as virtual table r=yuzefovich a=yuzefovich

This commit adjusts our memory monitors to be able to traverse the whole
monitor tree starting at the root monitor. In particular, this commit
introduces a doubly-linked list of "siblings" and stores the reference
to the head in the parent. Whenever a monitor is `Start`ed, it is
added as the new head of its parent's children list; whenever a monitor
is `Stop`ped, it is removed from that list. The overhead of this
additional tracking should be negligible since the parent's lock only
needs to be acquired twice throughout the lifetime of a monitor (thus,
assuming relatively long-lived sessions, this wouldn't affect the root
monitor) and the increase in allocations is minor.
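
(The linkage described above amounts to standard intrusive doubly-linked-list maintenance under the parent's lock. A simplified sketch with hypothetical field names, not the actual `mon` package code:)

```
package main

import "sync"

// monitor is a stripped-down stand-in for a memory monitor.
type monitor struct {
	mu struct {
		sync.Mutex
		head *monitor // head of this monitor's children list
	}
	parent      *monitor
	prevSibling *monitor
	nextSibling *monitor
}

// start links m in as the new head of parent's children list.
func (m *monitor) start(parent *monitor) {
	m.parent = parent
	parent.mu.Lock()
	defer parent.mu.Unlock()
	m.nextSibling = parent.mu.head
	if m.nextSibling != nil {
		m.nextSibling.prevSibling = m
	}
	parent.mu.head = m
}

// stop unlinks m from its parent's children list; together with start, this
// is the only extra locking a monitor pays over its lifetime.
func (m *monitor) stop() {
	p := m.parent
	p.mu.Lock()
	defer p.mu.Unlock()
	if m.prevSibling != nil {
		m.prevSibling.nextSibling = m.nextSibling
	} else {
		p.mu.head = m.nextSibling
	}
	if m.nextSibling != nil {
		m.nextSibling.prevSibling = m.prevSibling
	}
}

func main() {}
```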

This required clarification on how locks on a parent and a child can be
held at the same time. In particular, since the main code path is
acquiring locks "upwards" (meaning when growing the child's budget we
might need to grow the parent's budget, and "growing" locks the
corresponding monitor), whenever we want to traverse the tree from the
root down, we have to unlock the parent's monitor before recursing into
the children. As a result, the traversal might give us an inconsistent
view (where a recently stopped child can contribute to the usage of the
parent while we don't recurse into that child). This seems acceptable.

This ability to traverse the whole monitor tree is now exposed as a new
virtual table `crdb_internal.node_memory_monitors`, which includes a row
for each monitor active at the time of table generation (subject to the
possible inconsistency mentioned above). The table includes the names of
the monitors, which can be suggestive of the activity on the cluster;
thus, access to this table is gated behind the "view activity" permissions.
Using a virtual table to expose the memory monitor information
results in flattening of the tree; however, one of the fields is
a "level" (or "generation") relative to the root, and the ordering
of rows is very specific, so we can still format the output to see the
hierarchy. We also assign IDs to the monitors (their pointer
addresses). Exposing this information as a virtual table allows us to use
SQL to analyze it.

Here is one example of visualizing it:
```
root@localhost:26257/defaultdb> SELECT repeat('    ', level) || name || ' ' || crdb_internal.humanize_bytes(used) FROM crdb_internal.node_memory_monitors;
                             ?column?
-------------------------------------------------------------------
  root 0 B
      internal-planner.‹root›.‹resume-job-101› 0 B
      internal-planner.‹node›.‹resume-job-100› 0 B
      internal-planner.‹node›.‹resume-job-842810460057567233› 0 B
      sql 900 KiB
          session root 20 KiB
              txn 10 KiB
                  flow e595eb80 10 KiB
              session 0 B
              txn-fingerprint-id-cache 0 B
          internal SQL executor 0 B
          internal SQL executor 0 B
          internal sql executor 0 B
          conn 105 KiB
          internal SQL executor 70 KiB
          internal SQL executor 60 KiB
          SQLStats 540 KiB
          SQLStats 0 B
      distsql 0 B
      server-cache-mon 0 B
      bulk-mon 0 B
          backup-mon 0 B
          backfill-mon 0 B
      pre-conn 105 KiB
      closed-session-cache 190 KiB
      timeseries-results 0 B
      timeseries-workers 0 B
      kv-mem 20 KiB
          rangefeed-monitor 0 B
          rangefeed-system-monitor 0 B
(30 rows)
```

There are a couple of additional minor improvements:
- we now include the short FlowID in the flow's memory monitor name.
Combined with the distsql_flows virtual table, we'll be able to get the
stmt fingerprint for the remote flows running on a node.
- a new `crdb_internal.humanize_bytes` builtin function is introduced.

Note that the corresponding `cluster_memory_monitors` virtual table is
not introduced, out of caution. In particular, it would lead to RPCs
issued to all nodes in the cluster, and since each node can have on the
order of hundreds of thousands of monitors, the response to each RPC could
have non-trivial network cost. We can revisit this decision later if we
find that a cluster-level view of the memory monitors is desirable, but
for now a node-level view seems like a big improvement on its own.

Addresses: #35097.
Fixes: #90551.

Release note (sql change): A new internal virtual table
`crdb_internal.node_memory_monitors` is introduced. It exposes all of the
current reservations with the memory accounting system on a single node.
Access to the table requires VIEWACTIVITY or VIEWACTIVITYREDACTED
permissions.

97853: builtins: fix crdb_internal.hide_sql_constants array overload r=xinhaoz a=xinhaoz

Previously, a statement that failed to parse in one of the array elements passed to crdb_internal.hide_sql_constants would cause the whole call to error. This commit ensures that an empty string is returned for an unparseable statement instead.

Epic: none

Release note: None

Co-authored-by: Kai Sun <[email protected]>
Co-authored-by: David Hartunian <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Xin Hao Zhang <[email protected]>
@github-actions

We have marked this issue as stale because it has been inactive for
18 months. If this issue is still relevant, removing the stale label
or adding a comment will keep it active. Otherwise, we'll close it in
10 days to keep the issue queue tidy. Thank you for your contribution
to CockroachDB!

@yuzefovich
Member

#97657 added a virtual table that exposes the state of the memory monitor tree on a single node, so it seems like this has been fixed. @RaduBerinde did you envision something more / automatic?

@yuzefovich yuzefovich moved this from Triage to New Backlog in SQL Queries Oct 3, 2023
craig bot pushed a commit that referenced this issue Nov 22, 2023
114268: backup: split request spans to be range sized r=dt a=dt

Backup processors are assigned spans (produced by the SQL planning function PartitionSpans) which they must back up by reading the contents of each span via some number of paginated ExportRequests and then writing those contents to the assigned destination.

Typically each export request sent by a backup processor is expected to be served by approximately one range: the processor sends a request for the whole span it is trying to export, distsender routes it to the first range it overlaps, that range reads until it hits the pagination limit, then distsender returns the result and the processor repeats, starting the span from the resume key.

Since each request does a range's worth of work, the backup processor can assume that, if things are normal and healthy in the cluster, it should return its result within a short amount of time -- often a second or less, or perhaps a few seconds if it had to wait in queues. As such, the backup processor imposes a 5 minute timeout on these requests: a single request not returning within this duration indicates something is not normal and healthy in the cluster, and the backup cannot expect to make progress until that is resolved.

However, this logic does not hold if a single request, subject to this timeout, ends up doing substantially more work. That can happen if the request has a span larger than a single range _and_ the ranges in that span are empty and/or don't contain data matching the predicate of the request. In such cases, the request is sent to one range, which processes it, but since it returns zero results the pagination limit is not hit, so the request continues on to another range, and another, and so on, until it either reaches the end of the requested span or finally finds results that hit the pagination limit. If neither of these happens, the request can end up hitting the timeout -- a limit that should never be reached by a single range's worth of work -- because we are in fact doing many ranges' worth of work.

This change pre-splits the spans that we need to export into sub-spans, each the size of one range, and sends requests to those sub-spans. It is OK if the actual ranges underneath these requests end up splitting or merging, as the splitting has simply ensured that each request corresponds to "a range's worth of work", which it should, since it was a single range at splitting time.
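
(Mechanically, the pre-splitting is just cutting each requested span at the range boundaries that fall inside it. A simplified sketch with hypothetical types, not the actual backup planning code:)

```
package main

// span is a stand-in for a key span assigned to a backup processor.
type span struct{ key, endKey string }

// splitAtBoundaries cuts sp at every range boundary that falls strictly
// inside it, so each resulting sub-span covers at most one range (as of
// splitting time). boundaries must be sorted in ascending order.
func splitAtBoundaries(sp span, boundaries []string) []span {
	out := []span{}
	cur := sp.key
	for _, b := range boundaries {
		if b > cur && b < sp.endKey {
			out = append(out, span{key: cur, endKey: b})
			cur = b
		}
	}
	return append(out, span{key: cur, endKey: sp.endKey})
}

func main() {}
```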

By doing this, we should be able to assume that all requests are expected to complete, if the cluster is healthy, within the 5min timeout.

We do this rather than setting `ReturnOnRangeBoundary`, both because `ReturnOnRangeBoundary` is not yet in use and production-tested, and because breaking work up into range-sized chunks allows distributing the work more evenly between the worker goroutines in the processor.

Release note: none.
Epic: none.

114275: server/profiler: add periodic memory monitoring dump r=yuzefovich a=yuzefovich

This commit adds a periodic "memory monitoring" dump as another profiler.
It uses exactly the same heuristic as the heap profiler to decide when to
trigger, and the dump contains a human-readable view of the memory
monitoring system. Monitors that don't have any usage are omitted from the
output. In conjunction with the heap profiles, it should give us better
insight into why OOMs or significant memory usage occurred.
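
(In the shape described above, the trigger is a high-water-mark check like the heap profiler's; roughly, with hypothetical names rather than the actual profiler code:)

```
package main

// monitorDumper decides when to write a memory monitoring dump, using the
// same shape of heuristic as the heap profiler: dump whenever memory usage
// sets a new high-water mark.
type monitorDumper struct {
	highWater uint64
}

func (d *monitorDumper) maybeDump(curBytes uint64, dump func()) {
	if curBytes > d.highWater {
		d.highWater = curBytes
		dump() // write the human-readable monitor tree alongside the heap profile
	}
}

func main() {}
```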

The output is written into new files in the `heap_profiler` directory and
is of the form:
```
root 0 B (8.0 GiB / 0 B)
    sql 265 MiB
        session root 264 MiB (21 KiB / 84 KiB)
            txn 264 MiB
                flow 211b7d29 264 MiB
                    joinreader-mem 220 KiB
                    distinct-7-limited-17 90 KiB
                    fetcher-mem 1.9 MiB
                    joinreader-mem 3.3 MiB
                    hash-aggregator-4-unlimited-16 218 MiB
                    hash-joiner-3-unlimited-5 70 KiB
                    hash-joiner-3-limited-4 26 MiB
                    index-join-1-unlimited-2 510 KiB
                    cfetcher-2-unlimited-1 2.0 MiB
                    cfetcher-0-unlimited-0 12 MiB
            session 90 KiB
        internal sql executor 90 KiB
        conn 105 KiB
        internal SQL executor 40 KiB
        internal SQL executor 30 KiB
        SQLStats 220 KiB
    pre-conn 105 KiB
    closed-session-cache 30 KiB
        rangefeed-system-monitor 0 B (128 MiB / 0 B)
```
(this was collected on a single node cluster running one TPCH query).

This feature is enabled by default but can be disabled via a public
cluster setting.

Addresses: #35097.

Epic: None

Release note (general change): CockroachDB will now periodically dump
the state of its internal memory accounting system into the `heap_profiler`
directory, at the same time as the heap profiles are taken. This
behavior can be disabled by setting the `diagnostics.memory_monitoring_dumps.enabled`
cluster setting to `false`.

114655: roachprod: better error reporting for SyncedCluster Wait r=renatolabs a=herkolategan

Previously, `Wait` swallowed an error and did not report the actual cause. This
change captures the original error as well.

Epic: None
Release Note: None

114841: cli: add json and ndjson to help text r=rafiss a=rafiss

informs #114762
Release note: None

Co-authored-by: David Taylor <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Herko Lategan <[email protected]>
Co-authored-by: Rafi Shamim <[email protected]>
@yuzefovich
Member

I'm going to close this issue as a result of two changes: the `crdb_internal.node_memory_monitors` virtual table (#97657) and the periodic memory monitoring dumps (#114275).

@github-project-automation github-project-automation bot moved this from New Backlog to Done in SQL Queries Mar 2, 2024
@jordanlewis
Member

Woohoo, always hoped to see this one get done!
