
Collect postgres statement samples & execution plans for deep database monitoring #8627

Merged
ofek merged 8 commits into master from djova/postgres-dbm-statement-samples on Mar 3, 2021

Conversation

Contributor

@djova djova commented Feb 15, 2021

What does this PR do?

Adds a new feature to "Deep Database Monitoring", enabling collection of statement samples and execution plans. Follow-up to #7852.

See #8629 for the corresponding MySQL PR which depends on this PR.

How it works

If enabled, a python thread is launched during a regular check run:

  • collects statement samples at the configured rate limit (default 1 collection per second)
  • maintains its own psycopg2 connection to avoid clashing transactions/state with the main thread connection
  • collects execution plans through a postgres function that the user must install into each database being monitored (if we wanted the agent to collect execution plans directly by running EXPLAIN then it would need full write permission to all tables)
  • shuts down if it detects that the main check has stopped running

During one "collection" we do the following (a rough sketch follows the list):

  1. read out all new statements from pg_stat_activity
  2. try to collect an execution plan for each statement
  3. submit events directly to the new database monitoring event intake
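For illustration, a single collection pass might look roughly like the sketch below. All function and column names here are assumptions based on the description above, not the check's actual code, and the caches are keyed by the raw statement for brevity (the real check keys them by normalized query signatures, as described under "Rate Limiting").

```python
import psycopg2
import psycopg2.extras

PG_STAT_ACTIVITY_QUERY = """
SELECT datname, usename, query
FROM pg_stat_activity
WHERE state = 'active' AND query IS NOT NULL
"""


def run_collection(conn, explain_function, explained_cache, seen_cache, submit_event):
    # 1. Read out currently active statements using the sampler's own
    #    dedicated psycopg2 connection (separate from the main check's).
    with conn.cursor(cursor_factory=psycopg2.extras.DictCursor) as cursor:
        cursor.execute(PG_STAT_ACTIVITY_QUERY)
        rows = cursor.fetchall()

    for row in rows:
        statement = row["query"]

        # 2. Try to collect an execution plan via the user-installed explain
        #    function (e.g. datadog.explain_statement), at most once per TTL
        #    window for a given statement.
        plan = None
        if statement not in explained_cache:
            explained_cache[statement] = True
            with conn.cursor() as cursor:
                cursor.execute("SELECT {}(%s)".format(explain_function), (statement,))
                result = cursor.fetchone()
                plan = result[0] if result else None

        # 3. Submit the sample directly to the database monitoring event
        #    intake, skipping (statement, plan) pairs ingested recently.
        key = (statement, str(plan))
        if key not in seen_cache:
            seen_cache[key] = True
            submit_event({
                "db": row["datname"],
                "user": row["usename"],
                "statement": statement,
                "plan": plan,
            })
```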

Rate Limiting

There are several different rate limits to keep load on the database to a minimum and to avoid reingesting duplicate events (a small sketch of the cache setup follows the list):

  • collections_per_second: limits how often collections are done
  • explained_statements_cache: a TTL limits how often we attempt to collect an execution plan for a given normalized query
  • seen_samples_cache: a TTL limits how often we ingest statement samples for the same normalized query and execution plan
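As a minimal sketch of the TTL-cache approach, here is how the per-hour settings from the configuration below could be turned into cache TTLs. This uses cachetools.TTLCache purely for illustration; the check's actual cache implementation may differ.

```python
from cachetools import TTLCache

# explained_statements_per_hour_per_query: 60 -> at most one EXPLAIN attempt
# per normalized query every 60 seconds (3600 / 60).
explained_statements_cache = TTLCache(maxsize=5000, ttl=3600 / 60)

# samples_per_hour_per_query: 15 -> at most one ingested sample per
# (normalized query, plan) pair every 240 seconds (3600 / 15).
seen_samples_cache = TTLCache(maxsize=10000, ttl=3600 / 15)


def should_explain(query_signature):
    # Returns True at most once per TTL window for a given signature.
    if query_signature in explained_statements_cache:
        return False
    explained_statements_cache[query_signature] = True
    return True
```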

Configuration

We're adding a new statement_samples postgres instance config section. Here is the full set of available configuration options, showing the default settings (the explain_function referenced here is covered in the sketch below):

```yaml
statement_samples:
   enabled: false
   collections_per_second: 1
   explain_function: 'datadog.explain_statement'
   explained_statements_cache_maxsize: 5000
   explained_statements_per_hour_per_query: 60
   seen_samples_cache_maxsize: 10000
   samples_per_hour_per_query: 15
```
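The explain_function above points at a function the user must create in each monitored database; the SQL itself isn't part of this diff. A plausible version, modeled on the behavior described (a SECURITY DEFINER wrapper around EXPLAIN so the agent doesn't need write access to user tables), executed here via psycopg2 with placeholder connection parameters:

```python
import psycopg2

# Assumptions: a `datadog` schema already exists and the function name matches
# the explain_function default ('datadog.explain_statement').
CREATE_EXPLAIN_FUNCTION = """
CREATE OR REPLACE FUNCTION datadog.explain_statement(
    l_query TEXT,
    OUT explain JSON
) RETURNS SETOF JSON AS
$$
BEGIN
    RETURN QUERY EXECUTE 'EXPLAIN (FORMAT JSON) ' || l_query;
END;
$$ LANGUAGE plpgsql
RETURNS NULL ON NULL INPUT
SECURITY DEFINER;
"""

with psycopg2.connect(dbname="mydb", user="postgres") as conn:  # placeholder credentials
    with conn.cursor() as cursor:
        cursor.execute(CREATE_EXPLAIN_FUNCTION)
```

Because the function runs with its definer's privileges, the agent's read-only user can call it to obtain JSON plans without being granted write permissions on the tables involved.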

Motivation

Collect statement samples & execution plans, enabling deeper insight into what's running on the database and how queries are being executed.

Review checklist (to be filled by reviewers)

  • Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
  • PR title must be written as a CHANGELOG entry (see why)
  • Files changes must correspond to the primary purpose of the PR as described in the title (small unrelated changes should have their own PR)
  • PR must have changelog/ and integration/ labels attached

@djova djova requested review from a team as code owners February 15, 2021 16:32
@djova djova changed the title Collect postgres statement samples & execution plans Collect postgres statement samples & execution plans for deep database monitoring Feb 15, 2021
@ghost ghost added the dependencies label Feb 15, 2021
@djova djova force-pushed the djova/postgres-dbm-statement-samples branch 2 times, most recently from 72d02c6 to 3dbe969 Compare February 15, 2021 18:31
@olivielpeau
Member

Just a quick note about detecting that the main check has stopped running: once #8463 is merged, the cancel method could be implemented by this check to signal to the collection python thread that it should stop.

@djova
Contributor Author

djova commented Feb 15, 2021

Just a quick note about detecting that the main check has stopped running: once #8463 is merged, the cancel method could be implemented by this check to signal to the collection python thread that it should stop.

Good to know! Will use that once it's ready.

Contributor

@florimondmanca florimondmanca left a comment


👋 Took an initial look. Generally the code looks good, happy to review specific items if need be.

Does this need any changes to metadata (README, metrics, etc)?

@ofek
Contributor

ofek commented Feb 16, 2021

Code looks good as always!

However, in principle, I'm extremely opposed to the contents of datadog_checks_base/datadog_checks/base/utils/db/statement_samples.py. What we usually do (and should do here) is implement any backend submission logic in the Agent and expose a high-level API to Python. See: https://github.com/DataDog/datadog-agent/tree/master/rtloader#examples

@djova djova requested a review from justiniso February 16, 2021 15:06
@djova djova force-pushed the djova/postgres-dbm-statement-samples branch 4 times, most recently from d128956 to 8a564a8 Compare March 1, 2021 20:08
@djova djova requested a review from a team as a code owner March 2, 2021 00:41
@ghost ghost added the documentation label Mar 2, 2021
@djova
Contributor Author

djova commented Mar 2, 2021

However, in principle, I'm extremely opposed to the contents of datadog_checks_base/datadog_checks/base/utils/db/statement_samples.py. What we usually do (and should do here) is implement any backend submission logic in the Agent and expose a high-level API to Python.

Agreed. This is coming in a follow-up change once we all agree on the design of this new high-level API.

@ofek @olivielpeau @florimondmanca I've addressed all of the comments. Could you please take another look?

Comment on lines +184 to +186
```python
self._log.warning(
    "Statement sampler database error: %s", e, exc_info=self._log.getEffectiveLevel() == logging.DEBUG
)
```
Contributor


Why not just log as debug?

Contributor Author


I think since database errors will be relatively infrequent, it'll be useful for the user to see warnings in the log by default so they have context as to why it's failing (e.g. a failed connection). They shouldn't need to enable debug logs to see something that obvious.

If debug is enabled, we'll also log the full stack trace, since when you're working on the code you may want to know exactly where the exception came from. But in the common case of something like a failed connection, we don't need to pollute the log with the full stack trace.
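For reference, the pattern being discussed, a warning by default with the full traceback only when debug logging is enabled, looks roughly like the following; the surrounding try/except and the placeholder function are illustrations, not the check's exact code.

```python
import logging

import psycopg2

log = logging.getLogger(__name__)


def collect_statement_samples():
    # Placeholder for the sampler's database work; simulate a connection failure.
    raise psycopg2.OperationalError("could not connect to server")


try:
    collect_statement_samples()
except psycopg2.DatabaseError as e:
    # Always surface a warning so users can see why sampling is failing
    # (e.g. a failed connection), but only attach the stack trace when the
    # logger's effective level is DEBUG.
    log.warning(
        "Statement sampler database error: %s", e,
        exc_info=log.getEffectiveLevel() == logging.DEBUG,
    )
```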

ofek previously approved these changes Mar 2, 2021
olivielpeau previously approved these changes Mar 2, 2021
Member

@olivielpeau olivielpeau left a comment


Approving since there's an RFC in progress to submit this data through the core agent instead.

Also, #8463 is merged now so implementing cancel would make sense (can be done in a follow up PR).

@djova
Contributor Author

djova commented Mar 2, 2021

Also, #8463 is merged now so implementing cancel would make sense (can be done in a follow up PR).

Thanks, will do in a follow-up!

@ofek ofek merged commit fe3a483 into master Mar 3, 2021
@ofek ofek deleted the djova/postgres-dbm-statement-samples branch March 3, 2021 15:39
github-actions bot pushed a commit that referenced this pull request Mar 3, 2021
…e monitoring (#8627)

* Collect postgres statement samples & execution plans

**What does this PR do?**

Adds a new feature to "Deep Database Monitoring", enabling collection of statement samples and execution plans. Follow-up to #7852.

**How does it work?**

If enabled, a python thread is launched during a regular check run:
* collects statement samples at the configured rate limit (default 1 collection per second)
* maintains its own `psycopg2` connection to avoid clashing transactions/state with the main thread connection
* shuts down if it detects that the main check has not run for two collection intervals
* collects execution plans through a postgres function that the user must install into each database being monitored (if we wanted the agent to collect execution plans directly by running `EXPLAIN` then it would need full write permission to all tables)

During one "collection" we do the following:
1. read out all new statements from `pg_stat_activity`
2. try to collect an execution plan for each statement
3. submit events directly to the new database monitoring event intake

**Rate limiting**

There are several different rate limits to keep load on the database to a minimum and to avoid reingesting duplicate events:
* `collections_per_second`: limits how often collections are done (each collection is a query to `pg_stat_activity`)
* `explained_statements_cache`: limits how often we attempt to collect an execution plan for a given normalized query
* `seen_samples_cache`: limits how often we ingest statement samples for the same normalized query and execution plan

**Configuration**

We're adding a new `statement_samples` postgres instance config section. Here is the full set of available configuration showing the default settings:
```yaml
statement_samples:
   enabled: false
   collections_per_second: 1
   explain_function: 'datadog.explain_statement'
   explained_statements_cache_maxsize: 5000
   explained_statements_per_hour_per_query: 60
   seen_samples_cache_maxsize: 10000
   samples_per_hour_per_query: 15
```

* remove accidental spaces

* comment update

* import json from base

* Format style

* add config spec

* Use repr

Co-authored-by: Florimond Manca <[email protected]>

* Update test_serialization.py

Co-authored-by: Florimond Manca <[email protected]>
Co-authored-by: Ofek Lev <[email protected]>
fe3a483
djova added a commit that referenced this pull request Mar 4, 2021
Fix broken build due to old datadog_checks_base minimum version.

Follow-up to #8627
djova added a commit that referenced this pull request Mar 4, 2021
Fix broken build due to old datadog_checks_base minimum version.

```
E   ImportError: cannot import name 'compute_exec_plan_signature' from 'datadog_checks.base.utils.db.sql' (/home/vsts/work/1/s/postgres/.tox/py38/lib/python3.8/site-packages/datadog_checks/base/utils/db/sql.py)
```

Follow-up to #8627
djova added a commit that referenced this pull request Mar 4, 2021
Fix broken build due to old datadog_checks_base minimum version.

```
E   ImportError: cannot import name 'compute_exec_plan_signature' from 'datadog_checks.base.utils.db.sql' (/home/vsts/work/1/s/postgres/.tox/py38/lib/python3.8/site-packages/datadog_checks/base/utils/db/sql.py)
```

Follow-up to #8627

Depends on #8756
djova added a commit that referenced this pull request Mar 4, 2021
Fix broken build due to old datadog_checks_base minimum version.

```
E   ImportError: cannot import name 'compute_exec_plan_signature' from 'datadog_checks.base.utils.db.sql' (/home/vsts/work/1/s/postgres/.tox/py38/lib/python3.8/site-packages/datadog_checks/base/utils/db/sql.py)
```

Follow-up to #8627

Depends on #8759
ofek pushed a commit that referenced this pull request Mar 4, 2021
Fix broken build due to old datadog_checks_base minimum version.

```
E   ImportError: cannot import name 'compute_exec_plan_signature' from 'datadog_checks.base.utils.db.sql' (/home/vsts/work/1/s/postgres/.tox/py38/lib/python3.8/site-packages/datadog_checks/base/utils/db/sql.py)
```

Follow-up to #8627

Depends on #8759
djova added a commit that referenced this pull request Mar 4, 2021
Use the cancel feature added in #8463 to ensure the statement sampler thread is stopped when the check is unscheduled.

Follow-up to #8627
djova added a commit that referenced this pull request Mar 5, 2021
Use the cancel feature added in #8463 to ensure the statement sampler thread is stopped when the check is unscheduled.

Follow-up to #8627
ofek pushed a commit that referenced this pull request Mar 5, 2021
Use the cancel feature added in #8463 to ensure the statement sampler thread is stopped when the check is unscheduled.

Follow-up to #8627
djova added a commit to DataDog/datadog-agent that referenced this pull request Apr 9, 2021
Add a new aggregator API through which checks can submit "event platform events" of various types.

All supported `eventTypes` are hardcoded in `EventPlatformForwarder`.

The `dbm-samples` and `dbm-metrics` events are expected to arrive fully serialized so their pipelines are simply "HTTP passthrough" pipelines which skip all of the other features of logs pipelines like processing rules and encoding.

Future event types will be able to add more detailed processing if they need it.

**Overall flow**

1. `aggregator.submit_event_platform_event(check_id, rawEvent, "{eventType}")` - python API. Here's how the postgres check would be updated to use it: DataDog/integrations-core#9045. (A usage sketch follows this list.)
2. `BufferedAggregator` forwards events to the `EventPlatformForwarder`. Events are **dropped** here if `EventPlatformForwarder` is backed up for any reason.
3. `EventPlatformForwarder` forwards events to the pipeline for the given `eventType`, **dropping** events for unknown `eventTypes`
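As a minimal sketch of step 1, here is roughly how a check might call the new API. Only the function name, argument order, and the "dbm-samples" event type come from this change; the payload fields and the fallback handling are assumptions for illustration.

```python
import json

try:
    # Builtin module provided by the agent's embedded interpreter (rtloader).
    import aggregator
except ImportError:
    aggregator = None  # e.g. when running outside the agent

# Illustrative payload only; real samples carry the full statement/plan event.
sample_event = {"ddsource": "postgres", "statement": "SELECT * FROM users WHERE id = %s"}

if aggregator is not None:
    aggregator.submit_event_platform_event(
        "postgres:1df52d84fb6f603c",   # check_id
        json.dumps(sample_event),      # rawEvent (already serialized)
        "dbm-samples",                 # eventType
    )
```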

**Internal Agent Stats**

*Prometheus*: `aggregator.flush - data_type:{eventType}, state:{ok|error}`

*ExpVar*: `EventPlatformEvents` & `EventPlatformEventsErrors`: counts by `eventType`

**User-Facing Agent Stats**

Statistics for each `eventType` will be tracked alongside other types of telemetry (`Service Checks`, `Series`, ...). Where appropriate, the raw `eventType` is translated to a human readable name (e.g. `dbm-samples` --> `Database Monitoring Query Samples`).

`agent status` output:
```
=========
Collector
=========

  Running Checks
  ==============

    postgres (5.4.0)
    ----------------
      Instance ID: postgres:1df52d84fb6f603c [OK]
      Metric Samples: Last Run: 366, Total: 7,527
      Database Monitoring Query Samples: Last Run: 11, Total: 176
      ...

=========
Aggregator
=========
  Checks Metric Sample: 29,818
  Database Monitoring Query Samples: 473
  ...
```

`agent check {check_name}` output:

```
=== Metrics ===
...
=== Database Monitoring Query Samples ===
...
```

`agent check {check_name} --json` output will use the raw event types instead of the human readable names:

```
"aggregator": {
  "metrics": [...],
  "dbm-samples": [...],
  ...
}
```

**Motivation**

The posting of statement sample payloads to the intake for the postgres & mysql checks is currently done directly from python (DataDog/integrations-core#8627, DataDog/integrations-core#8629). With this change we'll be able to move responsibility for posting payloads to the more robust agent go code with proper batching, buffering, retries, error handling, and tracking of statistics.
remeh added a commit to DataDog/datadog-agent that referenced this pull request Apr 16, 2021
* add new generic event platform aggregator API


* simplify

* remove debug log

* move json marshaling to check.go

* check enabled before lock

* refactor, add noop ep forwarder

* Update pkg/collector/check/stats.go

Co-authored-by: maxime mouial <[email protected]>

* remove purge during flush

* remove global

* Update rtloader/include/datadog_agent_rtloader.h

Co-authored-by: Rémy Mathieu <[email protected]>

* Update rtloader/common/builtins/aggregator.h

Co-authored-by: Rémy Mathieu <[email protected]>

* Update pkg/collector/check/stats.go

Co-authored-by: Rémy Mathieu <[email protected]>

* remove unnecessary

* rename lock

* refactor pipelines

* remove unnecessary nil check

* revert

* Update releasenotes/notes/event-platform-aggregator-api-33e92539f08ac5c2.yaml

Co-authored-by: Alexandre Yang <[email protected]>

* track processed

* move locking into ep forwarder

* move to top

* Update pkg/aggregator/aggregator.go

Co-authored-by: Alexandre Yang <[email protected]>

* remove read lock

* refactor error logging

* move to pkg/epforwarder

* update default dbm-metrics endpoint

* local var

Co-authored-by: maxime mouial <[email protected]>
Co-authored-by: Rémy Mathieu <[email protected]>
Co-authored-by: Alexandre Yang <[email protected]>
djova added a commit that referenced this pull request Apr 29, 2021
Update the collection of postgres statement metrics & samples to automatically collect data for all databases on a host.

This means the check now respects the `dbstrict` setting. If `false` (the default), it will collect statement metrics & samples from all databases on the host. If `true` it will only collect this data from the initial database configured in the check config.

For collection of execution plans this means that the statement sampler thread now maintains a collection pool with one connection per database.

Follow-up to #8627

Motivation:

* Simplify configuration for collection of statement metrics, samples & execution plans for users by enabling collection from all databases on a host with only a single configured "check instance." Previously users had to enumerate each database in a host separately.
* Ensure that collection of statement samples & plans respects the `dbstrict` setting
djova added a commit that referenced this pull request Apr 30, 2021
Update the collection of postgres statement metrics & samples to automatically collect data for all databases on a host.

Changes:

* collection of statement metrics & samples now respects the `dbstrict` setting
* in order to be able to collect telemetry from multiple databases the statement sampler thread now maintains a collection pool with one connection per database
* added a new `ignore_databases` configuration to enable users to define which databases on the host will be ignored. It defaults to the same exclusion list that was previously hardcoded in the "instance metrics" query. This setting is now shared across the whole check (instance metrics, statement metrics, statement samples)

Follow-up to #8627

Motivation:

* Simplify configuration for collection of statement metrics, samples & execution plans for users by enabling collection from all databases on a host with only a single configured "check instance." Previously users had to enumerate each database in a host separately.
* Ensure that collection of statement samples & plans respects the `dbstrict` setting
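As a rough illustration of the "connection pool with one connection per database" described in the commits above, a sketch along these lines is shown below; the class name, pruning policy, and connection arguments are assumptions, not the check's actual implementation.

```python
import psycopg2


class PerDatabaseConnectionPool:
    """Lazily opens and caches one psycopg2 connection per monitored database."""

    def __init__(self, host, user, password, ignore_databases=()):
        self._conn_args = {"host": host, "user": user, "password": password}
        self._ignore = set(ignore_databases)
        self._conns = {}

    def get(self, dbname):
        # Skip databases the user has excluded via ignore_databases.
        if dbname in self._ignore:
            return None
        conn = self._conns.get(dbname)
        if conn is None or conn.closed:
            conn = psycopg2.connect(dbname=dbname, **self._conn_args)
            self._conns[dbname] = conn
        return conn

    def close_all(self):
        for conn in self._conns.values():
            conn.close()
        self._conns.clear()
```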
djova added a commit that referenced this pull request May 7, 2021
Update the collection of postgres statement metrics & samples to automatically collect data for all databases on a host.

Changes:

* collection of statement metrics & samples now respects the `dbstrict` setting
* in order to be able to collect telemetry from multiple databases the statement sampler thread now maintains a collection pool with one connection per database
* added a new `ignore_databases` configuration to enable users to define which databases on the host will be ignored. It defaults to the same exclusion list that was previously hardcoded in the "instance metrics" query. This setting is now shared across the whole check (instance metrics, statement metrics, statement samples)

Follow-up to #8627

Motivation:

* Simplify configuration for collection of statement metrics, samples & execution plans for users by enabling collection from all databases on a host with only a single configured "check instance." Previously users had to enumerate each database in a host separately.
* Ensure that collection of statement samples & plans respects the `dbstrict` setting
djova added a commit that referenced this pull request May 10, 2021
* postgres statement metrics & samples: collect from all databases on host

Update the collection of postgres statement metrics & samples to automatically collect data for all databases on a host.

Changes:

* collection of statement metrics & samples now respects the `dbstrict` setting
* in order to be able to collect telemetry from multiple databases the statement sampler thread now maintains a collection pool with one connection per database
* added a new `ignore_databases` configuration to enable users to define which databases on the host will be ignored. It defaults to the same exclusion list that was previously hardcoded in the "instance metrics" query. This setting is now shared across the whole check (instance metrics, statement metrics, statement samples)

Follow-up to #8627

Motivation:

* Simplify configuration for collection of statement metrics, samples & execution plans for users by enabling collection from all databases on a host with only a single configured "check instance." Previously users had to enumerate each database in a host separately.
* Ensure that collection of statement samples & plans respects the `dbstrict` setting

* Update postgres/assets/configuration/spec.yaml

Co-authored-by: Ofek Lev <[email protected]>

* update conf

* remove commented out

* validate models

Co-authored-by: Ofek Lev <[email protected]>