[receiver/postgresql] Remove with/without resource attributes feature gates #22479

Merged (1 commit) on May 24, 2023
20 changes: 20 additions & 0 deletions .chloggen/postgresql-rm-gates.yaml
@@ -0,0 +1,20 @@
# Use this changelog template to create an entry for release notes.
# If your change doesn't affect end users, such as a test fix or a tooling change,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: breaking

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: postgresqlreceiver

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Remove resource attribute feature gates

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [22479]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:
35 changes: 3 additions & 32 deletions receiver/postgresqlreceiver/README.md
@@ -26,16 +26,19 @@ The monitoring user must be granted `SELECT` on `pg_stat_database`.
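For example, a typical setup creates a dedicated monitoring user and grants it read access to that view, e.g. `GRANT SELECT ON pg_stat_database TO otelu;` (the user name is illustrative).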
## Configuration

The following settings are required to create a database connection:

- `username`
- `password`

The following settings are optional:

- `endpoint` (default = `localhost:5432`): The endpoint of the postgresql server. Whether using TCP or Unix sockets, this value should be `host:port`. If `transport` is set to `unix`, the endpoint will internally be translated from `host:port` to `/host.s.PGSQL.port`
- `transport` (default = `tcp`): The transport protocol being used to connect to postgresql. Available options are `tcp` and `unix`.

- `databases` (default = `[]`): The list of databases for which the receiver will attempt to collect statistics. If an empty list is provided, the receiver will attempt to collect statistics for all non-template databases.

The following settings are also optional and nested under `tls` to help configure client transport security:

- `insecure` (default = `false`): Whether to disable client transport security for the postgresql connection.
- `insecure_skip_verify` (default = `true`): Whether to skip verifying the server name and certificate when client transport security is enabled.
- `cert_file` (default = `$HOME/.postgresql/postgresql.crt`): A certificate used for client authentication, if necessary.
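As a rough reference, a configuration using a subset of these settings might look like the sketch below, assuming the receiver is declared under the usual `postgresql` key; the endpoint, credentials, and database name are illustrative placeholders, not defaults.

```yaml
receivers:
  postgresql:
    endpoint: localhost:5432   # host:port of the postgresql server
    transport: tcp             # tcp or unix
    username: otelu            # monitoring user granted SELECT on pg_stat_database
    password: otelp
    databases:
      - otel                   # omit or leave empty to collect all non-template databases
    tls:
      insecure: true           # plaintext connection; set to false to use TLS
```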
@@ -69,35 +72,3 @@ The full list of settings exposed for this receiver is documented [here](./conf
## Metrics

Details about the metrics produced by this receiver can be found in [metadata.yaml](./metadata.yaml)

[beta]: https://github.com/open-telemetry/opentelemetry-collector#beta
[contrib]: https://github.com/open-telemetry/opentelemetry-collector-releases/tree/main/distributions/otelcol-contrib

### Feature gate configurations

#### Transition from metrics without "resource_attributes"

All metrics are transitioning from using the metric attributes `table` and `database` to using the resource attributes `postgresql.table` and `postgresql.database`, respectively. This effort is motivated by the resource specification found [in the metrics data model](https://github.com/open-telemetry/opentelemetry-specification/blob/141a3ef0bf1eba0b6d260335bbe0ce7af9387cfc/specification/metrics/data-model.md#resource-attributes-1).

Eventually the move will be finalized, but there will be a transitional period during which metrics are emitted with resource attributes behind a feature gate.
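As a schematic sketch of what the move changes for a single data point (the metric name, attribute values, and layout below are illustrative, not actual exporter output):

```yaml
before:                         # database and table identified by metric attributes
  metric: postgresql.rows
  attributes:
    database: otel
    table: table1
    state: live
after:                          # database and table identified by resource attributes
  resource_attributes:
    postgresql.database: otel
    postgresql.table: table1
  metric: postgresql.rows
  attributes:
    state: live
```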

##### Transition Schedule

1. v0.58.0, August 2022:

- The version of the metrics receiver with resource attributes will be available via feature gates.
- The old metrics with `table` and `database` metric attributes are deprecated with a warning.
- `receiver.postgresql.emitMetricsWithResourceAttributes` is *disabled* by default.
- `receiver.postgresql.emitMetricsWithoutResourceAttributes` is *enabled* by default.

2. v0.60.0, September 2022:

- The new collection method with resource attributes is enabled by default. The old metrics with the `table` and `database` metric attributes are disabled by default.
- `receiver.postgresql.emitMetricsWithResourceAttributes` is *enabled* by default.
- `receiver.postgresql.emitMetricsWithoutResourceAttributes` is *disabled* by default.

3. v0.62.0, October 2022:

- The feature gates are removed.
- Metrics with resource attributes are always emitted.
- Metrics with the `database` and `table` metric attributes are no longer available.
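During the transition window, the defaults above could be overridden on the collector command line with the `--feature-gates` flag (for example, passing `--feature-gates=-receiver.postgresql.emitMetricsWithResourceAttributes` to disable a gate); with this change the gates are removed and no override is needed or accepted.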
2 changes: 1 addition & 1 deletion receiver/postgresqlreceiver/go.mod
@@ -13,7 +13,6 @@ require (
go.opentelemetry.io/collector/component v0.78.2
go.opentelemetry.io/collector/confmap v0.78.2
go.opentelemetry.io/collector/consumer v0.78.2
go.opentelemetry.io/collector/featuregate v1.0.0-rcv0012
go.opentelemetry.io/collector/pdata v1.0.0-rcv0012
go.opentelemetry.io/collector/receiver v0.78.2
go.uber.org/multierr v1.11.0
@@ -60,6 +59,7 @@ require (
github.com/stretchr/objx v0.5.0 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/collector/exporter v0.78.2 // indirect
go.opentelemetry.io/collector/featuregate v1.0.0-rcv0012 // indirect
go.opentelemetry.io/otel v1.15.1 // indirect
go.opentelemetry.io/otel/metric v0.38.1 // indirect
go.opentelemetry.io/otel/trace v1.15.1 // indirect
29 changes: 0 additions & 29 deletions receiver/postgresqlreceiver/integration_test.go
@@ -18,7 +18,6 @@ import (
"github.com/testcontainers/testcontainers-go/wait"
"go.opentelemetry.io/collector/component/componenttest"
"go.opentelemetry.io/collector/consumer/consumertest"
"go.opentelemetry.io/collector/featuregate"
"go.opentelemetry.io/collector/receiver/receivertest"

"github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/golden"
@@ -82,34 +81,6 @@ func TestPostgreSQLIntegration(t *testing.T) {
},
expectedFile: filepath.Join("testdata", "integration", "expected_all_db.yaml"),
},
{
name: "without_resource_attributes",
cfg: func(hostname string) *Config {
require.NoError(t, featuregate.GlobalRegistry().Set(
emitMetricsWithResourceAttributesFeatureGate.ID(), false,
))
require.NoError(t, featuregate.GlobalRegistry().Set(
emitMetricsWithoutResourceAttributesFeatureGate.ID(), true,
))
f := NewFactory()
cfg := f.CreateDefaultConfig().(*Config)
cfg.Endpoint = net.JoinHostPort(hostname, "15432")
cfg.Databases = []string{}
cfg.Username = "otelu"
cfg.Password = "otelp"
cfg.Insecure = true
return cfg
},
cleanup: func() {
require.NoError(t, featuregate.GlobalRegistry().Set(
emitMetricsWithResourceAttributesFeatureGate.ID(), true,
))
require.NoError(t, featuregate.GlobalRegistry().Set(
emitMetricsWithoutResourceAttributesFeatureGate.ID(), false,
))
},
expectedFile: filepath.Join("testdata", "integration", "expected_all_without_resource_attributes.yaml"),
},
}

container := getContainer(t, testcontainers.ContainerRequest{
161 changes: 48 additions & 113 deletions receiver/postgresqlreceiver/scraper.go
@@ -10,7 +10,6 @@ import (
"sync"
"time"

"go.opentelemetry.io/collector/featuregate"
"go.opentelemetry.io/collector/pdata/pcommon"
"go.opentelemetry.io/collector/pdata/pmetric"
"go.opentelemetry.io/collector/receiver"
@@ -20,34 +19,11 @@ import (
"github.com/open-telemetry/opentelemetry-collector-contrib/receiver/postgresqlreceiver/internal/metadata"
)

var (
emitMetricsWithoutResourceAttributesFeatureGate = featuregate.GlobalRegistry().MustRegister(
"receiver.postgresql.emitMetricsWithoutResourceAttributes",
featuregate.StageAlpha,
featuregate.WithRegisterDescription("Postgresql metrics are transitioning from being reported with identifying metric attributes "+
"to being identified via resource attributes in order to fit the OpenTelemetry specification. This feature "+
"gate controls emitting the old metrics without resource attributes. For more details, see: "+
"https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/postgresqlreceiver/README.md#feature-gate-configurations"),
featuregate.WithRegisterReferenceURL("https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/12960"),
)
emitMetricsWithResourceAttributesFeatureGate = featuregate.GlobalRegistry().MustRegister(
"receiver.postgresql.emitMetricsWithResourceAttributes",
featuregate.StageBeta,
featuregate.WithRegisterDescription("Postgresql metrics are transitioning from being reported with identifying metric attributes "+
"to being identified via resource attributes in order to fit the OpenTelemetry specification. This feature "+
"gate controls emitting the new metrics with resource attributes. For more details, see: "+
"https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/postgresqlreceiver/README.md#feature-gate-configurations"),
featuregate.WithRegisterReferenceURL("https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/12960"),
)
)

type postgreSQLScraper struct {
logger *zap.Logger
config *Config
clientFactory postgreSQLClientFactory
mb *metadata.MetricsBuilder
emitMetricsWithoutResourceAttributes bool
emitMetricsWithResourceAttributes bool
logger *zap.Logger
config *Config
clientFactory postgreSQLClientFactory
mb *metadata.MetricsBuilder
}

type postgreSQLClientFactory interface {
@@ -72,12 +48,10 @@ func newPostgreSQLScraper(
clientFactory postgreSQLClientFactory,
) *postgreSQLScraper {
return &postgreSQLScraper{
logger: settings.Logger,
config: config,
clientFactory: clientFactory,
mb: metadata.NewMetricsBuilder(config.MetricsBuilderConfig, settings),
emitMetricsWithResourceAttributes: emitMetricsWithResourceAttributesFeatureGate.IsEnabled(),
emitMetricsWithoutResourceAttributes: emitMetricsWithoutResourceAttributesFeatureGate.IsEnabled(),
logger: settings.Logger,
config: config,
clientFactory: clientFactory,
mb: metadata.NewMetricsBuilder(config.MetricsBuilderConfig, settings),
}
}

@@ -128,19 +102,14 @@ func (p *postgreSQLScraper) scrape(ctx context.Context) (pmetric.Metrics, error)
numTables := p.collectTables(ctx, now, dbClient, database, &errs)

p.recordDatabase(now, database, r, numTables)

if p.emitMetricsWithResourceAttributes {
p.collectIndexes(ctx, now, dbClient, database, &errs)
}
p.collectIndexes(ctx, now, dbClient, database, &errs)
}

if p.emitMetricsWithResourceAttributes {
p.mb.RecordPostgresqlDatabaseCountDataPoint(now, int64(len(databases)))
p.collectBGWriterStats(ctx, now, listClient, &errs)
p.collectWalAge(ctx, now, listClient, &errs)
p.collectReplicationStats(ctx, now, listClient, &errs)
p.collectMaxConnections(ctx, now, listClient, &errs)
}
p.mb.RecordPostgresqlDatabaseCountDataPoint(now, int64(len(databases)))
p.collectBGWriterStats(ctx, now, listClient, &errs)
p.collectWalAge(ctx, now, listClient, &errs)
p.collectReplicationStats(ctx, now, listClient, &errs)
p.collectMaxConnections(ctx, now, listClient, &errs)

return p.mb.Emit(), errs.Combine()
}
@@ -164,31 +133,18 @@ func (p *postgreSQLScraper) retrieveDBMetrics(

func (p *postgreSQLScraper) recordDatabase(now pcommon.Timestamp, db string, r *dbRetrieval, numTables int64) {
dbName := databaseName(db)
if p.emitMetricsWithResourceAttributes {
p.mb.RecordPostgresqlTableCountDataPoint(now, numTables)
if activeConnections, ok := r.activityMap[dbName]; ok {
p.mb.RecordPostgresqlBackendsDataPointWithoutDatabase(now, activeConnections)
}
if size, ok := r.dbSizeMap[dbName]; ok {
p.mb.RecordPostgresqlDbSizeDataPointWithoutDatabase(now, size)
}
if stats, ok := r.dbStats[dbName]; ok {
p.mb.RecordPostgresqlCommitsDataPointWithoutDatabase(now, stats.transactionCommitted)
p.mb.RecordPostgresqlRollbacksDataPointWithoutDatabase(now, stats.transactionRollback)
}
p.mb.EmitForResource(metadata.WithPostgresqlDatabaseName(db))
} else {
if activeConnections, ok := r.activityMap[dbName]; ok {
p.mb.RecordPostgresqlBackendsDataPoint(now, activeConnections, db)
}
if size, ok := r.dbSizeMap[dbName]; ok {
p.mb.RecordPostgresqlDbSizeDataPoint(now, size, db)
}
if stats, ok := r.dbStats[dbName]; ok {
p.mb.RecordPostgresqlCommitsDataPoint(now, stats.transactionCommitted, db)
p.mb.RecordPostgresqlRollbacksDataPoint(now, stats.transactionRollback, db)
}
p.mb.RecordPostgresqlTableCountDataPoint(now, numTables)
if activeConnections, ok := r.activityMap[dbName]; ok {
p.mb.RecordPostgresqlBackendsDataPointWithoutDatabase(now, activeConnections)
}
if size, ok := r.dbSizeMap[dbName]; ok {
p.mb.RecordPostgresqlDbSizeDataPointWithoutDatabase(now, size)
}
if stats, ok := r.dbStats[dbName]; ok {
p.mb.RecordPostgresqlCommitsDataPointWithoutDatabase(now, stats.transactionCommitted)
p.mb.RecordPostgresqlRollbacksDataPointWithoutDatabase(now, stats.transactionRollback)
}
p.mb.EmitForResource(metadata.WithPostgresqlDatabaseName(db))
}

func (p *postgreSQLScraper) collectTables(ctx context.Context, now pcommon.Timestamp, dbClient client, db string, errs *scrapererror.ScrapeErrors) (numTables int64) {
@@ -203,51 +159,30 @@ func (p *postgreSQLScraper) collectTables(ctx context.Context, now pcommon.Times
}

for tableKey, tm := range tableMetrics {
if p.emitMetricsWithResourceAttributes {
p.mb.RecordPostgresqlRowsDataPointWithoutDatabaseAndTable(now, tm.dead, metadata.AttributeStateDead)
p.mb.RecordPostgresqlRowsDataPointWithoutDatabaseAndTable(now, tm.live, metadata.AttributeStateLive)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.inserts, metadata.AttributeOperationIns)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.del, metadata.AttributeOperationDel)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.upd, metadata.AttributeOperationUpd)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.hotUpd, metadata.AttributeOperationHotUpd)
p.mb.RecordPostgresqlTableSizeDataPoint(now, tm.size)
p.mb.RecordPostgresqlTableVacuumCountDataPoint(now, tm.vacuumCount)

br, ok := blockReads[tableKey]
if ok {
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.heapRead, metadata.AttributeSourceHeapRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.heapHit, metadata.AttributeSourceHeapHit)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.idxRead, metadata.AttributeSourceIdxRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.idxHit, metadata.AttributeSourceIdxHit)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.toastHit, metadata.AttributeSourceToastHit)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.toastRead, metadata.AttributeSourceToastRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.tidxRead, metadata.AttributeSourceTidxRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.tidxHit, metadata.AttributeSourceTidxHit)
}
p.mb.EmitForResource(
metadata.WithPostgresqlDatabaseName(db),
metadata.WithPostgresqlTableName(tm.table),
)
} else {
p.mb.RecordPostgresqlRowsDataPoint(now, tm.dead, db, tm.table, metadata.AttributeStateDead)
p.mb.RecordPostgresqlRowsDataPoint(now, tm.live, db, tm.table, metadata.AttributeStateLive)
p.mb.RecordPostgresqlOperationsDataPoint(now, tm.inserts, db, tm.table, metadata.AttributeOperationIns)
p.mb.RecordPostgresqlOperationsDataPoint(now, tm.del, db, tm.table, metadata.AttributeOperationDel)
p.mb.RecordPostgresqlOperationsDataPoint(now, tm.upd, db, tm.table, metadata.AttributeOperationUpd)
p.mb.RecordPostgresqlOperationsDataPoint(now, tm.hotUpd, db, tm.table, metadata.AttributeOperationHotUpd)

br, ok := blockReads[tableKey]
if ok {
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.heapRead, db, br.table, metadata.AttributeSourceHeapRead)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.heapHit, db, br.table, metadata.AttributeSourceHeapHit)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.idxRead, db, br.table, metadata.AttributeSourceIdxRead)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.idxHit, db, br.table, metadata.AttributeSourceIdxHit)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.toastHit, db, br.table, metadata.AttributeSourceToastHit)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.toastRead, db, br.table, metadata.AttributeSourceToastRead)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.tidxRead, db, br.table, metadata.AttributeSourceTidxRead)
p.mb.RecordPostgresqlBlocksReadDataPoint(now, br.tidxHit, db, br.table, metadata.AttributeSourceTidxHit)
}
p.mb.RecordPostgresqlRowsDataPointWithoutDatabaseAndTable(now, tm.dead, metadata.AttributeStateDead)
p.mb.RecordPostgresqlRowsDataPointWithoutDatabaseAndTable(now, tm.live, metadata.AttributeStateLive)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.inserts, metadata.AttributeOperationIns)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.del, metadata.AttributeOperationDel)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.upd, metadata.AttributeOperationUpd)
p.mb.RecordPostgresqlOperationsDataPointWithoutDatabaseAndTable(now, tm.hotUpd, metadata.AttributeOperationHotUpd)
p.mb.RecordPostgresqlTableSizeDataPoint(now, tm.size)
p.mb.RecordPostgresqlTableVacuumCountDataPoint(now, tm.vacuumCount)

br, ok := blockReads[tableKey]
if ok {
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.heapRead, metadata.AttributeSourceHeapRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.heapHit, metadata.AttributeSourceHeapHit)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.idxRead, metadata.AttributeSourceIdxRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.idxHit, metadata.AttributeSourceIdxHit)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.toastHit, metadata.AttributeSourceToastHit)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.toastRead, metadata.AttributeSourceToastRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.tidxRead, metadata.AttributeSourceTidxRead)
p.mb.RecordPostgresqlBlocksReadDataPointWithoutDatabaseAndTable(now, br.tidxHit, metadata.AttributeSourceTidxHit)
}
p.mb.EmitForResource(
metadata.WithPostgresqlDatabaseName(db),
metadata.WithPostgresqlTableName(tm.table),
)
}
return int64(len(tableMetrics))
}