
panic: runtime error: invalid memory address or nil pointer dereference in serviceQuotaFetcher #205

Open
marina-armis opened this issue Aug 14, 2024 · 2 comments
Labels: bug (Something isn't working)

marina-armis commented:
We are running v0.10.0 in EKS on Amazon Linux 2 nodes, and the exporter crashes from time to time with the following panic in serviceQuotaFetcher:

{"time":"2024-08-13T19:16:17.606187975Z","level":"INFO","msg":"starting the HTTP server component"}
{"time":"2024-08-13T19:16:47.125168277Z","level":"INFO","msg":"get RDS metrics"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1c48718]

goroutine 72 [running]:
golang.org/x/exp/slog.(*Logger).Handler(...)
	/home/runner/go/pkg/mod/golang.org/x/[email protected]/slog/logger.go:89
golang.org/x/exp/slog.(*Logger).Enabled(0x0?, {0x0?, 0x0?}, 0x322f780?)
	/home/runner/go/pkg/mod/golang.org/x/[email protected]/slog/logger.go:136 +0x18
golang.org/x/exp/slog.(*Logger).log(0x0, {0x0, 0x0}, 0x8, {0x22c7b7c, 0x11}, {0xc000515d88, 0x4, 0x4})
	/home/runner/go/pkg/mod/golang.org/x/[email protected]/slog/logger.go:233 +0x6a
golang.org/x/exp/slog.(*Logger).Error(...)
	/home/runner/go/pkg/mod/golang.org/x/[email protected]/slog/logger.go:215
github.com/qonto/prometheus-rds-exporter/internal/app/servicequotas.(*serviceQuotaFetcher).getQuota(0xc000515f30, {0x22b1ac1, 0x3}, {0x22ba8e7, 0xa})
	/home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/servicequotas/servicequotas.go:89 +0x60a
github.com/qonto/prometheus-rds-exporter/internal/app/servicequotas.(*serviceQuotaFetcher).GetRDSQuotas(0xc000515f30)
	/home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/servicequotas/servicequotas.go:110 +0x34
github.com/qonto/prometheus-rds-exporter/internal/app/exporter.(*rdsCollector).getQuotasMetrics(0xc00027a2c8, {0x7f2788394240, 0xc00012f040})
	/home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/exporter/exporter.go:482 +0x1cb
created by github.com/qonto/prometheus-rds-exporter/internal/app/exporter.(*rdsCollector).fetchMetrics in goroutine 15
	/home/runner/work/prometheus-rds-exporter/prometheus-rds-exporter/internal/app/exporter/exporter.go:368 +0x755

The exporter is configured with:

  PROMETHEUS_RDS_EXPORTER_COLLECT_LOGS_SIZE: "false"
  PROMETHEUS_RDS_EXPORTER_COLLECT_USAGES: "false"
  PROMETHEUS_RDS_EXPORTER_COLLECT_INSTANCE_TAGS: "false"
marina-armis added the bug label Aug 14, 2024
qfritz (Contributor) commented Aug 19, 2024

Hello, I had a look but sadly can't reproduce the issue in my local environment. I also checked the logs of our current deployment (in case we missed something) but couldn't find a similar error.

It looks like the quota fetcher received an unexpected value from AWS. Would you be able to run

aws service-quotas list-service-quotas --service-code rds

...when you receive this error and share with us the values of the following codes:

L-7B6409FD // DB instances
L-7ADDB58A // Total storage for all DB instances
L-272F1212 // Manual DB instance snapshots
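
If the full list is too noisy, one way to pull just those three values is a JMESPath filter (a sketch assuming the standard AWS CLI and the default response shape of list-service-quotas):

aws service-quotas list-service-quotas --service-code rds \
  --query "Quotas[?QuotaCode=='L-7B6409FD' || QuotaCode=='L-7ADDB58A' || QuotaCode=='L-272F1212'].{Code:QuotaCode,Name:QuotaName,Value:Value}" \
  --output table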

airclovis added a commit to airclovis/prometheus-rds-exporter that referenced this issue Aug 20, 2024

This commit updates the exporter and servicequotas packages to properly pass the logger.

The `serviceQuotaFetcher` struct defines a logger field, but it was never initialized, which means any call to s.logger panicked inside the `serviceQuotaFetcher` methods.

Properly passing the logger and storing it in the structure allows `s.logger` to be used safely inside the `serviceQuotaFetcher` methods.

This change has been made following this [issue][1]

[1]: qonto#205

Signed-off-by: Clovis Delarue <[email protected]>
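
For readers hitting the same crash, here is a minimal sketch of the kind of bug the commit message describes; it is not the exporter's actual code, and it uses the standard library's log/slog rather than the golang.org/x/exp/slog shown in the trace. A struct carries a logger field that the buggy constructor never sets, so the first error path that touches s.logger dereferences a nil *slog.Logger and fails with the same SIGSEGV as above. The fix is to accept the logger in the constructor and store it on the struct.

package main

import (
	"fmt"
	"log/slog"
	"os"
)

// serviceQuotaFetcher mirrors the shape of the struct in the exporter:
// its methods rely on the logger field being set.
type serviceQuotaFetcher struct {
	logger *slog.Logger
}

// Buggy constructor: the logger field is left nil.
func newFetcherBuggy() *serviceQuotaFetcher {
	return &serviceQuotaFetcher{}
}

// Fixed constructor: the caller's logger is stored on the struct.
func newFetcherFixed(logger *slog.Logger) *serviceQuotaFetcher {
	return &serviceQuotaFetcher{logger: logger}
}

// getQuota is a hypothetical stand-in for the real method; it only shows
// the error path that logs through s.logger.
func (s *serviceQuotaFetcher) getQuota(serviceCode, quotaCode string) error {
	err := fmt.Errorf("could not fetch quota %s/%s", serviceCode, quotaCode)
	// With the buggy constructor, s.logger is nil and this call panics with
	// "invalid memory address or nil pointer dereference".
	s.logger.Error("failed to get quota", "error", err)
	return err
}

func main() {
	f := newFetcherFixed(slog.New(slog.NewJSONHandler(os.Stderr, nil)))
	_ = f.getQuota("rds", "L-7B6409FD")
}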
airclovis (Contributor) commented:
Hi,

A new version 0.10.1 has been released to fix this panic 🙏

You probably have another underlying issue, as this panic occurred when prometheus-rds-exporter failed to fetch some quota from AWS; with the new version, the error message should show up properly in the logs so you can track it down 🤞
