Tracetest postgres user error #464

Open
Jdavid77 opened this issue Apr 14, 2023 · 0 comments

Describe the bug
The default Tracetest helm chart values result in the following error:

could not execute migrations: could not get driver from postgres connection: pq: password authentication failed for user "tracetest"

To Reproduce
Steps to reproduce the behavior:

  1. Run helm install tracetest kubeshop/tracetest --values values.yaml (the only changes from the default values are the postgres password and the collector config)
  2. See error (a quick credential check is sketched after this list)
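
To confirm the credential mismatch, a minimal check, assuming the release is named tracetest and the Bitnami postgresql subchart created its default secret (both names are assumptions and may differ in your cluster):

# Password the postgresql subchart stored for the custom "tracetest" user
kubectl get secret tracetest-postgresql -o jsonpath='{.data.password}' | base64 -d; echo

# The failing authentication as reported by the server
kubectl logs deployment/tracetest | grep -i "password authentication"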

Expected behavior
Tracetest running properly

Version / Cluster

  • Tracetest helm chart version 0.2.47
  • Kubernetes v1.25.4 (AKS)

Screenshots
[screenshot attached: Screenshot 2023-04-14 at 10 53 10]

Additional context

My values.yaml

# Section for configuring the postgres database that will be used by Tracetest
postgresql:
  # For now, this is required to be enabled, otherwise tracetest will not
  # be able to function properly.
  enabled: true
  architecture: standalone
  image:
    tag: 14.6.0-debian-11-r13

  # credentials for accessing the database
  auth:
    database: "tracetest"
    username: "tracetest"
    password: test123
    existingSecret: ""

# Provisioning allows you to define initial settings to be loaded into the database on first run.
# These will only be applied when running tracetest against an empty database. If tracetest has already
# been configured, the provisioning settings are ignored.
provisioning: |
  ---
  # Datastore is where your application stores its traces. You can define several different datastores with
  # different names; however, only one is used by Tracetest.
  # You can see all available datastore configurations at https://kubeshop.github.io/tracetest/supported-backends/
  type: DataStore
  spec:
    name: otlp
    # Indicates that this datastore is an OTLP ingest endpoint
    type: otlp
    # Configures how tracetest receives traces over OTLP.
    otlp:
      type: otlp
  ---
  type: Config
  spec:
    analyticsEnabled: false
  ---
  type: PollingProfile
  spec:
    name: Custom Profile
    strategy: periodic
    default: true
    periodic:
      timeout: 2m
      retryDelay: 3s

# This section configures the strategy for polling traces from the trace backend
poolingConfig:
  # How long tracetest will wait for a complete trace before timing out
  # If you have long-running operations that will generate part of your trace, you have
  # to change this attribute to be greater than the execution time of your operation.
  maxWaitTimeForTrace: 30s

  # How long tracetest will wait to retry fetching the trace. If you define it as 5s it means that
  # tracetest will retrieve the operation trace every 5 seconds and check if it's complete. It will
  # do that until the operation times out.
  retryDelay: 5s

# Section for anonymous analytics. If it's enabled, tracetest will collect anonymous analytics data
# to help us improve the project. If you don't want to send analytics data, set enabled to false.
analytics:
  enabled: true

# Section to configure how telemetry works in Tracetest.
telemetry:
  # exporters are opentelemetry exporters. They configure how tracetest generates and sends telemetry to
  # a datastore.
  exporters:
    # This is an exporter called collector, but it could be named anything else.
    collector:
      # configures the service.name attribute in all trace spans generated by tracetest
      serviceName: tracetest
      # indicates the percentage of traces that will be sent to the datastore. 100 = 100%
      sampling: 100
      # configures the exporter
      exporter:
        # Tracetest sends data using the opentelemetry collector. For now there is no other
        # alternative, but the collector is flexible enough to send traces to any other tracing
        # backend you might need.
        type: collector
        collector:
          # endpoint to send traces to the collector
          endpoint: otel-collector.open-telemetry.svc.cluster.local:4317
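          # The endpoint above assumes a collector Service named "otel-collector" in the
          # "open-telemetry" namespace listening for OTLP/gRPC on port 4317. Confirm it
          # exists with: kubectl get svc otel-collector -n open-telemetry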

# Configures the server
server:
  # Indicates the port that Tracetest will run on
  httpPort: 11633
  # Indicates which telemetry components will be used by Tracetest
  telemetry:
    # The exporter that tracetest will use to send telemetry
    # exporter: collector
    # The exporter that tracetest will send the triggering transaction span. This is optional. If you
    # want to have the trigger transaction span in your trace, you have to configure this field to
    # send the span to the same place your application sends its telemetry.
    # applicationExporter: collector

replicaCount: 1

# You don't have to change anything below this line.

image:
  repository: kubeshop/tracetest
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 11633
  annotations: {}

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
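
One likely cause worth ruling out: the Bitnami postgresql subchart applies auth.password only when the database volume is first initialized. If an earlier install of this release left its PVC behind, the volume keeps the old password while the server authenticates with the new one, which fails exactly like this. A hedged reset sketch follows; the PVC name is assumed from the default StatefulSet naming, and deleting it wipes the database:

# Verify the PVC name first; deleting it destroys all database data.
kubectl get pvc
helm uninstall tracetest
kubectl delete pvc data-tracetest-postgresql-0
helm install tracetest kubeshop/tracetest --values values.yaml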

