How to set password to connect Crunchy Postgresql Operator on K8S? #12

Closed
sertceps opened this issue Oct 13, 2021 · 17 comments · May be fixed by #13
Labels: question (Further information is requested)


@sertceps

sertceps commented Oct 13, 2021

My native language is not English, so I hope there's no offense in my tone.
I apologize for any inappropriate remarks.

Hi.

I am reading and experimenting with the document 8.5. Pgpool-II on Kubernetes.

When I enter the pgpool pod, pool_passwd is empty.

And the logs of the pgpool pod are:

Configuring Pgpool-II...
...
2021-10-13 08:10:11: pid 13: LOG:  find_primary_node_repeatedly: waiting for finding a primary node
...
2021-10-13 08:10:11: pid 13: LOG:  pgpool-II successfully started. version 4.2.3 (chichiriboshi)
2021-10-13 08:10:11: pid 13: LOG:  node status[0]: 0
2021-10-13 08:10:11: pid 13: LOG:  node status[1]: 0

I want to know how to set the password to connect to the PostgreSQL cluster managed by the Crunchy Operator.

pgpool-configmap.yaml is (lines ending with "# changed" are where I changed the default file; those markers are not in the real config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: pgpool-config
  labels:
    name: pgpool-config
data:
  pgpool.conf: |-
    listen_addresses = '*'
    port = 9999
    socket_dir = '/var/run/pgpool'
    pcp_listen_addresses = '*'
    pcp_port = 9898
    pcp_socket_dir = '/var/run/pgpool'
    backend_hostname0 = 'stars-ha'  # changed
    backend_port0 = 5432
    backend_weight0 = 1
    backend_flag0 = 'ALWAYS_PRIMARY|DISALLOW_TO_FAILOVER'
    backend_hostname1 = 'stars-replicas'  # changed
    backend_port1 = 5432
    backend_weight1 = 1
    backend_flag1 = 'DISALLOW_TO_FAILOVER'
    sr_check_period = 0
    enable_pool_hba = on  # changed
    backend_clustering_mode = 'streaming_replication'
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    connection_cache = on
    load_balance_mode = on
    ssl = on  # changed
    failover_on_backend_error = off
  pool_hba.conf: |-
    local   all         all                               trust
    host    all         all         127.0.0.1/32          trust
    host    all         all         ::1/128               trust
    host    all         all         0.0.0.0/0             trust
    hostssl all         all         0.0.0.0/0             trust  # changed

The kubectl get services output is:

kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP    13d
pgpool            ClusterIP   10.99.49.6       <none>        9999/TCP   6m7s
stars-ha          ClusterIP   10.97.111.26     <none>        5432/TCP   6h12m
stars-ha-config   ClusterIP   None             <none>        <none>     6h12m
stars-pods        ClusterIP   None             <none>        <none>     6h12m
stars-primary     ClusterIP   None             <none>        5432/TCP   6h12m
stars-replicas    ClusterIP   10.107.250.144   <none>        5432/TCP   6h12m

The services starting with stars were generated by the Crunchy Postgresql Operator.

pgpool-deploy.yaml is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgpool
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgpool
  template:
    metadata:
      labels:
        app: pgpool
    spec:
      containers:
      - name: pgpool
        image: pgpool/pgpool
        env:
        - name: POSTGRES_USERAME
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              # This secret was generated by Crunchy Postgresql Operator.
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              key: password
        volumeMounts:
        - name: pgpool-config
          mountPath: /config
      volumes:
      - name: pgpool-config
        configMap:
          name: pgpool-config
---
apiVersion: v1
kind: Service
metadata:
  name: pgpool
spec:
  selector:
    app: pgpool
  ports:
  - name: pgpool-port
    protocol: TCP
    port: 9999
    targetPort: 9999

The secret stars-pguser-stars was generated by Crunchy Postgresql Operator.

The kubectl describe output of stars-pguser-stars is:

$ kubectl describe secrets stars-pguser-stars
Name:         stars-pguser-stars
Namespace:    default
Labels:       postgres-operator.crunchydata.com/cluster=stars
              postgres-operator.crunchydata.com/pguser=stars
              postgres-operator.crunchydata.com/role=pguser
Annotations:  <none>

Type:  Opaque

Data
====
host:      25 bytes
password:  24 bytes
port:      4 bytes
uri:       92 bytes
user:      5 bytes
verifier:  133 bytes
dbname:    7 bytes

The content of the secret is:

kubectl get secrets stars-pguser-stars -o jsonpath={.data}
{
    "dbname": "dXBncmFkZQ==",
    "host": "c3RhcnMtcHJpbWFyeS5kZWZhdWx0LnN2Yw==",
    "password": "U0drOVcvZnNFPyl0Mlg2MmVQQV4rQ1R9",
    "port": "NTQzMg==",
    "uri": "cG9zdGdyZXNxbDovL3N0YXJzOlNHazlXJTJGZnNFJTNGJTI5dDJYNjJlUEElNUUrQ1QlN0RAc3RhcnMtcHJpbWFyeS5kZWZhdWx0LnN2Yzo1NDMyL3VwZ3JhZGU=",
    "user": "c3RhcnM=",
    "verifier": "U0NSQU0tU0hBLTI1NiQ0MDk2OllpKzZzb3NwMlFrdlNRUnl0a20wQmc9PSRoM0V6V2EydzNBaTJQV1dGaTFobTdaUWRDamVqVGhHbWM2SjJuNGh3U240PToxcGVLRll6cTJBOHp2NTBYeThFNHV1Q2p0UGJxSzJMQ1RjcGtYeW5hQWI0PQ=="
}
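
For reference, the values are base64-encoded; they can be decoded field by field (a sketch using the key names above):

kubectl get secret stars-pguser-stars -o jsonpath='{.data.user}' | base64 -d
# stars
kubectl get secret stars-pguser-stars -o jsonpath='{.data.dbname}' | base64 -d
# upgrade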
@sertceps changed the title from "How to set password using with Crunchy Postgresql Operator?" to "How to set password to connect Crunchy Postgresql Operator on K8S?" on Oct 13, 2021
@pengbo0328
Collaborator

I would like to check the logs to figure out the cause.
Could you show the full logs at startup?

Could you show the result of the following command?

kubectl describe secret stars-pguser-stars

@pengbo0328 self-assigned this on Oct 13, 2021
@pengbo0328 added the question (Further information is requested) label on Oct 13, 2021
@sertceps
Author

sertceps commented Oct 13, 2021

I would like to check the logs to figure out the cause. Could you show the full logs at startup?

Could you show the result of the following command?

kubectl describe secret stars-pguser-stars

Hi.
Thank you very much for your reply.

The full log is:

Configuring Pgpool-II...
Custom pgpool.conf file detected. Use custom configuration files.
Generating private key and certificate...
Starting Pgpool-II...
2021-10-13 08:52:56: pid 15: LOG:  Backend status file /tmp/pgpool_status does not exist
2021-10-13 08:52:56: pid 15: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2021-10-13 08:52:56: pid 15: LOG:  memory cache initialized
2021-10-13 08:52:56: pid 15: DETAIL:  memcache blocks :64
2021-10-13 08:52:56: pid 15: LOG:  allocating (136555320) bytes of shared memory segment
2021-10-13 08:52:56: pid 15: LOG:  allocating shared memory segment of size: 136555320 
2021-10-13 08:52:56: pid 15: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2021-10-13 08:52:56: pid 15: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2021-10-13 08:52:56: pid 15: LOG:  memory cache initialized
2021-10-13 08:52:56: pid 15: DETAIL:  memcache blocks :64
2021-10-13 08:52:56: pid 15: LOG:  pool_discard_oid_maps: discarded memqcache oid maps
2021-10-13 08:52:56: pid 15: LOG:  Setting up socket for 0.0.0.0:9999
2021-10-13 08:52:56: pid 15: LOG:  Setting up socket for :::9999
2021-10-13 08:52:56: pid 15: LOG:  find_primary_node_repeatedly: waiting for finding a primary node
2021-10-13 08:52:56: pid 49: LOG:  PCP process: 49 started
2021-10-13 08:52:56: pid 50: LOG:  process started
2021-10-13 08:52:56: pid 51: LOG:  process started
2021-10-13 08:52:56: pid 52: LOG:  process started
2021-10-13 08:52:56: pid 15: LOG:  pgpool-II successfully started. version 4.2.3 (chichiriboshi)
2021-10-13 08:52:56: pid 15: LOG:  node status[0]: 0
2021-10-13 08:52:56: pid 15: LOG:  node status[1]: 0

The result of kubectl describe secret stars-pguser-stars is:

Name:         stars-pguser-stars
Namespace:    default
Labels:       postgres-operator.crunchydata.com/cluster=stars
              postgres-operator.crunchydata.com/pguser=stars
              postgres-operator.crunchydata.com/role=pguser
Annotations:  <none>

Type:  Opaque

Data
====
password:  24 bytes
port:      4 bytes
uri:       92 bytes
user:      5 bytes
verifier:  133 bytes
dbname:    7 bytes
host:      25 bytes

@pengbo0328
Collaborator

pengbo0328 commented Oct 13, 2021

Thank you.
I think it may be a typo in POSTGRES_USERAME.
It should be POSTGRES_USERNAME.

        env:
        - name: POSTGRES_USERAME
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              # This secret was generated by Crunchy Postgresql Operator.
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              key: password
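
One quick way to verify the variables actually reach the container (a sketch; the deployment name comes from this thread):

kubectl exec deploy/pgpool -- env | grep POSTGRES_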

@sertceps
Author

Thank you.
I fixed the typo just now 🤣 🤣 🤣.
And node[0] & node[1] are still 0.
Should it change if configured correctly?

The log is:

Configuring Pgpool-II...
Custom pgpool.conf file detected. Use custom configuration files.
Generating private key and certificate...
Starting Pgpool-II...
2021-10-13 09:37:10: pid 19: LOG:  Backend status file /tmp/pgpool_status does not exist
2021-10-13 09:37:10: pid 19: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2021-10-13 09:37:10: pid 19: LOG:  memory cache initialized
2021-10-13 09:37:10: pid 19: DETAIL:  memcache blocks :64
2021-10-13 09:37:10: pid 19: LOG:  allocating (136555320) bytes of shared memory segment
2021-10-13 09:37:10: pid 19: LOG:  allocating shared memory segment of size: 136555320 
2021-10-13 09:37:10: pid 19: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2021-10-13 09:37:10: pid 19: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2021-10-13 09:37:10: pid 19: LOG:  memory cache initialized
2021-10-13 09:37:10: pid 19: DETAIL:  memcache blocks :64
2021-10-13 09:37:10: pid 19: LOG:  pool_discard_oid_maps: discarded memqcache oid maps
2021-10-13 09:37:10: pid 19: LOG:  Setting up socket for 0.0.0.0:9999
2021-10-13 09:37:10: pid 19: LOG:  Setting up socket for :::9999
2021-10-13 09:37:10: pid 19: LOG:  find_primary_node_repeatedly: waiting for finding a primary node
2021-10-13 09:37:10: pid 53: LOG:  PCP process: 53 started
2021-10-13 09:37:10: pid 54: LOG:  process started
2021-10-13 09:37:10: pid 55: LOG:  process started
2021-10-13 09:37:10: pid 56: LOG:  process started
2021-10-13 09:37:10: pid 19: LOG:  pgpool-II successfully started. version 4.2.3 (chichiriboshi)
2021-10-13 09:37:10: pid 19: LOG:  node status[0]: 0
2021-10-13 09:37:10: pid 19: LOG:  node status[1]: 0

The pgpool-deploy.yaml is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgpool
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgpool
  template:
    metadata:
      labels:
        app: pgpool
    spec:
      containers:
      - name: pgpool
        image: pgpool/pgpool
        env:
        - name: POSTGRES_USERNAME
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              key: password
        volumeMounts:
        - name: pgpool-config
          mountPath: /config
      volumes:
      - name: pgpool-config
        configMap:
          name: pgpool-config
---
apiVersion: v1
kind: Service
metadata:
  name: pgpool
spec:
  selector:
    app: pgpool
  ports:
  - name: pgpool-port
    protocol: TCP
    port: 9999
    targetPort: 9999
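
Note that running pods do not pick up ConfigMap edits automatically; after re-applying the manifests, a restart reloads the config (a sketch using the file names from this thread):

kubectl apply -f pgpool-configmap.yaml -f pgpool-deploy.yaml
kubectl rollout restart deployment/pgpool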

@pengbo0328
Collaborator

And node[0] & node[1] are still 0.
Should it change if configured correctly?

0 is correct, because pgpool doesn't manage the status of PostgreSQL on k8s.

If you can connect to pgpool and execute the show pool_nodes command, the configuration is correct.
Check the status and role columns in the result of the show pool_nodes command.

# show pool_nodes;
 node_id | hostname | port  | status | lb_weight |  role   |
---------+----------+-------+--------+-----------+---------+--
 0       | /tmp     | 11002 | up     | 0.500000  | primary |
 1       | /tmp     | 11003 | up     | 0.500000  | standby |
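
For example, with the service from this thread you can reach pgpool through a port-forward and run the command from your workstation (a sketch, assuming the stars user):

kubectl port-forward svc/pgpool 9999:9999 &
psql -h 127.0.0.1 -p 9999 -U stars -c 'show pool_nodes;'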

@sertceps
Author

Thank you very much 😁 😁 😁.
The pool_passwd has content now.
The output of the command kubectl exec -it pgpool-85cc9b75cd-hszkf -- cat /opt/pgpool-II/etc/pool_passwd is:

stars:md50f67b6fa692a85e6d8c31c33f061fa40
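
For reference, entries in this format are md5(password || username) with an md5 prefix; pgpool's bundled pg_md5 tool produces them (a sketch with a placeholder password):

# run inside the pgpool container, where pg_md5 is available; 'example' is a placeholder password
pg_md5 --md5auth --username=stars example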

Is there any other config that should be set?

@sertceps
Author

If you can connect to pgpool and execute the show pool_nodes command, the configuration is correct. Check the status and role columns in the result of the show pool_nodes command.

# show pool_nodes;
 node_id | hostname | port  | status | lb_weight |  role   |
---------+----------+-------+--------+-----------+---------+--
 0       | /tmp     | 11002 | up     | 0.500000  | primary |
 1       | /tmp     | 11003 | up     | 0.500000  | standby |

Thank you very, very much.

I am sorry to trouble you, but I am stuck on an authentication error and cannot connect to the database.

When I run the commands:

kubectl port-forward svc/pgpool 9999:9999
psql -U stars -h 127.0.0.1 -p 9999

the result is:

psql: ERROR:  failed to authenticate with backend using SCRAM
DETAIL:  valid password not found
ERROR:  failed to authenticate with backend using SCRAM
DETAIL:  valid password not found

And the log of my application deployed by kubectl is:

error: failed to authenticate with backend using SCRAM
    at Parser.parseErrorMessage (/app/node_modules/pg-protocol/dist/parser.js:287:98)
    at Parser.handlePacket (/app/node_modules/pg-protocol/dist/parser.js:126:29)
    at Parser.parse (/app/node_modules/pg-protocol/dist/parser.js:39:38)
    at TLSSocket.<anonymous> (/app/node_modules/pg-protocol/dist/index.js:11:42)
    at TLSSocket.emit (events.js:376:20)
    at addChunk (internal/streams/readable.js:309:12)
    at readableAddChunk (internal/streams/readable.js:284:9)
    at TLSSocket.Readable.push (internal/streams/readable.js:223:10)
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23)

The output of the command kubectl exec -it pgpool-67b6ddbddf-ccstk -- cat /opt/pgpool-II/etc/pool_passwd is:

stars:md50d33dc185b8bb8f192a9a0bc4967a22c

After I turned on enable_pool_hba and set the config:

  pool_hba.conf: |-
    local   all         all                               trust
    host    all         all         127.0.0.1/32          trust
    host    all         all         ::1/128               trust
    host    all         all         0.0.0.0/0             md5
    hostssl all         all         0.0.0.0/0             md5

The error log of my application is:

[Nest] 1   - 10/14/2021, 9:26:55 AM   [TypeOrmModule] Unable to connect to the database. Retrying (7)... +3009ms
error: md5 authentication failed
    at Parser.parseErrorMessage (/app/node_modules/pg-protocol/dist/parser.js:287:98)
    at Parser.handlePacket (/app/node_modules/pg-protocol/dist/parser.js:126:29)
    at Parser.parse (/app/node_modules/pg-protocol/dist/parser.js:39:38)
    at TLSSocket.<anonymous> (/app/node_modules/pg-protocol/dist/index.js:11:42)
    at TLSSocket.emit (events.js:376:20)
    at addChunk (internal/streams/readable.js:309:12)
    at readableAddChunk (internal/streams/readable.js:284:9)
    at TLSSocket.Readable.push (internal/streams/readable.js:223:10)
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23)

I have tried using the cleartext password, the md5 password, and the password from pool_passwd; the errors are the same.

My full ConfigMap is (again, lines ending with "# changed" are annotations, not in the real config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: pgpool-config
  labels:
    name: pgpool-config
data:
  pgpool.conf: |-
    listen_addresses = '*'
    port = 9999
    socket_dir = '/var/run/pgpool'
    pcp_listen_addresses = '*'
    pcp_port = 9898
    pcp_socket_dir = '/var/run/pgpool'
    backend_hostname0 = 'stars-ha'  # changed
    backend_port0 = 5432
    backend_weight0 = 1
    backend_flag0 = 'ALWAYS_PRIMARY|DISALLOW_TO_FAILOVER'
    backend_hostname1 = 'stars-replicas'  # changed
    backend_port1 = 5432
    backend_weight1 = 1
    backend_flag1 = 'DISALLOW_TO_FAILOVER'
    sr_check_period = 0
    enable_pool_hba = on  # changed
    backend_clustering_mode = 'streaming_replication'
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    connection_cache = on
    load_balance_mode = on
    ssl = on  # changed
    failover_on_backend_error = off
    # changed: allow_clear_text_frontend_auth = on
    # I turned this on at first, but the document says pool_hba.conf cannot be used with this parameter, so I turned it off.
  pool_hba.conf: |-
    local   all         all                               trust
    host    all         all         127.0.0.1/32          trust
    host    all         all         ::1/128               trust
    host    all         all         0.0.0.0/0             md5  # changed
    hostssl all         all         0.0.0.0/0             md5  # changed

My application config is:

        env:
        - name: DATABASE_HOST
          value: "pgpool"
        - name: DATABASE_PORT
          value: "9999"
        - name: DATABASE_USERNAME
          value: stars
        - name: DATABASE_PASSWORD
          valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: password } }
          # value: 463efb32846eb45bb948a8362148bee7
          # value: 0d33dc185b8bb8f192a9a0bc4967a22c
          # value: md50d33dc185b8bb8f192a9a0bc4967a22c
        - name: DATABASE_DATABASE
          valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: dbname } }

So sorry to trouble you.
Thank you very much again.

@pengbo0328
Collaborator

Are you using PostgreSQL 14?

@sertceps
Author

Hi.
I don't think so.
The image the Crunchy Operator is using is registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.4-1.
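
To be sure, you can ask the server itself rather than trusting the image tag (a sketch; the pod name is hypothetical, and the database container name follows PGO v5's convention):

# find the actual pod with: kubectl get pods -l postgres-operator.crunchydata.com/cluster=stars
kubectl exec -it stars-instance1-abcd-0 -c database -- psql -c 'SELECT version();'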

@sertceps
Author

sertceps commented Oct 14, 2021

There was something wrong in my config.
I fixed the password, and then the error became Error: The server does not support SSL connections.

Please ignore the content above.

I found that when I use valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: password } } in my application,
the error log is error: failed to authenticate with backend using SCRAM, while when I use a string like "12345" or "463efb32846eb45bb948a8362148bee7", the error log is error: md5 authentication failed.

@johanjk

johanjk commented Oct 14, 2021

I faced the exact same issue, @sertceps. What you need is to disable pgpool's md5 encryption of passwords, as the Crunchy Data secrets are already encrypted.

In the container, in /opt/pgpool-II/etc/pool_passwd, the password is currently set wrong. It needs to be the raw password from the secret.

In pool_hba.conf you need to add:

    host    all         all         0.0.0.0/0             scram-sha-256
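
In other words, SCRAM pass-through needs the clear-text password in pool_passwd rather than an md5 hash. A rough sketch of fixing the entry by hand (note: anything written inside the pod is lost on restart, so this only confirms the diagnosis):

PGPASS=$(kubectl get secret stars-pguser-stars -o jsonpath='{.data.password}' | base64 -d)
kubectl exec deploy/pgpool -- sh -c "echo 'stars:${PGPASS}' > /opt/pgpool-II/etc/pool_passwd"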

@sertceps
Author

@johanjk Thank you very much.
But the password in /opt/pgpool-II/etc/pool_passwd is still in md5 format.
Should I edit any other config?

Now the configmap is:

apiVersion: v1
kind: ConfigMap
metadata:
  name: pgpool-config
  labels:
    name: pgpool-config
data:
  pgpool.conf: |-
    listen_addresses = '*'
    port = 9999
    socket_dir = '/var/run/pgpool'
    pcp_listen_addresses = '*'
    pcp_port = 9898
    pcp_socket_dir = '/var/run/pgpool'
    backend_hostname0 = 'stars-ha'
    backend_port0 = 5432
    backend_weight0 = 1
    backend_flag0 = 'ALWAYS_PRIMARY|DISALLOW_TO_FAILOVER'
    backend_hostname1 = 'stars-replicas'
    backend_port1 = 5432
    backend_weight1 = 1
    backend_flag1 = 'DISALLOW_TO_FAILOVER'
    sr_check_period = 0
    enable_pool_hba = on
    backend_clustering_mode = 'streaming_replication'
    num_init_children = 32
    max_pool = 4
    child_life_time = 300
    child_max_connections = 0
    connection_life_time = 0
    client_idle_limit = 0
    connection_cache = on
    load_balance_mode = on
    ssl = on
    failover_on_backend_error = off
    # allow_clear_text_frontend_auth = on
  pool_hba.conf: |-
    local   all         all                               trust
    host    all         all         127.0.0.1/32          trust
    host    all         all         ::1/128               trust
    host    all         all         0.0.0.0/0             scram-sha-256
    hostssl all         all         0.0.0.0/0             scram-sha-256

pgpool deploy is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgpool
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgpool
  template:
    metadata:
      labels:
        app: pgpool
    spec:
      containers:
      - name: pgpool
        image: pgpool/pgpool
        env:
        - name: POSTGRES_USERNAME
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: stars-pguser-stars
              key: password
        volumeMounts:
        - name: pgpool-config
          mountPath: /config
      volumes:
      - name: pgpool-config
        configMap:
          name: pgpool-config
---
apiVersion: v1
kind: Service
metadata:
  name: pgpool
spec:
  selector:
    app: pgpool
  ports:
  - name: pgpool-port
    protocol: TCP
    port: 9999
    targetPort: 9999

Application env is:

        env:
        - name: DATABASE_HOST
          value: "pgpool"
        - name: DATABASE_PORT
          value: "9999"
        - name: DATABASE_USERNAME
          valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: user } }
        - name: DATABASE_PASSWORD
          valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: password } }
        - name: DATABASE_DATABASE
          valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: dbname } }

@pengbo0328
Collaborator

@sertceps

I found that when I use valueFrom: { secretKeyRef: { name: stars-pguser-stars, key: password } } in my application,
the error log is error: failed to authenticate with backend using SCRAM, while when I use a string like "12345" or "463efb32846eb45bb948a8362148bee7", the error log is error: md5 authentication failed.

I will look into this issue.
The current pgpool docker images may not support the latest Crunchy Operator.

@johanjk

johanjk commented Oct 15, 2021

@sertceps see my pull request. It allows setting SKIP_PASSWORD_ENCRYPT in pgpool's env to skip the md5 encryption step.
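
If the PR is merged, presumably it could be enabled like any other env var (hypothetical usage; the exact name and semantics are defined by the PR itself):

# hypothetical: flag proposed in the linked pull request
kubectl set env deployment/pgpool SKIP_PASSWORD_ENCRYPT=true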

@sertceps
Author

@pengbo0328 @johanjk
Thank you very much.

I found a way to solve it.

In the PGO document on User Management, it describes a way to set custom passwords.

Use the official script to generate an md5 verifier, then run the commands:

PASSWORD=example
VERIFIER=md595c1cdc86aed4cacfb226c3a048e1ed8

PASSWORD=$(echo -n $PASSWORD | base64 | tr -d '\n')
VERIFIER=$(echo -n $VERIFIER | base64 | tr -d '\n')
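
As an aside, the md5 verifier is just md5(password || username) with an md5 prefix, so it can also be generated without the script (a sketch; hippo is the user name from the PGO docs example):

PASSWORD=example
USERNAME=hippo
VERIFIER="md5$(echo -n "${PASSWORD}${USERNAME}" | md5sum | cut -d' ' -f1)"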

Patch the password into the PGO cluster:

kubectl patch secret -n postgres-operator hippo-pguser-hippo -p \
   "{\"data\":{\"password\":\"${PASSWORD}\",\"verifier\":\"${VERIFIER}\"}}"

And add to the ConfigMap:

hostssl all         all         0.0.0.0/0             md5

It seems to work fine on my host now.

Hmm, I don't know if it'll be a problem.

@medXPS

medXPS commented Apr 22, 2024

Hey guys:
Does anyone have a guide on using Pgpool-II alongside PgBouncer within a PostgreSQL cluster on Kubernetes?
Something like: PgBouncer for connection pooling and Pgpool-II for load balancing (splitting read/write operations).

@medXPS

medXPS commented Apr 26, 2024

It works using the pgpool minimal deployment (using env variables instead of a ConfigMap): https://github.com/pgpool/pgpool2_on_k8s/blob/master/pgpool-deploy-minimal.yaml
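
For reference, the pgpool/pgpool image can also take backend settings from PGPOOL_PARAMS_* environment variables instead of a mounted pgpool.conf; a minimal sketch against the services in this thread (variable names follow the linked repo's convention):

kubectl set env deployment/pgpool \
  PGPOOL_PARAMS_BACKEND_HOSTNAME0=stars-ha \
  PGPOOL_PARAMS_BACKEND_PORT0=5432 \
  PGPOOL_PARAMS_BACKEND_HOSTNAME1=stars-replicas \
  PGPOOL_PARAMS_BACKEND_PORT1=5432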
