Using the postgres YAML I got a 'CrashLoopBackOff' pod #91

Open
charlie-charlie opened this issue Dec 5, 2018 · 9 comments
@charlie-charlie

Basically I adopted the YAML with only small tweaks (I strongly believe these aren't the cause). When I use kubectl describe, I can see the pod failed to start. I also tested other versions of the postgres image and got the same error.
Did anyone run into the same issue? Thanks

@charlie-charlie
Author

charlie-charlie commented Dec 5, 2018

Here is the kubectl describe output for the pod:

Name:           dicro-postgresql-5b46dcbd8b-rvsd8
Namespace:      web-external
Node:           ip-10-2-2-104.ec2.internal/10.2.2.104
Start Time:     Wed, 05 Dec 2018 09:57:49 -0500
Labels:         app=dicro-postgres
                pod-template-hash=1602876846
                tier=dicro-postgreSQL
Annotations:    <none>
Status:         Running
IP:             10.2.2.109
Controlled By:  ReplicaSet/dicro-postgresql-5b46dcbd8b
Containers:
  dicro-postgresql:
    Container ID:   docker://3bad56087a3dd535e7fe24f2fa52f20408ae0ee86a110a31b5afb2b59dc2f066
    Image:          postgres:9.6.2-alpine
    Image ID:       docker-pullable://postgres@sha256:f88000211e3c682e7419ac6e6cbd3a7a4980b483ac416a3b5d5ee81d4f831cc9
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 05 Dec 2018 09:58:09 -0500
      Finished:     Wed, 05 Dec 2018 09:58:09 -0500
    Ready:          False
    Restart Count:  1
    Environment:
      POSTGRES_USER:      gitlab
      POSTGRES_DB:        gitlabhq_production
      POSTGRES_PASSWORD:  gitlab
    Mounts:
      /var/lib/postgresql/data from postgresql (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x8hbp (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  postgresql:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dicro-postgres-claim
    ReadOnly:   false
  default-token-x8hbp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x8hbp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                                 Message
  ----     ------                  ----               ----                                 -------
  Warning  FailedScheduling        38s (x3 over 39s)  default-scheduler                    pod has unbound PersistentVolumeClaims (repeated 4 times)
  Normal   Scheduled               36s                default-scheduler                    Successfully assigned dicro-postgresql-5b46dcbd8b-rvsd8 to ip-10-2-2-104.ec2.internal
  Normal   SuccessfulMountVolume   36s                kubelet, ip-10-2-2-104.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-x8hbp"
  Warning  FailedAttachVolume      34s (x3 over 36s)  attachdetach-controller              AttachVolume.Attach failed for volume "pvc-205551ea-f89e-11e8-91a5-0e0920e8c426" : "Error attaching EBS volume \"vol-0d793cdeb8625d6af\"" to instance "i-07305ad491b7d9a9b" since volume is in "creating" state
  Normal   SuccessfulAttachVolume  28s                attachdetach-controller              AttachVolume.Attach succeeded for volume "pvc-205551ea-f89e-11e8-91a5-0e0920e8c426"
  Normal   SuccessfulMountVolume   17s                kubelet, ip-10-2-2-104.ec2.internal  MountVolume.SetUp succeeded for volume "pvc-205551ea-f89e-11e8-91a5-0e0920e8c426"
  Normal   Pulled                  16s (x2 over 17s)  kubelet, ip-10-2-2-104.ec2.internal  Container image "postgres:9.6.2-alpine" already present on machine
  Normal   Created                 16s (x2 over 17s)  kubelet, ip-10-2-2-104.ec2.internal  Created container
  Normal   Started                 16s (x2 over 17s)  kubelet, ip-10-2-2-104.ec2.internal  Started container
  Warning  BackOff                 14s (x2 over 15s)  kubelet, ip-10-2-2-104.ec2.internal  Back-off restarting failed container
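
A useful next step with a CrashLoopBackOff like this is to read the logs of the previous (crashed) container, which is what surfaces the root cause later in this thread; the pod name and namespace here are taken from the describe output above:

kubectl logs dicro-postgresql-5b46dcbd8b-rvsd8 --previous -n web-external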

@MohammedFadin

Have you solved the issue?

@cjen07

cjen07 commented Jun 12, 2019

same problem +1

@pboehma

pboehma commented Jun 19, 2019

same issue...any solutions so far?

@richardforth

richardforth commented Mar 2, 2020

So I fixed mine; the cause was a missing POSTGRES_PASSWORD:

env:
  - name: POSTGRES_PASSWORD
    value: mysecretpassword

I found this out by typing

kubectl logs postgres-pod -p

and I got this:

Error: Database is uninitialized and superuser password is not specified.
       You must specify POSTGRES_PASSWORD for the superuser. Use
       "-e POSTGRES_PASSWORD=password" to set it in "docker run".

       You may also use POSTGRES_HOST_AUTH_METHOD=trust to allow all connections
       without a password. This is *not* recommended. See PostgreSQL
       documentation about "trust":
       https://www.postgresql.org/docs/current/auth-trust.html

Once I updated the YAML file, I deleted the pod and recreated it, and it started with no problems.
Hope that helps
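
A hardcoded value is fine for a demo, but the same env entry can instead pull the password from a Kubernetes Secret via secretKeyRef. A minimal sketch, assuming a Secret named postgres-secret with a password key (both names are illustrative):

kubectl create secret generic postgres-secret --from-literal=password=mysecretpassword

env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secret  # assumed Secret name
        key: password          # assumed key inside the Secret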

@DAAC

DAAC commented Mar 31, 2020

I ran the command below:

kubectl logs postgres-pod -p

and got the same error @richardforth mentioned.

I added the user and password, and that resolved the problem. I've left the example below:

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    name: postgres-pod
    app: demo-voting-app
spec:
  containers:
  - name: postgres
    image: postgres:9.4
    env:
      - name: POSTGRES_USER
        value: admin
      - name: POSTGRES_PASSWORD
        value: admin
    ports:
      - containerPort: 5432
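
To verify, apply the manifest and check that the pod reaches Running; these are standard kubectl commands, with the file name postgres-pod.yaml assumed:

kubectl apply -f postgres-pod.yaml
kubectl get pod postgres-pod
kubectl logs postgres-pod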

@Devopsforum

@DAAC this worked

@neerajkr25

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    name: postgres-pod
    app: demo-voting-app
spec:
  containers:
  - name: postgres
    image: postgres:9.4
    env:
    - name: POSTGRES_USER
      value: admin
    - name: POSTGRES_PASSWORD
      value: admin
    ports:
    - containerPort: 5432

@pdusita

pdusita commented Sep 29, 2021

I got the same problem. Then I added POSTGRES_USER and POSTGRES_PASSWORD and the pod is up. However, when I run the voting app and select cat/dog, the vote is not reflected on the result page. Do I need to create the Azure Postgres SQL instance separately?
