Hello,

I installed PMM on my bare-metal Kubernetes 1.29 cluster using Helm 3.16.

Chart: pmm-1.3.16
PMM: 2.43.1

PV and PVC are bound:
```
$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS          VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-c37c06f5-f6e2-40ae-96a8-85c37aa4a25f   10Gi       RWO            Delete           Bound    default/pmm-storage-pmm-0   managed-nfs-storage   <unset>                          122m

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/pmm-storage-pmm-0   Bound    pvc-c37c06f5-f6e2-40ae-96a8-85c37aa4a25f   10Gi       RWO            managed-nfs-storage   <unset>                 122m
```
The PMM pod never reaches the Ready state because the health probe fails. The logs indicate that the issue is with PostgreSQL:
pmm-update-perform-init.log:

```
TASK [initialization : Create grafana database in postgres] ********************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "unable to connect to database: connection to server on socket \"/run/postgresql/.s.PGSQL.5432\" failed: No such file or directory\n\tIs the server running locally and accepting connections on that socket?\n"}
```
postgresql14.log:

```
postgres: could not access the server configuration file "/srv/postgres14/postgresql.conf": No such file or directory
postgres: could not access the server configuration file "/srv/postgres14/postgresql.conf": No such file or directory
postgres: could not access the server configuration file "/srv/postgres14/postgresql.conf": No such file or directory
```
The /srv/postgres14/ directory also appears to be empty. I am not seeing any cluster-related errors or events in the Kubernetes logs — any suggestions on what could be wrong here?
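Since the volume is provisioned by `managed-nfs-storage`, a first step could be to confirm what actually landed on the volume and how it is mounted. The commands below are a diagnostic sketch, not a fix; the pod name `pmm-0` and the `default` namespace are assumptions inferred from the PVC name `default/pmm-storage-pmm-0`:

```shell
# Assumed names: pod pmm-0 in the default namespace
# (inferred from the PVC default/pmm-storage-pmm-0).

# What is actually on the volume?
kubectl exec -n default pmm-0 -- ls -la /srv

# How is /srv mounted (NFS options can matter here)?
kubectl exec -n default pmm-0 -- sh -c 'mount | grep /srv'

# Any provisioning or mount events on the claim?
kubectl describe pvc pmm-storage-pmm-0 -n default

# Which provisioner and parameters back the StorageClass?
kubectl get storageclass managed-nfs-storage -o yaml
```

One thing worth ruling out with NFS-backed volumes: if the export enforces `root_squash`, the container's init process may be unable to create the PostgreSQL data directory under /srv, which would be consistent with the empty /srv/postgres14/ seen above.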
Thanks,
Amith
AmythD changed the title from "PMM helm deployment issue caused by postgres failing" to "PMM helm deployment issue potentially caused by PostgreSQL" on Oct 13, 2024.