
CrunchyDB


Reference documentation:

PostgreSQL Deployment using Helm Charts

Note: this page references our COMS (Common Object Management Service) project as an example.
All CI/CD code added to COMS to implement CrunchyDB can be seen here

The postgrescluster Helm chart is a fork of the official CrunchyData Helm chart example, version 5.6.0. No changes have been made to the postgres.yaml PostgresCluster template or the values.yaml file. Two additional templates, whose values are passed in during deployment and which should not need editing, have been added to our copy of this Helm chart repo:

  • secret pgbackrest-s3-secret.yaml stores S3 credentials, should you enable Postgres data backups to S3 using pgBackRest
  • configMap postgres-bootstrap-sql-configmap.yaml contains any psql commands you want to run when CrunchyDB initializes (see the databaseInitSQL value)
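For illustration, a hedged values sketch of how those two templates might be fed at deploy time. The key names and S3 details below are assumptions and placeholders, not our real configuration; check our chart's values.yaml for the actual keys.

  # illustrative only -- key names are assumptions, S3 details are placeholders
  pgBackRestS3:
    bucket: <bucket name>
    endpoint: <s3 endpoint>
    key: <access key id>
    keySecret: <secret access key>
  databaseInitSQL:
    name: postgres-bootstrap-sql   # the added ConfigMap (name is illustrative)
    key: bootstrap.sql             # psql commands run when the cluster initializes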

Our pipeline installs the main coms Helm chart, which has the postgrescluster Helm chart as a 'local' dependency. Our GitHub Actions-based pipeline uses a distinct set of values for each deployment environment. more info
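For reference, the 'local' dependency in the parent coms chart's Chart.yaml looks roughly like this (the relative path is an assumption about our repo layout):

  dependencies:
    - name: postgrescluster
      version: 5.6.0
      repository: file://../postgrescluster   # assumed path to our chart fork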

  • In the COMS pipeline we run the helm install command in a GitHub Action and set the postgrescluster name (eg: --set postgres.name=postgres-master). This name is required in the postgres templates and becomes the name of the PostgresCluster object (see the command sketch after this list).

  • In our values we provide a users object to create a database and user that our COMS app will use. See User / Database Management for more details.

  users:
    - name: app
      databases:
        - app
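A rough sketch of the install command the GitHub Action runs (the release name, namespace, and values file below are placeholders, not the pipeline's actual arguments):

  helm upgrade --install coms ./charts/coms \
    --namespace <namespace> \
    --set postgres.name=postgres-master \
    -f ./charts/coms/values.dev.yaml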

When CrunchyDB is installed, a secret called postgres-master-pguser-app is created containing the Postgres credentials, which must be referenced in our COMS deployment template
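For example, a deployment template might pull the credentials from that secret along these lines (the env variable names are illustrative; user and password are the key names Crunchy normally writes to pguser secrets, but verify against your cluster):

  env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: postgres-master-pguser-app
          key: user
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgres-master-pguser-app
          key: password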

PR Deployments share the main CrunchyDB in our Dev environment

  • PR deployments of COMS deploy a dedicated instance of the COMS app in our DEV environment in OpenShift. To reduce server resources, instead of also deploying an extra instance of CrunchyDB, the 'On PR Opened' GitHub Action creates a temporary database and user (both named after the PR, eg pr-123) in the master (main) DEV instance of Postgres; these are dropped when the PR is merged.
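Conceptually, the action runs SQL along these lines (a sketch only; the real commands live in the COMS CI/CD code linked above):

  -- on PR opened (eg PR #123)
  CREATE USER "pr-123" WITH PASSWORD '<generated>';
  CREATE DATABASE "pr-123" OWNER "pr-123";
  -- on PR merged
  DROP DATABASE "pr-123";
  DROP USER "pr-123";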

Memory and CPU Resource Allocation

  • CrunchyDB has some very high defaults set by the PGO operator. It is important to override or re-configure these when deploying your CrunchyDB cluster. Refer to our COMS database config as an example.
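As a sketch, the overrides go on the resources blocks of the PostgresCluster instances (and optionally pgBouncer); the figures below are placeholders, not our tuned values:

  instances:
    - name: db
      replicas: 2
      resources:
        requests:
          cpu: 50m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 512Mi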

Backups

Backups are typically written both to a PVC mounted to the CrunchyDB postgres pods and to S3. Our backup configuration for COMS (Prod) is defined in the environment file.
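In PostgresCluster terms that means two pgBackRest repos, roughly of the shape below (the storage size and S3 details are placeholders; see the environment file for our real settings):

  backups:
    pgbackrest:
      repos:
        - name: repo1               # local PVC repo
          volume:
            volumeClaimSpec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 2Gi
        - name: repo2               # offsite S3 repo
          s3:
            bucket: <bucket>
            endpoint: <s3 endpoint>
            region: <region>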

Other backup commands

  • Browse backups in the PVC (see the sketch after this list):
    • in a terminal connected to the repo host, eg: postgres-master-repo-host-0
    • go to the PVC mount path: /pgbackrest/repo1
  • Manually create a one-off backup:
oc annotate -n 1dca6b-dev postgrescluster postgres-master --overwrite \
postgres-operator.crunchydata.com/pgbackrest-backup="$( date '+%F_%H:%M:%S' )"

An alternate method is to run the pgBackRest command from the master postgres pod: pgbackrest backup --stanza=db --repo=1 --type=full
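Putting the browse and verify steps together, something like this works from a workstation with oc access (the namespace is a placeholder; the pod and stanza names are the ones used above):

  # shell into the repo host and list what pgBackRest wrote to the PVC
  oc -n <namespace> rsh postgres-master-repo-host-0
  ls /pgbackrest/repo1/backup/db

  # summary of stanzas, backups and archive status
  pgbackrest info --stanza=db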

  • Inspect the backup configuration in the pgBackRest container:
    • from the repo pod (eg postgres-master-repo-host-0), in the pgbackrest-config container
    • check the config with: cat /etc/pgbackrest/conf.d/pgbackrest_instance.conf
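The same check can be done non-interactively, for example (the container name is taken from the bullet above; the namespace is a placeholder):

  oc -n <namespace> exec postgres-master-repo-host-0 -c pgbackrest-config -- \
    cat /etc/pgbackrest/conf.d/pgbackrest_instance.conf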

Other Postgres stuff

Ref: connect to postgres master using Port-forward and PSQL.

  • port-forward from a local port (eg 15432) to the pgBouncer service port 5432. Note: you should port-forward using the name of the service (eg: postgres-master-pgbouncer)
oc -n bb17f9-dev port-forward service/postgres-master-pgbouncer <local port eg: 15432>:5432
  • connect psql to localhost over the forwarded port (the in-cluster pgbouncer uri won't resolve locally, so use localhost and the local port)
psql 'postgresql://<username>:<password>@localhost:15432/app'
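The username and password for that connection can be read from the pguser secret created at install time, for example (password is the standard key in Crunchy pguser secrets; verify against your cluster):

  oc -n bb17f9-dev get secret postgres-master-pguser-app \
    -o jsonpath='{.data.password}' | base64 -d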

Network policies

  • ensure there is a network policy that allows connections from all pods within the namespace, for example (the metadata name is illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress