
CrunchyDB Research


Steps for testing

Notes:

  • Tested inside the namespace for our COMS project (1dca6b-dev)
  • Deployed from a fork/branch of the COMS repo

Step 1: Create the Postgres cluster

The Postgres cluster (and the other services provided by the CrunchyDB operator) can be deployed using an OpenShift template, applied with the oc command-line interface.

Postgres cluster OpenShift template: postgres-cluster.yaml

Apply the template with the following command:

oc apply -f postgres-cluster.yaml # run from inside directory containing template
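
Once applied, you can check that the operator has picked up the new cluster. The commands below assume the cluster is named crunchy, matching the labels and secret name used later on this page:

oc -n 1dca6b-dev get postgrescluster crunchy # the custom resource created from the template
oc -n 1dca6b-dev get pods --selector postgres-operator.crunchydata.com/cluster=crunchy # watch the postgres pods start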

The template will set up (a sketch of such a spec follows the list):

  • an HA Postgres cluster made up of:
    • 3 'postgres' pods, each including resources for monitoring, backups and replication
    • a separate StatefulSet for each postgres pod
    • a PVC for each of the 3 postgres pods
    • pgBackRest - for creating backups
    • a PVC for storing backups
    • pgBouncer - a proxy connection pooler
    • secrets:
      • Postgres connection parameters
      • SSL certs for connections between the databases and other infrastructure
      • credentials for authorizing backups, monitoring and db connection pooling
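
For reference, a minimal sketch of what postgres-cluster.yaml might contain, based on the CrunchyDB operator's v5 PostgresCluster API. The Postgres version, instance set name and storage sizes here are assumptions, not values from our actual template:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: crunchy
  namespace: 1dca6b-dev
spec:
  postgresVersion: 13              # assumed version
  instances:
    - name: instance1              # assumed instance set name
      replicas: 3                  # the 3 postgres pods
      dataVolumeClaimSpec:         # a PVC per pod
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi           # assumed size
  backups:
    pgbackrest:                    # pgBackRest for backups
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:       # the PVC for storing backups
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi     # assumed size
  proxy:
    pgBouncer:                     # the connection pooler
      replicas: 1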

Additional Requirements

To override the default 'deny from all' network policy on our OpenShift namespaces, I added an OpenShift network policy to allow traffic between the Postgres cluster pods.
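
A minimal sketch of such a policy. The policy name is made up, and the label selector assumes the cluster is named crunchy (matching the labels used in Step 2):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-crunchy-cluster-traffic   # hypothetical name
  namespace: 1dca6b-dev
spec:
  # applies to all pods belonging to the 'crunchy' postgres cluster
  podSelector:
    matchLabels:
      postgres-operator.crunchydata.com/cluster: crunchy
  ingress:
    # allow traffic from other pods in the same cluster
    - from:
        - podSelector:
            matchLabels:
              postgres-operator.crunchydata.com/cluster: crunchy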

Step 2: Connecting to the cluster

With the Postgres cluster running, you can connect to the database using the parameters provided in the secret created by the operator. For our test this was the secret named crunchy-pguser-crunchy.

  • connect to the master postgres pod from your local machine:
oc -n 1dca6b-dev port-forward $(oc -n 1dca6b-dev get pods -o name --selector postgres-operator.crunchydata.com/role=master,postgres-operator.crunchydata.com/cluster=crunchy) 15432:5432

Note: point your local Postgres connection at port 15432 and use the connection details from the crunchy-pguser-crunchy secret in your DBeaver settings.
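
To pull those connection details out of the secret on the command line (the key names below are the standard ones the v5 operator writes to its user secrets; the values are base64-encoded):

oc -n 1dca6b-dev get secret crunchy-pguser-crunchy -o jsonpath='{.data.user}' | base64 -d
oc -n 1dca6b-dev get secret crunchy-pguser-crunchy -o jsonpath='{.data.password}' | base64 -d
oc -n 1dca6b-dev get secret crunchy-pguser-crunchy -o jsonpath='{.data.dbname}' | base64 -d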

Step 3: Running the COMS app and connecting to the database

Deploy the COMS app in a quick and dirty manner just for testing purposes.

COMS deployment config template: coms-deploy.dc.yaml

Apply the template with the following command:

oc apply -f coms-deploy.dc.yaml # run from inside directory containing template

Note: the database connection environment variables that COMS needs are pulled from the same secret created by the Postgres cluster. Additional environment variables are hard-coded in an OpenShift ConfigMap: coms-config.yaml
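
A sketch of what that wiring might look like in the deployment config's container spec. The environment variable names and the ConfigMap name are assumptions for illustration; the secret keys follow the operator's standard naming:

# excerpt from a container spec (names are illustrative)
env:
  - name: DB_HOST
    valueFrom:
      secretKeyRef:
        name: crunchy-pguser-crunchy
        key: host
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: crunchy-pguser-crunchy
        key: user
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: crunchy-pguser-crunchy
        key: password
envFrom:
  - configMapRef:
      name: coms-config   # hypothetical name from coms-config.yaml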

To allow traffic from the COMS app to the master Postgres container, apply another OpenShift network policy permitting the COMS pods to connect to the master Postgres pod.
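
Again a sketch with assumed names; the role and cluster labels match the pod selector used in the port-forward command above, while the COMS pod label is an assumption:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-coms-to-postgres   # hypothetical name
  namespace: 1dca6b-dev
spec:
  # target only the master postgres pod
  podSelector:
    matchLabels:
      postgres-operator.crunchydata.com/cluster: crunchy
      postgres-operator.crunchydata.com/role: master
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: coms   # assumed label on the COMS pods
      ports:
        - protocol: TCP
          port: 5432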

To make the COMS application available at an external web URL, apply an OpenShift route:
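
A minimal sketch of such a route; the route name, service name and target port are assumptions:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: coms         # hypothetical name
  namespace: 1dca6b-dev
spec:
  to:
    kind: Service
    name: coms       # assumes the COMS template creates a Service named 'coms'
  port:
    targetPort: 8080 # assumed container port
  tls:
    termination: edge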

Observations:

  • The COMS app exits with error:
{
  "component": "dataConnection",
  "function": "checkConnection",
  "level": "error",
  "message": "Error with database connection: no pg_hba.conf entry for host \"10.97.91.198\", user \"hippo-ha\", database \"hippo-ha\", SSL off",
  "timestamp": "2022-05-04T17:52:16.399Z"
}
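
This pg_hba.conf message means the cluster only accepts TLS connections for that host/user/database combination, and COMS was connecting with SSL off; the likely fix is enabling SSL on the COMS database connection. One way to confirm TLS itself works, assuming the Step 2 port-forward is still running and substituting values from the crunchy-pguser-crunchy secret:

psql "host=localhost port=15432 dbname=<dbname> user=<user> sslmode=require"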

Pros and Cons

Architecture Overview

See the CrunchyDB docs for full details.

Other things to note

  • OpenShift (Silver cluster) has the open-source version of the CrunchyDB operator installed.
  • The current version is 5.0.5.
  • Because it is the open-source version, there is no support contract.