fix: remove zarf var that is a string but meant as a boolean
JoeHCQ1 committed Aug 16, 2024
1 parent 64e9289 commit 229b013
Showing 4 changed files with 8 additions and 9 deletions.
2 changes: 1 addition & 1 deletion chart/values.yaml
@@ -17,7 +17,7 @@ sso:

# Do not manually set this value. It toggles on/off multiple resources based on the desire to cluster in concert
# with the main confluence chart. It is unlikely that you will be benefited by bypassing this var.
clustering_enabled: "###ZARF_VAR_CONFLUENCE_CLUSTERING_ENABLED###"
clustering_enabled: false # If clustering, make true

# Session Affinity settings for Istio Destination Rule. Only applies when clustering is enabled.
sessionAffinity:
4 changes: 3 additions & 1 deletion docs/clustering.md
@@ -7,7 +7,7 @@

If you wish to enable clustering, consider the following steps and decisions to be made:

-1. Set the Zarf variable `CONFLUENCE_CLUSTERING_ENABLED` to `true`. This will cause multiple flags between the main confluence helm chart and the "helper" chart (see the `chart/` directory) to create clustering-relevant resources.
+1. Search for `# If clustering, make true` and make the suggested change. This can't be handled through a Zarf var because Zarf vars only handle strings, and `"false"` is truthy as `true` in helm.
2. Search for that variable, and then understand the helm values it is setting, to get an idea what is created for an Istio-injected clustering scenario (which does not work).
3. Decide to go forward with Istio - in this case you'll need to find a way to get it to work, or without Istio.

@@ -36,6 +36,8 @@
contributing factor to the inability of the clustering to work with Istio. Note, if in
this mode Hazelcast would be sending messages to `confluence-0` instead of `10.42.0.36`, it might play better with Istio. Injecting Hazelcast settings to alter the node discovery method and/or further work on Istio Destination
Rules are offered as two approaches that might lead to success.

+This url may also help resolve the pod-IP access problem: <https://discuss.istio.io/t/istio-mtls-and-pod-ip-port/7537>.

### Clustering w/o Istio

If you choose to cluster without Istio injection enabled, you have the following **advantages:**
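The reasoning behind step 1 of the updated doc is the usual Helm/Go-template truthiness rule: any non-empty string, including `"false"`, passes an `if` guard. A minimal sketch of the failure mode follows; the template below is hypothetical and only illustrates the rule, it is not taken from this chart.

```yaml
# values.yaml after Zarf substitution -- the variable arrives as a *string*
clustering_enabled: "false"

# hypothetical templates/hazelcast-service.yaml
{{- if .Values.clustering_enabled }}
# Rendered even though the value reads "false": Helm's `if` only treats
# empty strings, nil, 0, false, and empty collections as falsy.
apiVersion: v1
kind: Service
metadata:
  name: confluence-hazelcast
{{- end }}
```

Setting the value to a real YAML boolean (`clustering_enabled: false`), as this commit does, makes the guard behave as intended.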
8 changes: 4 additions & 4 deletions values/common-values.yaml
@@ -22,7 +22,7 @@ volumes:
storage: 10Gi
sharedHome:
persistentVolumeClaim:
create: "###ZARF_VAR_CONFLUENCE_CLUSTERING_ENABLED###"
create: false # If clustering, make true
resources:
requests:
storage: 10Gi
@@ -59,9 +59,9 @@ confluence:
type: ClusterIP # Use this whether doing a standalone server or a clustered setup
sessionAffinity: None # Change this to `ClientIP` if clustering is enabled and Istio injection is not enabled and no other system exists to provide sticky sessions
hazelcastService:
enabled: "###ZARF_VAR_CONFLUENCE_CLUSTERING_ENABLED###"
enabled: false # If clustering, make true
clustering:
enabled: "###ZARF_VAR_CONFLUENCE_CLUSTERING_ENABLED###"
enabled: false # If clustering, make true
tomcatConfig:
generateByHelm: true
seraphConfig:
@@ -70,7 +70,7 @@
container:
requests:
cpu: "500m"
memory: "2.5Gi"
memory: "3Gi"
limits:
cpu: "4"
memory: "8Gi"
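Taken together, enabling clustering after this change means editing the flagged fields by hand rather than setting a Zarf variable. A hedged sketch of the resulting values is below; the field names come from the diff above, while the nesting follows the hunk headers and is illustrative rather than a supported toggle.

```yaml
# chart/values.yaml
clustering_enabled: true # If clustering, make true

# values/common-values.yaml
volumes:
  sharedHome:
    persistentVolumeClaim:
      create: true # If clustering, make true
confluence:
  hazelcastService:
    enabled: true # If clustering, make true
  clustering:
    enabled: true # If clustering, make true
```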
3 changes: 0 additions & 3 deletions zarf.yaml
@@ -17,9 +17,6 @@ variables:
default: "uds.dev"
- name: CONFLUENCE_DB_ENDPOINT
default: "postgres"
-- name: CONFLUENCE_CLUSTERING_ENABLED
-  default: "false" # Clustering does not work with Istio enabled, see docs/clustering.md for more details.
-  pattern: ^(true|false)$

components:
- name: confluence
