clients report: Already Closed #1997
Comments
Hey @angelAtSequent, I tried to execute the sequence of commands using MinIO, and the error didn't appear.
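For anyone else trying to reproduce this against MinIO, a minimal local setup might look like the following sketch. The container names, ports, and credentials are placeholders of mine; the IMMUDB_S3_* variables mirror the StatefulSet posted in this thread:

```shell
# start a throwaway MinIO instance (placeholder credentials, not for production)
docker run -d --name minio -p 9000:9000 \
  -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin \
  quay.io/minio/minio server /data

# create the bucket using the MinIO client (mc)
docker run --rm --link minio --entrypoint sh quay.io/minio/mc -c \
  "mc alias set local http://minio:9000 minioadmin minioadmin && mc mb local/immudb"

# start immudb with the same S3 settings as the StatefulSet
docker run -d --name immudb -p 3322:3322 --link minio \
  -e IMMUDB_S3_STORAGE=true \
  -e IMMUDB_S3_EXTERNAL_IDENTIFIER=true \
  -e IMMUDB_S3_ACCESS_KEY_ID=minioadmin \
  -e IMMUDB_S3_SECRET_KEY=minioadmin \
  -e IMMUDB_S3_BUCKET_NAME=immudb \
  -e IMMUDB_S3_LOCATION=us-east-1 \
  -e IMMUDB_S3_PATH_PREFIX=immudb \
  -e IMMUDB_S3_ENDPOINT=http://minio:9000 \
  codenotary/immudb:1.9.3
```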
This is how the StatefulSet looks:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: immudb
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: immudb
      app.kubernetes.io/name: immudb
  serviceName: immudb
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: immudb
        app.kubernetes.io/name: immudb
    spec:
      containers:
        - env:
            - name: IMMUDB_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: adminPassword
                  name: immudb
            - name: IMMUDB_S3_STORAGE
              value: "true"
            - name: IMMUDB_S3_EXTERNAL_IDENTIFIER
              value: "true"
            - name: IMMUDB_S3_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  key: AWS_ACCESS_KEY_ID
                  name: immudb-s3-backend
            - name: IMMUDB_S3_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  key: AWS_SECRET_ACCESS_KEY
                  name: immudb-s3-backend
            - name: IMMUDB_S3_BUCKET_NAME
              valueFrom:
                secretKeyRef:
                  key: BUCKET_NAME
                  name: immudb-s3-backend
            - name: IMMUDB_S3_LOCATION
              valueFrom:
                secretKeyRef:
                  key: AWS_REGION
                  name: immudb-s3-backend
            - name: IMMUDB_S3_PATH_PREFIX
              value: immudb
            - name: IMMUDB_S3_ENDPOINT
              valueFrom:
                secretKeyRef:
                  key: AWS_S3_ENDPOINT
                  name: immudb-s3-backend
          image: codenotary/immudb:1.9.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 9
            httpGet:
              path: /readyz
              port: metrics
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: immudb
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
            - containerPort: 3322
              name: grpc
              protocol: TCP
            - containerPort: 9497
              name: metrics
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: metrics
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/immudb
              name: immudb-storage
              subPath: immudb
            - mountPath: /mnt/secrets
              name: secrets-store-inline
              readOnly: true
            - mountPath: /mnt/s3-backend-secrets
              name: s3-backend-secrets-store-inline
              readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 3322
        fsGroupChangePolicy: OnRootMismatch
        runAsGroup: 3322
        runAsNonRoot: true
        runAsUser: 3322
      serviceAccount: immudb
      serviceAccountName: immudb
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir:
            sizeLimit: 5Gi
          name: immudb-storage
        - csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: immudb
          name: secrets-store-inline
        - csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: immudb-s3-backend
          name: s3-backend-secrets-store-inline
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate

I would say yes, it has been updated from 1.9DOM2, but I'm not 100% sure.
I tried downgrading from 1.9.3 to 1.9DOM2 and it "works" again. The downside: resource utilization is huge on 1.9DOM2. Does it make sense, while running on 1.9DOM2, to dump all the databases, then spin up a fresh 1.9.3 server and restore the databases? Thanks
WDYT @ostafen |
Yes, it makes sense |
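A sketch of that dump/restore path using immudb's hot backup feature. The database name is a placeholder, and the exact flags should be verified against the immuadmin version in use:

```shell
# on the running 1.9DOM2 instance: log in and dump a database to a file
immuadmin login immudb
immuadmin hot-backup mydb -o mydb.backup

# on the fresh 1.9.3 instance: restore the dump into a database
immuadmin login immudb
immuadmin hot-restore mydb -i mydb.backup
```

Repeat per database; systemdb is managed by immudb itself and is not something you would normally restore this way.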
Now it is saying:

immudb 2024/07/04 05:14:48 ERROR: unable to open database: corrupted index: index size is too large
immudb 2024/07/04 05:14:48 ERROR: database 'systemdb' was not correctly initialized.
	Use replication to recover from external source or start without data folder.
immudb 2024/07/04 05:14:48 ERROR: unable to load system database: corrupted index: index size is too large
Error: corrupted index: index size is too large

I don't know how to get it working so I can dump the databases and then restore them into a fresh immudb installation.
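The log itself points at one recovery path: "start without data folder". A hedged sketch of what that could look like (the mount path is taken from the StatefulSet in this thread; move the directory aside rather than deleting it, so nothing is lost):

```shell
# stop immudb first, then move the corrupted data directory out of the way
mv /var/lib/immudb /var/lib/immudb.corrupted.bak
mkdir -p /var/lib/immudb

# on restart, immudb re-initializes systemdb/defaultdb from scratch;
# application databases would then need to be restored from backups
```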
We have the same issue on 1.9.3.
Could it be an issue in the S3 implementation?
Is this still working fine when downgrading to 1.9DOM2?
Nope, it is not.
What happened
immudb running with its backend on S3 reports "Already Closed", so clients cannot connect to the database.
Tested from multiple clients, including the web interface.
Some commands still work:

./immuadmin -a immudb-grpc database create helloissue
database 'helloissue' {replica: false} successfully created

Others don't.
What you expected to happen
No error message at all
How to reproduce it (as minimally and precisely as possible)
Not sure
Environment
Additional info (any other context about the problem)
As usual, it was running fine on S3; eventually it stopped working.