[bitnami/kafka] SSL available when Kraft mode enabled? #43226

Closed
zapho opened this issue Aug 2, 2023 · 9 comments
Labels: kafka, solved, stale (15 days without activity), tech-issues (The user has a technical issue about an application), triage (Triage is needed)

Comments

@zapho

zapho commented Aug 2, 2023

Name and Version

bitnami/kafka:3.4

What architecture are you using?

amd64

What steps will reproduce the bug?

I'm trying to spin up a Kafka broker in KRaft mode using mutual TLS for client connections, but it ends up with an SSL handshake failure when a client connects.

version: "3.2"
services:

  kafka:
    image: bitnami/kafka:3.4
    ports:
      - 0.0.0.0:9093:9093/tcp
    volumes:
      - /local/path/to/kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro
      - /local/path/to/kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro
    environment:
      - BITNAMI_DEBUG=true
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_TLS_TYPE=JKS
      - KAFKA_CFG_SSL_KEYSTORE_TYPE=JKS
      - KAFKA_CFG_LISTENERS=SECURED://:9093,CONTROLLER://:9094,INTERBROKER://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=SECURED://kafka:9093,INTERBROKER://localhost:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,SECURED:SSL,INTERBROKER:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERBROKER
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9094
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_TLS_CLIENT_AUTH=required
      - PKCS12_STORE_PASSWORD=abcd1234
      - KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
      - KAFKA_CFG_SECURITY_PROTOCOL=SSL
      - KAFKA_CERTIFICATE_PASSWORD=abcd1234

When starting this container, there is no error in the logs:

kafka 15:29:30.72 INFO  ==> ** Starting Kafka setup **
kafka 15:29:30.75 DEBUG ==> Validating settings in KAFKA_* env vars...
kafka 15:29:30.76 INFO  ==> Initializing Kafka...
kafka 15:29:30.76 INFO  ==> No injected configuration files found, creating default config files
kafka 15:29:30.90 INFO  ==> Initializing KRaft...
kafka 15:29:30.90 WARN  ==> KAFKA_KRAFT_CLUSTER_ID not set - If using multiple nodes then you must use the same Cluster ID for each one
kafka 15:29:31.82 INFO  ==> Generated Kafka cluster ID '57I9pieBRH6gWy_bbQ3VYg'
kafka 15:29:31.82 INFO  ==> Formatting storage directories to add metadata...

Looking inside the container, the JKS files are there and can be accessed (as root):

docker exec -it c16 bash
c16c99f3c4f6:/$ ls -la /opt/bitnami/kafka/config/certs/
-rw-r--r-- 1 1000 1000  937 Aug  2 15:51 kafka.keystore.jks
-rw-r--r-- 1 1000 1000  930 Aug  2 15:51 kafka.truststore.jks

c16c99f3c4f6:/$ keytool -list -keystore /opt/bitnami/kafka/config/certs/kafka.keystore.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

kafka-client-cert, Aug 2, 2023, trustedCertEntry,
Certificate fingerprint (SHA-256): 24:FD:43:14:FF:CF:0B:EE:96:C1:37:79:4F:CD:BC:36:2B:37:14:F7:9E:9C:8F:F9:9E:9E:DB:2F:0B:B3:A9:5F

c16c99f3c4f6:/$ keytool -list -keystore /opt/bitnami/kafka/config/certs/kafka.truststore.jks
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

root_cert_for_kafka, Aug 2, 2023, trustedCertEntry,
Certificate fingerprint (SHA-256): 15:15:97:79:46:7F:59:7F:E4:47:56:AD:34:1F:22:79:04:8E:B2:0B:12:26:88:05:72:3B:CD:4D:DF:FF:1B:84
root@c16c99f3c4f6:/#

But what looks suspicious is that the logs show no SSL configuration:

        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS

There might be a configuration issue, but there is nothing in the logs that helps pin down where it occurs.
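For comparison, if the mounted JKS files were being picked up by the image, I would expect the dumped KafkaConfig to show values along these lines (paths per the mount points above, passwords masked; this is what a working setup should print, not actual output from my container):

        ssl.client.auth = required
        ssl.keystore.location = /opt/bitnami/kafka/config/certs/kafka.keystore.jks
        ssl.keystore.password = [hidden]
        ssl.keystore.type = JKS
        ssl.truststore.location = /opt/bitnami/kafka/config/certs/kafka.truststore.jks
        ssl.truststore.password = [hidden]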

What is the expected behavior?

The kafka client can successfully connect via the TLS connection to the broker.

What do you see instead?

Running a Kafka client with the same keystore and truststore files used for the broker leads to a handshake failure:

kafka-topics.sh --list --bootstrap-server localhost:9093  --command-config ./ssl-client.properties
[2023-08-02 18:05:02,745] ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2023-08-02 18:05:02,747] WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:340)
        at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:186)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
        at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
        at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
        at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
        at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
        at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:518)
        at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:308)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1407)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1338)
        at java.base/java.lang.Thread.run(Thread.java:829)
Error while executing topic command : SSL handshake failed
[2023-08-02 18:05:02,750] ERROR org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
        at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
        at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:340)
        at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293)
        at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:186)
        at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:172)
        at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
        at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
        at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
        at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
        at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
        at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:518)
        at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:308)
        at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)
        at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1407)
        at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1338)
        at java.base/java.lang.Thread.run(Thread.java:829)
 (kafka.admin.TopicCommand$)
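The ssl-client.properties passed via --command-config is not reproduced above; it is the usual mutual-TLS client configuration, roughly of this shape (illustrative values only, the actual file is not shown here):

# illustrative client config; paths and password are placeholders
security.protocol=SSL
ssl.keystore.location=/local/path/to/kafka.keystore.jks
ssl.keystore.password=abcd1234
ssl.key.password=abcd1234
ssl.truststore.location=/local/path/to/kafka.truststore.jks
ssl.truststore.password=abcd1234
ssl.endpoint.identification.algorithm=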

Probing the TLS socket directly leads to the same kind of issue:

openssl s_client -connect localhost:9093
CONNECTED(00000003)
Can't use SSL_get_servername
405CCD44547F0000:error:0A000410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:ssl/record/rec_layer_s3.c:1584:SSL alert number 40
---
no peer certificate available
---
No client certificate CA names sent
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 243 bytes and written 295 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

Matching log from the broker:

[2023-08-02 16:06:31,969] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.23.0.1 (channelId=172.23.0.2:9093-172.23.0.1:51382-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)

Additional information

Full broker logs

kafka 15:56:15.52
kafka 15:56:15.52 Welcome to the Bitnami kafka container
kafka 15:56:15.52 Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 15:56:15.52 Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 15:56:15.52
kafka 15:56:15.52 INFO  ==> ** Starting Kafka setup **
kafka 15:56:15.56 DEBUG ==> Validating settings in KAFKA_* env vars...
kafka 15:56:15.57 INFO  ==> Initializing Kafka...
kafka 15:56:15.57 INFO  ==> No injected configuration files found, creating default config files
kafka 15:56:15.72 INFO  ==> Initializing KRaft...
kafka 15:56:15.72 WARN  ==> KAFKA_KRAFT_CLUSTER_ID not set - If using multiple nodes then you must use the same Cluster ID for each one
kafka 15:56:16.65 INFO  ==> Generated Kafka cluster ID '3vaDVZ4PTSuN9wu2zxFcHw'
kafka 15:56:16.65 INFO  ==> Formatting storage directories to add metadata...
Formatting /bitnami/kafka/data with metadata.version 3.4-IV0.

kafka 15:56:17.79 INFO  ==> ** Kafka setup finished! **
kafka 15:56:17.80 INFO  ==> ** Starting Kafka **
[2023-08-02 15:56:18,287] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2023-08-02 15:56:18,506] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2023-08-02 15:56:18,622] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2023-08-02 15:56:18,625] INFO Starting controller (kafka.server.ControllerServer)
[2023-08-02 15:56:18,916] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-08-02 15:56:18,922] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2023-08-02 15:56:18,958] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2023-08-02 15:56:18,960] INFO [SharedServer id=0] Starting SharedServer (kafka.server.SharedServer)
[2023-08-02 15:56:19,014] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2023-08-02 15:56:19,014] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2023-08-02 15:56:19,015] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2023-08-02 15:56:19,038] INFO Initialized snapshots with IDs Set() from /bitnami/kafka/data/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2023-08-02 15:56:19,049] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2023-08-02 15:56:19,146] INFO [RaftManager nodeId=0] Completed transition to Unattached(epoch=0, voters=[0], electionTimeoutMs=1620) (org.apache.kafka.raft.QuorumState)
[2023-08-02 15:56:19,152] INFO [RaftManager nodeId=0] Completed transition to CandidateState(localId=0, epoch=1, retries=1, electionTimeoutMs=1039) (org.apache.kafka.raft.QuorumState)
[2023-08-02 15:56:19,159] INFO [RaftManager nodeId=0] Completed transition to Leader(localId=0, epoch=1, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) (org.apache.kafka.raft.QuorumState)
[2023-08-02 15:56:19,184] INFO [kafka-raft-outbound-request-thread]: Starting (kafka.raft.RaftSendThread)
[2023-08-02 15:56:19,184] INFO [kafka-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2023-08-02 15:56:19,200] INFO [RaftManager nodeId=0] High watermark set to LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=91)]) for the first time for epoch 1 based on indexOfHw 0 and voters [ReplicaState(nodeId=0, endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=91)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState)
[2023-08-02 15:56:19,210] INFO [RaftManager nodeId=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@774292104 (org.apache.kafka.raft.KafkaRaftClient)
[2023-08-02 15:56:19,212] INFO [Controller 0] Creating new QuorumController with clusterId 3vaDVZ4PTSuN9wu2zxFcHw, authorizer Optional.empty. (org.apache.kafka.controller.QuorumController)
[2023-08-02 15:56:19,214] INFO [RaftManager nodeId=0] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@1672792801 (org.apache.kafka.raft.KafkaRaftClient)
[2023-08-02 15:56:19,216] INFO [Controller 0] Becoming the active controller at epoch 1, committed offset -1, committed epoch -1 (org.apache.kafka.controller.QuorumController)
[2023-08-02 15:56:19,218] INFO [MetadataLoader 0] Publishing initial snapshot at offset -1 to SnapshotGenerator (org.apache.kafka.image.loader.MetadataLoader)
[2023-08-02 15:56:19,219] INFO [Controller 0] The metadata log appears to be empty. Appending 1 bootstrap record(s) at metadata.version 3.4-IV0 from the binary bootstrap metadata file: /bitnami/kafka/data/bootstrap.checkpoint. (org.apache.kafka.controller.QuorumController)
[2023-08-02 15:56:19,219] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,220] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,220] INFO [Controller 0] Setting metadata.version to 8 (org.apache.kafka.controller.FeatureControlManager)
[2023-08-02 15:56:19,221] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,222] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,232] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,236] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2023-08-02 15:56:19,241] INFO [BrokerServer id=0] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2023-08-02 15:56:19,242] INFO [BrokerServer id=0] Starting broker (kafka.server.BrokerServer)
[2023-08-02 15:56:19,255] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,255] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,260] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,261] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-02 15:56:19,281] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-08-02 15:56:19,282] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Recorded new controller, from now on will use node kafka:9094 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-08-02 15:56:19,321] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-08-02 15:56:19,322] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.DataPlaneAcceptor)
[2023-08-02 15:56:19,414] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(SECURED) (kafka.network.SocketServer)
[2023-08-02 15:56:19,414] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-08-02 15:56:19,414] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2023-08-02 15:56:19,417] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(INTERBROKER) (kafka.network.SocketServer)
[2023-08-02 15:56:19,421] INFO [BrokerToControllerChannelManager broker=0 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-08-02 15:56:19,421] INFO [BrokerToControllerChannelManager broker=0 name=alterPartition]: Recorded new controller, from now on will use node kafka:9094 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-08-02 15:56:19,433] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,433] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,435] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,435] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,447] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,450] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,486] INFO [RaftManager nodeId=0] Registered the listener kafka.server.metadata.BrokerMetadataListener@2015440035 (org.apache.kafka.raft.KafkaRaftClient)
[2023-08-02 15:56:19,486] INFO [BrokerToControllerChannelManager broker=0 name=heartbeat]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-08-02 15:56:19,488] INFO [BrokerLifecycleManager id=0] Incarnation HG-mR2swReyfWUSyVmt8Ig of broker 0 in cluster 3vaDVZ4PTSuN9wu2zxFcHw is now STARTING. (kafka.server.BrokerLifecycleManager)
[2023-08-02 15:56:19,494] INFO [BrokerToControllerChannelManager broker=0 name=heartbeat]: Recorded new controller, from now on will use node kafka:9094 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-08-02 15:56:19,508] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-02 15:56:19,509] INFO [BrokerServer id=0] Waiting for broker metadata to catch up. (kafka.server.BrokerServer)
[2023-08-02 15:56:19,543] INFO [Controller 0] Registered new broker: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=HG-mR2swReyfWUSyVmt8Ig, brokerEpoch=2, endPoints=[BrokerEndpoint(name='SECURED', host='kafka', port=9093, securityProtocol=1), BrokerEndpoint(name='INTERBROKER', host='localhost', port=9092, securityProtocol=0)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=8)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
[2023-08-02 15:56:19,576] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 2 (kafka.server.BrokerLifecycleManager)
[2023-08-02 15:56:19,582] INFO [BrokerLifecycleManager id=0] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2023-08-02 15:56:19,585] INFO [BrokerMetadataListener id=0] Starting to publish metadata events at offset 2. (kafka.server.metadata.BrokerMetadataListener)
[2023-08-02 15:56:19,587] INFO [BrokerMetadataPublisher id=0] Publishing initial metadata at offset OffsetAndEpoch(offset=2, epoch=1) with metadata.version 3.4-IV0. (kafka.server.metadata.BrokerMetadataPublisher)
[2023-08-02 15:56:19,588] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
[2023-08-02 15:56:19,590] INFO Attempting recovery for all logs in /bitnami/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
[2023-08-02 15:56:19,595] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2023-08-02 15:56:19,606] INFO Loaded 0 logs in 17ms. (kafka.log.LogManager)
[2023-08-02 15:56:19,606] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2023-08-02 15:56:19,607] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2023-08-02 15:56:19,627] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2023-08-02 15:56:19,662] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2023-08-02 15:56:19,664] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2023-08-02 15:56:19,664] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2023-08-02 15:56:19,666] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2023-08-02 15:56:19,667] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2023-08-02 15:56:19,668] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2023-08-02 15:56:19,668] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2023-08-02 15:56:19,669] INFO [BrokerMetadataPublisher id=0] Updating metadata.version to 8 at offset OffsetAndEpoch(offset=2, epoch=1). (kafka.server.metadata.BrokerMetadataPublisher)
[2023-08-02 15:56:19,675] INFO KafkaConfig values:
        advertised.listeners = SECURED://kafka:9093,INTERBROKER://localhost:9092
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name =
        auto.create.topics.enable = true
        auto.include.jmx.reporter = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.heartbeat.interval.ms = 2000
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        broker.session.timeout.ms = 9000
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.listener.names = CONTROLLER
        controller.quorum.append.linger.ms = 25
        controller.quorum.election.backoff.max.ms = 1000
        controller.quorum.election.timeout.ms = 1000
        controller.quorum.fetch.timeout.ms = 2000
        controller.quorum.request.timeout.ms = 2000
        controller.quorum.retry.backoff.ms = 20
        controller.quorum.voters = [0@kafka:9094]
        controller.quota.window.num = 11
        controller.quota.window.size.seconds = 1
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delegation.token.secret.key = null
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        early.start.listeners = null
        fetch.max.bytes = 57671680
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        initial.broker.registration.timeout.ms = 60000
        inter.broker.listener.name = INTERBROKER
        inter.broker.protocol.version = 3.4-IV0
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = CONTROLLER:PLAINTEXT,SECURED:SSL,INTERBROKER:PLAINTEXT
        listeners = SECURED://:9093,CONTROLLER://:9094,INTERBROKER://:9092
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /bitnami/kafka/data
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 3.0-IV1
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connection.creation.rate = 2147483647
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1048588
        metadata.log.dir = null
        metadata.log.max.record.bytes.between.snapshots = 20971520
        metadata.log.max.snapshot.interval.ms = 3600000
        metadata.log.segment.bytes = 1073741824
        metadata.log.segment.min.bytes = 8388608
        metadata.log.segment.ms = 604800000
        metadata.max.idle.interval.ms = 500
        metadata.max.retention.bytes = 104857600
        metadata.max.retention.ms = 604800000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        node.id = 0
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 1
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
        process.roles = [controller, broker]
        producer.id.expiration.check.interval.ms = 600000
        producer.id.expiration.ms = 86400000
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.window.num = 11
        quota.window.size.seconds = 1
        remote.log.index.file.cache.total.size.bytes = 1073741824
        remote.log.manager.task.interval.ms = 30000
        remote.log.manager.task.retry.backoff.max.ms = 30000
        remote.log.manager.task.retry.backoff.ms = 500
        remote.log.manager.task.retry.jitter = 0.2
        remote.log.manager.thread.pool.size = 10
        remote.log.metadata.manager.class.name = null
        remote.log.metadata.manager.class.path = null
        remote.log.metadata.manager.impl.prefix = null
        remote.log.metadata.manager.listener.name = null
        remote.log.reader.max.pending.tasks = 100
        remote.log.reader.threads = 10
        remote.log.storage.manager.class.name = null
        remote.log.storage.manager.class.path = null
        remote.log.storage.manager.impl.prefix = null
        remote.log.storage.system.enable = false
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 30000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.connect.timeout.ms = null
        sasl.login.read.timeout.ms = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.login.retry.backoff.max.ms = 10000
        sasl.login.retry.backoff.ms = 100
        sasl.mechanism.controller.protocol = GSSAPI
        sasl.mechanism.inter.broker.protocol = GSSAPI
        sasl.oauthbearer.clock.skew.seconds = 30
        sasl.oauthbearer.expected.audience = null
        sasl.oauthbearer.expected.issuer = null
        sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
        sasl.oauthbearer.jwks.endpoint.url = null
        sasl.oauthbearer.scope.claim.name = scope
        sasl.oauthbearer.sub.claim.name = sub
        sasl.oauthbearer.token.endpoint.url = null
        sasl.server.callback.handler.class = null
        sasl.server.max.receive.size = 524288
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        socket.listen.backlog.size = 50
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = https
        ssl.engine.factory.class = null
        ssl.key.password = null
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = null
        ssl.keystore.password = null
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = null
        ssl.truststore.password = null
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 1
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 1
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.clientCnxnSocket = null
        zookeeper.connect = null
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.metadata.migration.enable = false
        zookeeper.session.timeout.ms = 18000
        zookeeper.set.acl = false
        zookeeper.ssl.cipher.suites = null
        zookeeper.ssl.client.enable = false
        zookeeper.ssl.crl.enable = false
        zookeeper.ssl.enabled.protocols = null
        zookeeper.ssl.endpoint.identification.algorithm = HTTPS
        zookeeper.ssl.keystore.location = null
        zookeeper.ssl.keystore.password = null
        zookeeper.ssl.keystore.type = null
        zookeeper.ssl.ocsp.enable = false
        zookeeper.ssl.protocol = TLSv1.2
        zookeeper.ssl.truststore.location = null
        zookeeper.ssl.truststore.password = null
        zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig)
[2023-08-02 15:56:19,679] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2023-08-02 15:56:19,684] INFO [Controller 0] The request from broker 0 to unfence has been granted because it has caught up with the offset of it's register broker record 2. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2023-08-02 15:56:19,716] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2023-08-02 15:56:19,716] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2023-08-02 15:56:19,717] INFO Kafka version: 3.4.0 (org.apache.kafka.common.utils.AppInfoParser)
[2023-08-02 15:56:19,717] INFO Kafka commitId: 2e1947d240607d53 (org.apache.kafka.common.utils.AppInfoParser)
[2023-08-02 15:56:19,717] INFO Kafka startTimeMs: 1690991779716 (org.apache.kafka.common.utils.AppInfoParser)
[2023-08-02 15:56:19,718] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)
[2023-08-02 15:56:35,639] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /172.23.0.1 (channelId=172.23.0.2:9093-172.23.0.1:51054-0) (SSL handshake failed) (org.apache.kafka.common.network.Selector)
@zapho zapho added the tech-issues The user has a technical issue about an application label Aug 2, 2023
@github-actions github-actions bot added the triage Triage is needed label Aug 2, 2023
@javsalgar javsalgar changed the title SSL available when Kraft mode enabled? [bitnami/kafka] SSL available when Kraft mode enabled? Aug 3, 2023
@javsalgar javsalgar added the kafka label Aug 3, 2023
@github-actions github-actions bot added in-progress and removed triage Triage is needed labels Aug 3, 2023
@bitnami-bot bitnami-bot assigned jotamartos and unassigned javsalgar Aug 3, 2023
@zapho
Author

zapho commented Aug 3, 2023

This issue might be caused by the way the JKS stores are created. The environment provides the client and CA certificates along with their private keys, and they are used to create the JKS stores as follows:

BITNAMI_JKS_STORES_DEST=/bitnami/kafka/config/certs
mkdir -p ${BITNAMI_JKS_STORES_DEST}

STOREPASS=${KAFKA_CERTIFICATE_PASSWORD}
KEYSTORE_FILE=${BITNAMI_JKS_STORES_DEST}/kafka.keystore.jks
TRUSTSTORE_FILE=${BITNAMI_JKS_STORES_DEST}/kafka.truststore.jks

CERTDIR=/etc/my-company/pki/app
CERTFILE="$CERTDIR/chain.pem"
KEYFILE="$CERTDIR/key.pem"
CLIENT_CERT_COMBINED="/tmp/client-cert-combined.pem"
cat $CERTFILE $KEYFILE > $CLIENT_CERT_COMBINED

CACERTDIR=/etc/my-company/pki/ca
CACERTFILE="$CACERTDIR/ca.crt.pem"
CAKEYFILE="$CACERTDIR/key.pem"
CA_CERT_COMBINED="/tmp/ca-combined.pem"
cat $CACERTFILE $CAKEYFILE > $CA_CERT_COMBINED

echo using PEM cert $CACERTFILE to create Java truststore in $TRUSTSTORE_FILE
keytool -import -alias root_cert_for_kafka -file $CACERTFILE -keystore $TRUSTSTORE_FILE -storetype JKS -storepass $STOREPASS -noprompt

echo using $KEYFILE and $CERTFILE to create Java keystore in $KEYSTORE_FILE
#openssl pkcs12 -export -inkey "$KEYFILE" -in "$CLIENT_CERT_COMBINED" -password "pass:$STOREPASS" -out "$KEYSTORE_FILE"
keytool -import -alias kafka-client-cert -file $CLIENT_CERT_COMBINED -keystore $KEYSTORE_FILE -storetype JKS -storepass $STOREPASS -noprompt

# 'just' adding the client cert to the Java keystore is not enough, a TLS handshake error occurs when trying to connect
# to the Kafka broker
# the following is an attempt to solve this issue
echo using $CAKEYFILE and $CACERTFILE to add the CA cert into the Java keystore in $KEYSTORE_FILE
keytool -import -alias ca-cert -file $CA_CERT_COMBINED -keystore $KEYSTORE_FILE -storetype JKS -storepass $STOREPASS -noprompt
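If the problem is that the keystore only contains a trustedCertEntry (as the keytool -list output above suggests) rather than the private key, one fix could be to build the keystore from a PKCS12 bundle instead of keytool -import, roughly like this (untested sketch, reusing the variables defined above):

# bundle the private key and certificate chain into a PKCS12 store first
openssl pkcs12 -export -inkey "$KEYFILE" -in "$CERTFILE" \
    -name kafka -password "pass:$STOREPASS" -out /tmp/kafka.p12
# then convert it into the JKS keystore expected by the image,
# so it ends up with a PrivateKeyEntry instead of a trustedCertEntry
keytool -importkeystore -srckeystore /tmp/kafka.p12 -srcstoretype PKCS12 \
    -srcstorepass "$STOREPASS" -destkeystore "$KEYSTORE_FILE" \
    -deststoretype JKS -deststorepass "$STOREPASS" -noprompt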

@zapho
Author

zapho commented Aug 3, 2023

Another thing I do not understand: when I leave KAFKA_CERTIFICATE_PASSWORD out of the environment variables, or set a wrong value, the broker still starts fine with no exception. Since the keystores are password-protected, I would expect an error at startup.
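A quick way to check whether the password is ever actually exercised against the keystore would be to open it non-interactively inside the container, for example (illustrative command, container id and password as above):

docker exec -it c16 keytool -list \
    -keystore /opt/bitnami/kafka/config/certs/kafka.keystore.jks \
    -storepass abcd1234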

@carrodher carrodher assigned migruiz4 and unassigned jotamartos Aug 4, 2023
@nvp152

nvp152 commented Aug 6, 2023

Have you tried passing -keyalg RSA to your keytool commands? Depending on the JDK your keytool came from, the key may be getting generated with DSA. Since the Kafka in the latest image runs on JDK 17, I believe TLSv1.3 is preferred for SSL connections, and DSA is no longer supported for that TLS version.
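For example, you can inspect the key algorithm of the entry already in the keystore, or force RSA when generating a new key pair, roughly like this (illustrative alias and password):

# inspect the key algorithm of the existing keystore entry
keytool -list -v -keystore kafka.keystore.jks -storepass abcd1234 | grep -i algorithm
# or generate a new key pair with RSA explicitly
keytool -genkeypair -alias kafka -keyalg RSA -keysize 2048 \
    -dname "CN=kafka" -validity 365 \
    -keystore kafka.keystore.jks -storepass abcd1234 -keypass abcd1234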

@zapho
Author

zapho commented Aug 7, 2023

Hi @nvp152,
Thanks for the answer. The keystores are fine; I used them with other Kafka broker images and got a working SSL setup.

@migruiz4
Member

migruiz4 commented Aug 7, 2023

Hi @zapho,

I haven't been able to reproduce your issue using the following docker-compose:

version: '2'

services:
  kafka:
    image: 'bitnami/kafka:3.4'
    hostname: kafka
    ports:
      - '9092'
    environment:
      - BITNAMI_DEBUG=yes
      # KRaft settings
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9094
      # Listeners settings
      - KAFKA_CFG_ADVERTISED_LISTENERS=SECURED://kafka:9093,INTERBROKER://localhost:9092
      - KAFKA_CFG_LISTENERS=SECURED://:9093,CONTROLLER://:9094,INTERBROKER://:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,SECURED:SSL,INTERBROKER:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERBROKER
      # SSL settings
      - KAFKA_CERTIFICATE_PASSWORD=my_pass
      - KAFKA_TLS_TYPE=JKS
      - KAFKA_TLS_CLIENT_AUTH=required
      - KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM=
    volumes:
      - "./certs:/opt/bitnami/kafka/config/certs"

In my case, I generated the certificates using the following script:

#!/bin/bash
mkdir certs
i=0 # node index used in the keystore alias (kafka-$i) below
# Root CA
echo "Creating CA certificate and key"
openssl req -new -x509 -keyout certs/ca.key -out certs/ca.crt -days 365 -subj "/CN=Sample CA/OU=US/O=US/ST=US/C=US" -passout pass:my_pass

echo "Creating Truststore"
keytool -keystore certs/kafka.truststore.jks -alias CARoot -import -file certs/ca.crt -storepass my_pass -keypass my_pass -noprompt

# Node cert
echo "Creating node key"
keytool -keystore certs/kafka.keystore.jks -alias kafka-$i -validity 365 -genkey -keyalg RSA -dname "cn=kafka, ou=US, o=US, c=US" -storepass my_pass -keypass my_pass
echo "Creating certificate sign request"
keytool -keystore certs/kafka.keystore.jks -alias kafka-$i -certreq -file certs/tls.srl -storepass my_pass -keypass my_pass
echo "Signing certificate request using self-signed CA"
openssl x509 -req -CA certs/ca.crt -CAkey certs/ca.key \
    -in certs/tls.srl -out certs/tls.crt \
    -days 365 -CAcreateserial \
    -passin pass:my_pass
echo "Adding Ca certificate to the keystore"
keytool -keystore certs/kafka.keystore.jks -alias CARoot -import -file certs/ca.crt -storepass my_pass -keypass my_pass -noprompt
echo "Adding signed certificate"
keytool -keystore certs/kafka.keystore.jks -alias kafka-$i -import -file certs/tls.crt -storepass my_pass -keypass my_pass -noprompt

# Cleanup
rm certs/tls.crt certs/tls.srl

Here are my output logs:
kafka 13:06:07.73 
kafka 13:06:07.73 Welcome to the Bitnami kafka container
kafka 13:06:07.73 Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 13:06:07.73 Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 13:06:07.73 
kafka 13:06:07.73 INFO  ==> ** Starting Kafka setup **
kafka 13:06:07.81 DEBUG ==> Validating settings in KAFKA_* env vars...
kafka 13:06:09.07 WARN  ==> Kafka has been configured with a PLAINTEXT listener, this setting is not recommended for production environments.
kafka 13:06:09.09 INFO  ==> Initializing Kafka...
kafka 13:06:09.10 INFO  ==> No injected configuration files found, creating default config files
kafka 13:06:09.34 INFO  ==> Initializing KRaft storage metadata
kafka 13:06:09.34 WARN  ==> KAFKA_KRAFT_CLUSTER_ID not set - If using multiple nodes then you must use the same Cluster ID for each one
kafka 13:06:10.43 INFO  ==> Generated Kafka cluster ID 'C6h8M2wWTsumdM_uEqfoTw'
kafka 13:06:10.43 INFO  ==> Formatting storage directories to add metadata...
Formatting /bitnami/kafka/data with metadata.version 3.4-IV0.
kafka 13:06:11.79 INFO  ==> ** Kafka setup finished! **

kafka 13:06:11.80 INFO  ==> ** Starting Kafka **
[2023-08-07 13:06:12,469] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2023-08-07 13:06:12,730] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2023-08-07 13:06:12,872] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2023-08-07 13:06:12,874] INFO Starting controller (kafka.server.ControllerServer)
[2023-08-07 13:06:13,215] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-08-07 13:06:13,222] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
[2023-08-07 13:06:13,266] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2023-08-07 13:06:13,268] INFO [SharedServer id=0] Starting SharedServer (kafka.server.SharedServer)
[2023-08-07 13:06:13,334] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2023-08-07 13:06:13,335] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2023-08-07 13:06:13,336] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Producer state recovery took 1ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2023-08-07 13:06:13,362] INFO Initialized snapshots with IDs Set() from /bitnami/kafka/data/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2023-08-07 13:06:13,376] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2023-08-07 13:06:13,485] INFO [RaftManager nodeId=0] Completed transition to Unattached(epoch=0, voters=[0], electionTimeoutMs=1161) (org.apache.kafka.raft.QuorumState)
[2023-08-07 13:06:13,491] INFO [RaftManager nodeId=0] Completed transition to CandidateState(localId=0, epoch=1, retries=1, electionTimeoutMs=1399) (org.apache.kafka.raft.QuorumState)
[2023-08-07 13:06:13,498] INFO [RaftManager nodeId=0] Completed transition to Leader(localId=0, epoch=1, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) (org.apache.kafka.raft.QuorumState)
[2023-08-07 13:06:13,528] INFO [kafka-raft-outbound-request-thread]: Starting (kafka.raft.RaftSendThread)
[2023-08-07 13:06:13,529] INFO [kafka-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2023-08-07 13:06:13,542] INFO [MetadataLoader 0] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2023-08-07 13:06:13,548] INFO [RaftManager nodeId=0] High watermark set to LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=91)]) for the first time for epoch 1 based on indexOfHw 0 and voters [ReplicaState(nodeId=0, endOffset=Optional[LogOffsetMetadata(offset=1, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=91)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)] (org.apache.kafka.raft.LeaderState)
[2023-08-07 13:06:13,557] INFO [RaftManager nodeId=0] Registered the listener org.apache.kafka.image.loader.MetadataLoader@265374995 (org.apache.kafka.raft.KafkaRaftClient)
[2023-08-07 13:06:13,563] INFO [Controller 0] Creating new QuorumController with clusterId C6h8M2wWTsumdM_uEqfoTw, authorizer Optional.empty. (org.apache.kafka.controller.QuorumController)
[2023-08-07 13:06:13,564] INFO [MetadataLoader 0] handleCommit: The loader is still catching up because we have loaded up to offset -1, but the high water mark is 1 (org.apache.kafka.image.loader.MetadataLoader)
[2023-08-07 13:06:13,564] INFO [RaftManager nodeId=0] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@792956498 (org.apache.kafka.raft.KafkaRaftClient)
[2023-08-07 13:06:13,566] INFO [Controller 0] Becoming the active controller at epoch 1, committed offset -1, committed epoch -1 (org.apache.kafka.controller.QuorumController)
[2023-08-07 13:06:13,568] INFO [Controller 0] The metadata log appears to be empty. Appending 1 bootstrap record(s) at metadata.version 3.4-IV0 from the binary bootstrap metadata file: /bitnami/kafka/data/bootstrap.checkpoint. (org.apache.kafka.controller.QuorumController)
[2023-08-07 13:06:13,569] INFO [Controller 0] Setting metadata.version to 8 (org.apache.kafka.controller.FeatureControlManager)
[2023-08-07 13:06:13,570] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,571] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,573] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,575] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,591] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:13,598] INFO [SocketServer listenerType=CONTROLLER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2023-08-07 13:06:13,604] INFO [BrokerServer id=0] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2023-08-07 13:06:13,604] INFO [BrokerServer id=0] Starting broker (kafka.server.BrokerServer)
[2023-08-07 13:06:13,605] INFO [MetadataLoader 0] handleCommit: The loader finished catching up to the current high water mark of 2 (org.apache.kafka.image.loader.MetadataLoader)
[2023-08-07 13:06:13,616] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,617] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,618] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,618] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2023-08-07 13:06:13,644] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-08-07 13:06:13,645] INFO [MetadataLoader 0] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 1 (org.apache.kafka.image.loader.MetadataLoader)
[2023-08-07 13:06:13,646] INFO [BrokerToControllerChannelManager broker=0 name=forwarding]: Recorded new controller, from now on will use node kafka:9094 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-08-07 13:06:13,694] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-08-07 13:06:13,694] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.DataPlaneAcceptor)
[2023-08-07 13:06:13,958] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(SECURED) (kafka.network.SocketServer)
[2023-08-07 13:06:13,959] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2023-08-07 13:06:13,959] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
[2023-08-07 13:06:13,964] INFO [SocketServer listenerType=BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(INTERBROKER) (kafka.network.SocketServer)
[2023-08-07 13:06:13,969] INFO [BrokerToControllerChannelManager broker=0 name=alterPartition]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-08-07 13:06:13,969] INFO [BrokerToControllerChannelManager broker=0 name=alterPartition]: Recorded new controller, from now on will use node kafka:9094 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-08-07 13:06:13,979] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:13,980] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:13,981] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:13,983] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:13,998] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:13,999] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:14,043] INFO [RaftManager nodeId=0] Registered the listener kafka.server.metadata.BrokerMetadataListener@137374710 (org.apache.kafka.raft.KafkaRaftClient)
[2023-08-07 13:06:14,043] INFO [BrokerToControllerChannelManager broker=0 name=heartbeat]: Starting (kafka.server.BrokerToControllerRequestThread)
[2023-08-07 13:06:14,043] INFO [BrokerToControllerChannelManager broker=0 name=heartbeat]: Recorded new controller, from now on will use node kafka:9094 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2023-08-07 13:06:14,047] INFO [BrokerLifecycleManager id=0] Incarnation 4YQTZbjuQ0ue7XMgG44jgA of broker 0 in cluster C6h8M2wWTsumdM_uEqfoTw is now STARTING. (kafka.server.BrokerLifecycleManager)
[2023-08-07 13:06:14,068] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2023-08-07 13:06:14,073] INFO [BrokerServer id=0] Waiting for broker metadata to catch up. (kafka.server.BrokerServer)
[2023-08-07 13:06:14,125] INFO [Controller 0] Registered new broker: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=4YQTZbjuQ0ue7XMgG44jgA, brokerEpoch=3, endPoints=[BrokerEndpoint(name='SECURED', host='kafka', port=9093, securityProtocol=1), BrokerEndpoint(name='INTERBROKER', host='localhost', port=9092, securityProtocol=0)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=8)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
[2023-08-07 13:06:14,155] INFO [BrokerLifecycleManager id=0] Successfully registered broker 0 with broker epoch 3 (kafka.server.BrokerLifecycleManager)
[2023-08-07 13:06:14,160] INFO [BrokerLifecycleManager id=0] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager)
[2023-08-07 13:06:14,163] INFO [BrokerMetadataListener id=0] Starting to publish metadata events at offset 3. (kafka.server.metadata.BrokerMetadataListener)
[2023-08-07 13:06:14,166] INFO [BrokerMetadataPublisher id=0] Publishing initial metadata at offset OffsetAndEpoch(offset=3, epoch=1) with metadata.version 3.4-IV0. (kafka.server.metadata.BrokerMetadataPublisher)
[2023-08-07 13:06:14,167] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
[2023-08-07 13:06:14,170] INFO Attempting recovery for all logs in /bitnami/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
[2023-08-07 13:06:14,174] INFO [BrokerLifecycleManager id=0] The broker is in RECOVERY. (kafka.server.BrokerLifecycleManager)
[2023-08-07 13:06:14,185] INFO Loaded 0 logs in 17ms. (kafka.log.LogManager)
[2023-08-07 13:06:14,185] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2023-08-07 13:06:14,187] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2023-08-07 13:06:14,200] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2023-08-07 13:06:14,263] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2023-08-07 13:06:14,267] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2023-08-07 13:06:14,268] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2023-08-07 13:06:14,270] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2023-08-07 13:06:14,271] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2023-08-07 13:06:14,273] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2023-08-07 13:06:14,273] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2023-08-07 13:06:14,274] INFO [BrokerMetadataPublisher id=0] Updating metadata.version to 8 at offset OffsetAndEpoch(offset=3, epoch=1). (kafka.server.metadata.BrokerMetadataPublisher)
[2023-08-07 13:06:14,282] INFO KafkaConfig values: 
        advertised.listeners = SECURED://kafka:9093,INTERBROKER://localhost:9092
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name = 
        auto.create.topics.enable = true
        auto.include.jmx.reporter = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.heartbeat.interval.ms = 2000
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        broker.session.timeout.ms = 9000
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.listener.names = CONTROLLER
        controller.quorum.append.linger.ms = 25
        controller.quorum.election.backoff.max.ms = 1000
        controller.quorum.election.timeout.ms = 1000
        controller.quorum.fetch.timeout.ms = 2000
        controller.quorum.request.timeout.ms = 2000
        controller.quorum.retry.backoff.ms = 20
        controller.quorum.voters = [0@kafka:9094]
        controller.quota.window.num = 11
        controller.quota.window.size.seconds = 1
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delegation.token.secret.key = null
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        early.start.listeners = null
        fetch.max.bytes = 57671680
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 3000
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        initial.broker.registration.timeout.ms = 60000
        inter.broker.listener.name = INTERBROKER
        inter.broker.protocol.version = 3.4-IV0
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = CONTROLLER:PLAINTEXT,SECURED:SSL,INTERBROKER:PLAINTEXT
        listeners = SECURED://:9093,CONTROLLER://:9094,INTERBROKER://:9092
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /bitnami/kafka/data
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 3.0-IV1
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connection.creation.rate = 2147483647
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides = 
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1048588
        metadata.log.dir = null
        metadata.log.max.record.bytes.between.snapshots = 20971520
        metadata.log.max.snapshot.interval.ms = 3600000
        metadata.log.segment.bytes = 1073741824
        metadata.log.segment.min.bytes = 8388608
        metadata.log.segment.ms = 604800000
        metadata.max.idle.interval.ms = 500
        metadata.max.retention.bytes = 104857600
        metadata.max.retention.ms = 604800000
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        node.id = 0
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 1
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
        process.roles = [controller, broker]
        producer.id.expiration.check.interval.ms = 600000
        producer.id.expiration.ms = 86400000
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.window.num = 11
        quota.window.size.seconds = 1
        remote.log.index.file.cache.total.size.bytes = 1073741824
        remote.log.manager.task.interval.ms = 30000
        remote.log.manager.task.retry.backoff.max.ms = 30000
        remote.log.manager.task.retry.backoff.ms = 500
        remote.log.manager.task.retry.jitter = 0.2
        remote.log.manager.thread.pool.size = 10
        remote.log.metadata.manager.class.name = null
        remote.log.metadata.manager.class.path = null
        remote.log.metadata.manager.impl.prefix = null
        remote.log.metadata.manager.listener.name = null
        remote.log.reader.max.pending.tasks = 100
        remote.log.reader.threads = 10
        remote.log.storage.manager.class.name = null
        remote.log.storage.manager.class.path = null
        remote.log.storage.manager.impl.prefix = null
        remote.log.storage.system.enable = false
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 30000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.connect.timeout.ms = null
        sasl.login.read.timeout.ms = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.login.retry.backoff.max.ms = 10000
        sasl.login.retry.backoff.ms = 100
        sasl.mechanism.controller.protocol = 
        sasl.mechanism.inter.broker.protocol = 
        sasl.oauthbearer.clock.skew.seconds = 30
        sasl.oauthbearer.expected.audience = null
        sasl.oauthbearer.expected.issuer = null
        sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
        sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
        sasl.oauthbearer.jwks.endpoint.url = null
        sasl.oauthbearer.scope.claim.name = scope
        sasl.oauthbearer.sub.claim.name = sub
        sasl.oauthbearer.token.endpoint.url = null
        sasl.server.callback.handler.class = null
        sasl.server.max.receive.size = 524288
        security.inter.broker.protocol = PLAINTEXT
        security.providers = null
        socket.connection.setup.timeout.max.ms = 30000
        socket.connection.setup.timeout.ms = 10000
        socket.listen.backlog.size = 50
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = none
        ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
        ssl.endpoint.identification.algorithm = 
        ssl.engine.factory.class = null
        ssl.key.password = [hidden]
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.certificate.chain = null
        ssl.keystore.key = null
        ssl.keystore.location = /opt/bitnami/kafka/config/certs/kafka.keystore.jks
        ssl.keystore.password = [hidden]
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLSv1.3
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.certificates = null
        ssl.truststore.location = /opt/bitnami/kafka/config/certs/kafka.truststore.jks
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 1
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 1
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.clientCnxnSocket = null
        zookeeper.connect = 
        zookeeper.connection.timeout.ms = null
        zookeeper.max.in.flight.requests = 10
        zookeeper.metadata.migration.enable = false
        zookeeper.session.timeout.ms = 18000
        zookeeper.set.acl = false
        zookeeper.ssl.cipher.suites = null
        zookeeper.ssl.client.enable = false
        zookeeper.ssl.crl.enable = false
        zookeeper.ssl.enabled.protocols = null
        zookeeper.ssl.endpoint.identification.algorithm = HTTPS
        zookeeper.ssl.keystore.location = null
        zookeeper.ssl.keystore.password = null
        zookeeper.ssl.keystore.type = null
        zookeeper.ssl.ocsp.enable = false
        zookeeper.ssl.protocol = TLSv1.2
        zookeeper.ssl.truststore.location = null
        zookeeper.ssl.truststore.password = null
        zookeeper.ssl.truststore.type = null
 (kafka.server.KafkaConfig)
[2023-08-07 13:06:14,288] INFO [SocketServer listenerType=BROKER, nodeId=0] Enabling request processing. (kafka.network.SocketServer)
[2023-08-07 13:06:14,294] INFO [Controller 0] The request from broker 0 to unfence has been granted because it has caught up with the offset of it's register broker record 3. (org.apache.kafka.controller.BrokerHeartbeatManager)
[2023-08-07 13:06:14,327] INFO [BrokerLifecycleManager id=0] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
[2023-08-07 13:06:14,327] INFO [BrokerServer id=0] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2023-08-07 13:06:14,328] INFO Kafka version: 3.4.1 (org.apache.kafka.common.utils.AppInfoParser)
[2023-08-07 13:06:14,328] INFO Kafka commitId: 8a516edc2755df89 (org.apache.kafka.common.utils.AppInfoParser)
[2023-08-07 13:06:14,328] INFO Kafka startTimeMs: 1691413574327 (org.apache.kafka.common.utils.AppInfoParser)
[2023-08-07 13:06:14,329] INFO [KafkaRaftServer nodeId=0] Kafka Server started (kafka.server.KafkaRaftServer)

@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Aug 23, 2023
@zapho
Author

zapho commented Aug 23, 2023

Thanks, @migruiz4. This was indeed a certificate generation issue.
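
In case it helps anyone hitting the same handshake failure: an SSL listener needs a keystore that contains an actual private key entry, signed by a CA that the clients trust. Below is a minimal sketch of one way such a keystore/truststore pair could be generated with openssl and keytool; all file names, passwords, hostnames and validity periods are placeholders, not the exact values used in this issue.

# Hypothetical example: self-signed CA, broker key pair signed by that CA,
# and a matching truststore. Adjust names, passwords and validity as needed.
PASS=changeit

# 1. Create a self-signed CA (key + certificate).
openssl req -new -x509 -nodes -days 365 -subj "/CN=kafka-ca" \
  -keyout ca.key -out ca.crt

# 2. Create the broker key pair inside the keystore. The CN should match the
#    advertised hostname; add SANs if clients perform hostname verification.
keytool -genkeypair -alias kafka-broker -keyalg RSA -keysize 2048 \
  -dname "CN=kafka" -validity 365 \
  -keystore kafka.keystore.jks -storetype JKS \
  -storepass "$PASS" -keypass "$PASS"

# 3. Have the CA sign the broker certificate.
keytool -certreq -alias kafka-broker -file broker.csr \
  -keystore kafka.keystore.jks -storepass "$PASS"
openssl x509 -req -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in broker.csr -out broker.crt -days 365

# 4. Import the CA certificate and then the signed certificate back into the keystore.
keytool -importcert -alias CARoot -file ca.crt \
  -keystore kafka.keystore.jks -storepass "$PASS" -noprompt
keytool -importcert -alias kafka-broker -file broker.crt \
  -keystore kafka.keystore.jks -storepass "$PASS" -noprompt

# 5. Build the truststore from the CA certificate (also used to verify
#    client certificates when mutual TLS is required).
keytool -importcert -alias CARoot -file ca.crt \
  -keystore kafka.truststore.jks -storetype JKS \
  -storepass "$PASS" -noprompt

# 6. Sanity check: the broker alias should be listed as a PrivateKeyEntry.
keytool -list -keystore kafka.keystore.jks -storepass "$PASS"

If the final listing shows only a trustedCertEntry for the broker alias, the broker has no key to present during the TLS handshake and clients will fail with an SSL handshake error. Client keystores for mutual TLS can be generated the same way, signed by the same CA.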

@wedreamer

> Thanks, @migruiz4. This was indeed a certificate generation issue.

But I didn't have any problems using kafkajs and kafka-ui.

@github-actions github-actions bot added triage Triage is needed and removed solved labels Oct 25, 2023
@migruiz4
Member

migruiz4 commented Oct 25, 2023

Hi @wedreamer,

Could you please provide more details? I'm sorry, but I don't understand your comment or how it relates to this issue.

If you are experiencing any issues related to the bitnami/kafka image, please open a new issue describing your case and include all the details there so we can give you better assistance.
