This would enable Kommitted to work with Confluent Cloud. Unfortunately, the API doesn't seem to return the timestamp of when the value `current_offset` was read, so we will have to work on the assumption that it's the wall clock time at which we receive it.
The "Emitter -> Register" design choice still works pretty well here: it will require to introduce a new Emitter that loops over the list of currently known Consumer Groups. That list can be obtained via the consumer_groups module, and then multiple reads will have to be arranged via API.
- The `consumer_groups` module must have its own `ConsumerGroupsRegister` and return that from `::init()`, instead of the `ConsumerGroups` channel receiver.
- This new Register has to be configurable: how long after a consumer group stops being reported by the Kafka API do we keep it around in the data (e.g. if an application has a total failure, we don't want its Consumer Group Lag info to disappear immediately).
- The Lag Register should use the new Register as the source of truth for the set of known groups, instead of relying on the data it contains.
- An entire "emit handling branch" in the Lag Register can be removed: there is no more need to populate its internal structure if the new Register is the source of truth.
These changes are valid and doable even before we start consuming the Confluent API.
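For illustration, a minimal sketch of what the new Register could look like (all names, the eviction strategy, and the absence of locking here are assumptions, not the final design):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Sketch: owns the set of known Consumer Groups, retaining a group for a
/// configurable grace period after the Kafka API stops reporting it.
pub struct ConsumerGroupsRegister {
    /// How long to keep a group after it stops being reported (configurable).
    retention_after_disappearance: Duration,
    /// Group name -> last time the Kafka API reported it.
    last_seen: HashMap<String, Instant>,
}

impl ConsumerGroupsRegister {
    pub fn new(retention_after_disappearance: Duration) -> Self {
        Self {
            retention_after_disappearance,
            last_seen: HashMap::new(),
        }
    }

    /// Called on each poll with the groups currently reported by the Kafka API.
    pub fn update(&mut self, reported_groups: impl IntoIterator<Item = String>) {
        let now = Instant::now();
        for group in reported_groups {
            self.last_seen.insert(group, now);
        }
        // Evict only groups unseen for longer than the grace period, so lag info
        // doesn't disappear the moment an application fails completely.
        let retention = self.retention_after_disappearance;
        self.last_seen.retain(|_, seen| seen.elapsed() <= retention);
    }

    /// Source of truth for the Lag Register: the currently known groups.
    pub fn known_groups(&self) -> Vec<String> {
        self.last_seen.keys().cloned().collect()
    }
}
```

In practice this would presumably sit behind an `Arc` plus a lock, so that `consumer_groups::init()` can hand it to both the Lag Register and the new emitter.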
Once the changes above are done, a new emitter, alternative to the `KonsumerOffsetsDataEmitter`, can be created: this would also consume the new `ConsumerGroupsRegister` and loop over the list of groups. It will:
- list groups
- for each group, query the Confluent Cloud API
- create and emit `OffsetCommit` objects
This emitter would be "slotted in" as an alternative to `KonsumerOffsetsDataEmitter`, based on command line arguments.
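A hedged sketch of that loop, reusing the Register sketch above; `ConsumerLag`, `OffsetCommit`, `fetch_lags`, and the polling cadence are stand-ins for the real Kommitted types and the actual Confluent Cloud HTTP call:

```rust
use std::{future::Future, sync::Arc, time::Duration};

use chrono::{DateTime, Utc};
use tokio::sync::mpsc;

/// Stand-in: one per-partition lag entry parsed from the Confluent Cloud API.
#[derive(Debug)]
pub struct ConsumerLag {
    pub topic_name: String,
    pub partition_id: u32,
    pub current_offset: i64,
}

/// Stand-in for the object the emitter produces.
#[derive(Debug)]
pub struct OffsetCommit {
    pub group: String,
    pub topic: String,
    pub partition: u32,
    pub offset: i64,
    /// The API returns no read timestamp, so we assume wall clock "now".
    pub read_at: DateTime<Utc>,
}

/// Sketch of the alternative emitter: list groups, query the API per group,
/// then create and emit OffsetCommit objects.
pub async fn run_confluent_lag_emitter<F, Fut>(
    register: Arc<ConsumerGroupsRegister>,
    fetch_lags: F, // stands in for a listKafkaConsumerLags call per group
    tx: mpsc::Sender<OffsetCommit>,
    poll_interval: Duration,
) -> anyhow::Result<()>
where
    F: Fn(String) -> Fut,
    Fut: Future<Output = anyhow::Result<Vec<ConsumerLag>>>,
{
    loop {
        for group in register.known_groups() {
            for lag in fetch_lags(group.clone()).await? {
                tx.send(OffsetCommit {
                    group: group.clone(),
                    topic: lag.topic_name,
                    partition: lag.partition_id,
                    offset: lag.current_offset,
                    read_at: Utc::now(),
                })
                .await?;
            }
        }
        tokio::time::sleep(poll_interval).await;
    }
}
```

Slotting it in could then be, e.g., a hypothetical `--lag-source` command line flag deciding at startup which of the two emitters gets spawned.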
API doc:
- `getKafkaConsumerLag`
- `listKafkaConsumerLags`
API path:

- `getKafkaConsumerLag`: `GET /kafka/v3/clusters/{cluster_id}/consumer-groups/{consumer_group_id}/lags/{topic_name}/partitions/{partition_id}` (per the request example below)
- `listKafkaConsumerLags`: `GET /kafka/v3/clusters/{cluster_id}/consumer-groups/{consumer_group_id}/lags`
Request example:

```sh
curl --request GET \
  --url https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/cluster-1/consumer-groups/consumer-group-1/lags/topic-1/partitions/0 \
  --header 'Authorization: Basic REPLACE_BASIC_AUTH'
```
Response example:
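A sketch of the expected response shape, going by the public Confluent REST API v3 docs (values illustrative); note there is no field carrying the timestamp at which `current_offset` was read, hence the wall clock assumption above:

```json
{
  "kind": "KafkaConsumerLag",
  "metadata": {
    "self": "https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/cluster-1/consumer-groups/consumer-group-1/lags/topic-1/partitions/0"
  },
  "cluster_id": "cluster-1",
  "consumer_group_id": "consumer-group-1",
  "topic_name": "topic-1",
  "partition_id": 0,
  "current_offset": 1,
  "log_end_offset": 101,
  "lag": 100,
  "consumer_id": "consumer-1",
  "instance_id": "consumer-instance-1",
  "client_id": "client-1"
}
```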