Revision guard store horizontal scaling #82
Source: Readme.
From the code you just posted, the comment about queueTimeout and queueTimeoutMaxLoops says the events are kept in an in-memory queue. I'm wondering what the limits of this 'in-memory queue' are.
Your original question was about different db implementations and horizontal scaling. In order to achieve true horizontal scaling, you'll need to use some sort of db (most probably redis), which the quoted section explains how to do. I am not sure what you mean by the 'limits' of the in-memory implementation, but I wouldn't suggest it for production use, mainly because it has no persistence whatsoever (hence the name), which could lead to unwanted results on app restart.
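For context, the db type for the guard store is chosen in the revisionGuard section of the denormalizer configuration, alongside the queue timeouts mentioned above. A hedged sketch follows; queueTimeout and queueTimeoutMaxLoops are the options named in this thread, while the remaining field names and values are assumptions based on typical setups, so verify them against the readme of your version:

```javascript
// Hypothetical configuration sketch for cqrs-eventdenormalizer.
// queueTimeout / queueTimeoutMaxLoops are the options discussed above;
// the redis fields are assumptions, check the readme for exact names.
var denormalizer = require('cqrs-eventdenormalizer')({
  denormalizerPath: '/path/to/viewBuilders',
  revisionGuard: {
    queueTimeout: 1000,        // ms to wait for a missing event to arrive
    queueTimeoutMaxLoops: 3,   // how many times to repeat that wait
    type: 'redis',             // a shared store instead of the in-memory one
    host: 'localhost',
    port: 6379
  }
});
```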
Hi @nanov, I'll chime in. We use a db for the revision guard store (MongoDB in our case). We ran a few tests to get a better understanding of the behavior of the guard store. We noticed that when we ran a particular series of events for a specific aggregate, the events did not show up in the MongoDB collections. We therefore assumed that those events are stored in memory (a combination of misunderstanding the docs and our observations). And of course I was a bit concerned about the in-memory situation. Do you know if what we actually observed is true? Or do you know why we did not see those events stored in the MongoDB collections? I'm going to retry those cases anyway to see if we made a mistake (most likely).
It's hard to tell without an actual test case. The guard store does not store events, but rather revisions (of aggregates). When a missing event is detected (based on revisionNumber), the guard waits (while holding the newer event in memory) for a preconfigured amount of time for the missing events to arrive. If they arrive, they are applied (in order) and then the held event is applied as well. If they don't arrive, a missingEvent callback is fired, which allows you to fetch and apply the missing events yourself.
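The mechanism described above can be sketched in a few lines. This is not the library's actual code, just a minimal illustration of the idea: apply events in revision order, park out-of-order events in memory, and fire a missing-event handler if the gap is never filled (names like `createRevisionGuard` are invented for the sketch):

```javascript
// Minimal sketch of a revision guard: events must be applied in revision
// order; out-of-order events are parked in memory, and a missing-event
// handler fires if the gap is not filled within the timeout.
function createRevisionGuard(onMissing, queueTimeoutMs = 1000) {
  let expected = 1;          // next revision we are allowed to apply
  const parked = new Map();  // revision -> { event, timer }
  const applied = [];        // events actually applied, in order

  function apply(event) {
    applied.push(event);
    expected = event.revision + 1;
    // a parked successor may now be applicable
    const next = parked.get(expected);
    if (next) {
      clearTimeout(next.timer);
      parked.delete(expected);
      apply(next.event);
    }
  }

  function handle(event) {
    if (event.revision === expected) {
      apply(event);
    } else if (event.revision > expected) {
      // gap detected: park the event and wait for the missing one(s)
      const timer = setTimeout(() => onMissing(expected, event), queueTimeoutMs);
      parked.set(event.revision, { event, timer });
    } // revisions below `expected` are duplicates and are ignored
  }

  return { handle, applied };
}
```

When the gap-filling event arrives in time, the parked event is applied right after it, which matches the in-order replay described above.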
Ok, thanks for the explanation. That matches our observed behavior. Out of curiosity, though: how do you scale the denormalizers horizontally? Since some events might be held in memory, state is preserved in a non-shared area, so you can't plug in another set of denormalizers in parallel. Would it be possible, and would it make sense, to store those temporary events in the same db (where, for instance, LAST_SEEN_EVENT is stored)?
Well, as events need to be processed in order (that is what the revision guard store ensures), horizontal scaling of a single denormalizer instance is not trivial. Running the exact same set in parallel won't do it. You could, though, run two (different) sets, in the same (clustered) process, to optimize resource usage. One way to achieve this is with the right queue management on the message-bus side: assign a queue to each (different) set (each with its own guard store); then you can run those in parallel and still process events in order. Since missing-event handling happens only on the current node, the in-memory implementation shouldn't be a problem (i.e. the next event is taken from the queue only when the current one is processed, so the guard store should have no waiting time, as no new events are received until the current one is marked as handled).
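The queue-assignment idea above can be sketched as a simple partitioning rule. This is a hedged illustration, not part of the library: route each event to a fixed queue based on its aggregate id, so every aggregate's events land on the same consumer (and are processed in order there), while different queues run in parallel (`queueFor` and `partition` are invented names for the sketch):

```javascript
// Stable hash: the same aggregate id always maps to the same queue,
// so a single consumer sees all of that aggregate's events in order.
function queueFor(aggregateId, queueCount) {
  let h = 0;
  for (const ch of String(aggregateId)) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % queueCount;
}

// Distribute a stream of events across queues; each queue preserves the
// original per-aggregate order, and queues can be consumed in parallel.
function partition(events, queueCount) {
  const queues = Array.from({ length: queueCount }, () => []);
  for (const evt of events) {
    queues[queueFor(evt.aggregateId, queueCount)].push(evt);
  }
  return queues;
}
```

In practice the same assignment would be done by the message bus (e.g. a consistent-hash routing scheme), with one denormalizer set, and one guard store, per queue.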
Is it possible to change the db for the revision guard store? I see it saves the events in memory, but I have some concerns about horizontal scaling. Are there limits? What happens if I'm missing a huge amount of events?