A name for an idea: Dynamic Consistency Boundary
Sometimes the best way to promote your idea is to give it a proper name. That's what @nicolaibaaring suggested I do a few weeks ago.
Interesting concept. 😃 I have had a lot of these thoughts myself. I feel this concept should have a name of its own to make it easier to communicate and define. Maybe something was mentioned in the talk that I didn't get. Do you have anything in mind? 🤔
— Nicolai Baaring (@nicolaibaaring) April 8, 2023
Maybe something in the direction of multi-stream event sourcing? Or dynamic consistency boundaries?
— Robert Baelde (@RobertBaelde) April 9, 2023
Also get a sense of event sourced transaction script with this concept, but that might not be the direction you want to go in.
I want to use this blog post to summarize, in simple terms, what Dynamic Consistency Boundary means in its most generic form.
A Dynamic Consistency Boundary is a form of optimistic locking specific to event-sourced systems.
The idea behind it is pretty simple.
When using event sourcing, any decision can be represented as a function that receives as input an ordered stream of events and produces as output an additional ordered stream of events.
The input stream represents the given: the past events relevant to making the decision. The output stream represents the future: the evolution of the state that the decision causes. Any system that allows concurrent write operations must admit the possibility that, between the loading of the input event stream and the appending of the output event stream, another append takes place, one that could invalidate the decision made.
To guarantee the decision's consistency, the output event stream must be appended if and only if the input event stream at the time of the append is exactly the same as it was at loading time. In other words, only if nothing relevant happened that could influence the decision.
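To make the shape of such a decision function concrete, here is a minimal TypeScript sketch. All names, including the course-subscription example, are illustrative assumptions, not part of any specific library:

```typescript
// Illustrative event shape; real systems carry more metadata.
interface DomainEvent {
  type: string;
  payload: unknown;
}

// A decision is a pure function: past events in, new events out
// (or an error if the decision is not allowed).
type Decision = (pastEvents: DomainEvent[]) => DomainEvent[];

// Hypothetical example: allow a subscription only while the course
// has fewer than 10 students.
const subscribeStudent: Decision = (pastEvents) => {
  const subscriptions = pastEvents.filter(
    (e) => e.type === "StudentSubscribed"
  ).length;
  if (subscriptions >= 10) {
    throw new Error("Course is full");
  }
  return [{ type: "StudentSubscribed", payload: { studentId: "s-42" } }];
};
```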
That's why the component responsible for loading the event stream relevant to the decision is also responsible for verifying that this event stream is still the same at the moment of appending the decision's outcome. In other words, it should perform the following operations (a sketch follows the list):
- dynamically retrieve the event stream relevant to the decision,
- invoke the proper decision function, passing the relevant event stream as input,
- guarantee consistency by conditionally appending the output event stream if and only if the event stream relevant to the decision, at the time of the append, corresponds to the one passed as input to the decision function.
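Here is how such a component could look, reusing the types from the previous sketch. The event store API (`read`, `append`, `Query`, positions) is a hypothetical one, invented purely for illustration:

```typescript
// Criteria selecting the events relevant to a decision,
// e.g. by event type and/or by the entities involved.
interface Query {
  types?: string[];
  tags?: string[];
}

interface ReadResult {
  events: DomainEvent[];
  // Global position of the last event matching the query at read time.
  lastPosition: number;
}

interface EventStore {
  // Dynamic query: read the events matching arbitrary criteria.
  read(query: Query): Promise<ReadResult>;
  // Conditional append: fail if any event matching `query` was
  // appended after `lastPosition` in the meantime.
  append(
    events: DomainEvent[],
    query: Query,
    lastPosition: number
  ): Promise<void>;
}

async function handle(
  store: EventStore,
  query: Query,
  decide: Decision
): Promise<void> {
  // 1. Dynamically retrieve the event stream relevant to the decision.
  const { events, lastPosition } = await store.read(query);
  // 2. Invoke the decision function on the relevant past events.
  const newEvents = decide(events);
  // 3. Conditionally append: succeeds only if the query result is unchanged.
  await store.append(newEvents, query, lastPosition);
}
```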
Concretely, this requires the event store to support two primitives (both visible in the sketch above):
- dynamic query: read an event stream based on arbitrary criteria,
- conditional append: write events only if the query result still matches the expected one.
The immutable, append-only nature of the event store makes the conditional append simple.
Given a query, it is sufficient to verify that the last matching event is still the last: if no new event matching the query has been appended since the read, the whole stream is guaranteed to be unchanged.
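As a sketch of that check, under the assumption (mine, for illustration) that every event in the log carries a monotonically increasing global position:

```typescript
// The append is consistent if no event matching the query was appended
// after the last position seen by the decision. Because the log is
// immutable and append-only, checking the tail is enough: nothing
// before `lastKnownPosition` can have changed.
function isStillConsistent(
  log: { event: DomainEvent; position: number }[],
  matches: (e: DomainEvent) => boolean,
  lastKnownPosition: number
): boolean {
  return !log.some(
    (entry) => entry.position > lastKnownPosition && matches(entry.event)
  );
}
```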
There are many other details that I could provide, but I want to keep the concept as simple as possible and, above all, detached from any specific implementation. If you need more concrete examples, please read the other posts of the "Kill Aggregate" series.