I have seen numerous service integration patterns over the years that, in my opinion, suffer from a myriad of issues that make it increasingly difficult to work with, or release, services independently. These patterns result in services with very high coupling, especially afferent, efferent, behavioural and temporal coupling. Now, integration strategies alone cannot guarantee low coupling; they need to be considered in tandem with service boundary definition, consistency boundaries, cohesiveness, data strategies and so on. I will try to blog about these in a future post.
One integration strategy is to have the service where the change is made emit a thin event that simply indicates something has happened; all interested services are then expected to call back that service to get more information about the event. This architecture, in my opinion, defeats the purpose of going down the eventing route in the first place, as it negates most of the benefits associated with event-based systems. It increases temporal coupling and, by that token, reduces availability. Nor are these shortcomings traded away for stronger consistency: that suffers too, because the notifications are only eventually delivered to the interested parties. Moreover, all the services interested in the event will try to fetch their data from the publisher at the same time, so every event publisher needs to be able to cope with that sudden spike. The cost of implementing that is hard to justify; granted, your system could be one that already requires catering for such spikes - e.g. a retailer that is especially busy during Black Friday and other special occasions - but for most systems this extra cost is purely self-inflicted, paid solely to mitigate the shortcomings of the chosen integration strategy. This approach also scales poorly, as performance is inversely proportional to the number of services interested in the event. To make matters worse, the implications of performance degradation are not isolated to a single service: the event publisher quickly becomes the bottleneck for all the other services trying to fetch the related data.
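To make the call-back cost concrete, here is a minimal sketch of the pattern; the event type, the `fetch_order` lookup and the in-memory store are all hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderUpdated:
    # A thin event: it carries only an identifier, no payload.
    order_id: str

def handle_order_updated(event: OrderUpdated, fetch_order) -> dict:
    # Every interested consumer has to call the publisher back for the
    # details, so the consumer's availability is capped by the publisher's,
    # and each published event triggers a spike of simultaneous fetches.
    return fetch_order(event.order_id)

# In-memory stand-in for a remote call to the publisher service.
publisher_db = {"o-1": {"id": "o-1", "status": "PAID"}}
details = handle_order_updated(OrderUpdated("o-1"),
                               lambda oid: publisher_db[oid])
```

Note that if `fetch_order` were a real network call and the publisher were down, the consumer could do nothing with the event - the temporal coupling described above.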
Event Collaboration

Similarly to the above approach, events are published/broadcast and interested parties react to them. The difference here is that the events carry as much information as possible about the change, to ensure collaborators have everything they need to accomplish their job. Of course, this only covers the data owned by the event publisher; consumers might still need to call other services for related information. This approach improves availability and reduces coupling. However, events now carry considerably more information, making them harder to version and maintain. Additionally, collaboration is not always a simple 1-to-1 relationship. The process is usually slightly more involved, as collaborators could be waiting for several events before they can finally do something. This also means external systems have to keep a record of the events received and their data, and deal with the complications associated with that, such as receiving messages out of order, receiving messages multiple times and so on - not that other integration options don't suffer from these, but they are trickier in this case.
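A fatter event might look like the following sketch (the names and fields are illustrative, not a prescription). Note the consumer keeping a record of processed events so that a duplicate delivery, one of the complications mentioned above, is ignored:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPaid:
    # A fat event: it carries the data consumers need,
    # so no call-back to the publisher is required.
    event_id: str
    order_id: str
    customer_id: str
    amount_pence: int

ledger = []          # this consumer's local state
processed = set()    # record of handled events, for idempotency

def handle_order_paid(event: OrderPaid) -> None:
    # Deliveries can arrive more than once; drop duplicates.
    if event.event_id in processed:
        return
    processed.add(event.event_id)
    ledger.append((event.customer_id, event.amount_pence))

evt = OrderPaid("e-1", "o-1", "c-42", 1999)
handle_order_paid(evt)
handle_order_paid(evt)  # duplicate delivery is a no-op
```

Out-of-order delivery needs similar but more involved bookkeeping, e.g. buffering until all the awaited events have arrived.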
Event Collaboration with UI Composition
This is another integration option. I have discussed it in a previous blog post, but to give a quick recap: UI composition is where all services taking part in a particular use case provide UI components that are deployed as part of the UI, with the sole purpose of sending data back to their respective services. Each service only receives data it owns. This data cannot be shared among services and, to facilitate collaboration, the pieces are all correlated with a client-generated ID. Since the data each service requires to perform its task is already stored locally, the published events only need to carry the correlation IDs, which keeps the external events very thin and lean, and so improves their versionability. This approach also reduces coupling and massively improves availability. However, it requires considerably more design and coordination, as the services are involved in an intricate choreography. I've found this approach works better in greenfield systems, as you start with a clean slate. Introducing it to an already established system is quite taxing, as it involves redrawing service boundaries, redefining contracts, rethinking integration patterns, rewriting UIs, transforming and migrating data, and, last but not least, getting buy-in from the business and other teams/developers.
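As a rough sketch of the idea (the service names and stores are hypothetical): each service's UI component submits only the data that service owns, all keyed by a client-generated correlation ID, so the published event needs to carry nothing but that ID:

```python
import uuid
from dataclasses import dataclass

# Generated client-side and shared by all UI components in the use case.
correlation_id = str(uuid.uuid4())

# Each service stores only the data it owns, keyed by the correlation ID.
shipping_store = {correlation_id: {"address": "1 High Street"}}
billing_store = {correlation_id: {"card_token": "tok_abc"}}

@dataclass(frozen=True)
class OrderPlaced:
    # The event stays thin: the correlation ID is enough, because every
    # service already holds its own slice of the data locally.
    correlation_id: str

def billing_react(event: OrderPlaced) -> dict:
    # Billing works entirely from its local store; no call-back needed.
    return billing_store[event.correlation_id]

charge = billing_react(OrderPlaced(correlation_id))
```

Versioning is easier here because the event contract is just the ID; each service's data shape can evolve independently behind its own boundary.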
Like Event Collaboration, services are encouraged to share data through events. However, in this approach the medium used to disseminate the data must support streaming. Just to be clear, it is not raw data that is being distributed; it is conclusions drawn by the relevant services and published as immutable facts. This immutability is the key to the success of this method of integration, as it gives us two benefits: indefinite cacheability and worry-free dissemination. If the events are never going to change, then interested services are free to store them, cache them, build views from them and so on without ever having to go back to the originator of those events. The performance implications of this cannot be overstated, as there is no longer a need to go over the wire to fetch the data required to perform a task. It pushes the data closer to the consumers which, along with the performance benefits, greatly improves the autonomy/isolation of services, making them truly independent and removing all temporal coupling. To achieve these benefits, though, data immutability must be respected. In other words, consumers of the data streams can only create caches/materialised views of the data, but cannot alter it in any way. I will delve deeper into this topic in a future post, to discuss what this actually means in terms of the dichotomy of data and cover in more detail the issues this approach resolves.
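A minimal sketch of the consumer side (the log and event shape are hypothetical; in practice the log might be a Kafka topic or similar): the consumer folds the immutable facts into a local materialised view, and never mutates or re-fetches the facts themselves:

```python
# An append-only log of immutable facts published by a pricing service.
log = [
    {"type": "PriceSet", "sku": "A1", "pence": 500},
    {"type": "PriceSet", "sku": "B2", "pence": 250},
    {"type": "PriceSet", "sku": "A1", "pence": 450},  # a later fact supersedes
]

def project(events) -> dict:
    # Build a local materialised view of current prices. The facts are
    # never altered, only folded into derived, locally-owned state, so
    # no call over the wire is needed at task time.
    view = {}
    for event in events:
        if event["type"] == "PriceSet":
            view[event["sku"]] = event["pence"]
    return view

prices = project(log)
```

Because the facts are immutable, this view can be rebuilt at any time by replaying the log from the start, which is what makes the caching worry-free.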
There are different integration methods when it comes to event-based microservices, each with its advantages and disadvantages, and choosing the right integration pattern, like any other architecture decision, is about knowing your trade-offs and making an informed decision. You could trade temporal coupling for easier implementation, or trade higher availability for performance. It all depends on the priorities your unique circumstances dictate; there's no such thing as best practices, and one solution does not fit all.