Leaky Event-Based Microservice Integration
When Events Are Passive-Aggressive Commands
A few years ago, I worked on a microservices system for a banking client. One of the services I inherited was a shared 'Communication' microservice responsible for sending e-mails.
At the time, event-driven architecture and choreography were in fashion, so they had been applied very broadly across the estate, including integration with this service.
At first glance, that sounds reasonable enough.
Event-based integration can reduce coupling, which is generally a good thing. So the thinking was: if events reduce coupling, why not use them everywhere?
The main rationale seemed to be to avoid temporal and behavioural coupling.
By temporal coupling, I mean one service needing another service to be available at the time a task is carried out. By behavioural coupling, I mean a service needing to know which other service has the capability to perform the action it wants.
On that basis, the design leaned heavily towards events for all service-to-service interactions, including cases where the real intent was to send an e-mail.
The problem: these were not really events
The issue is that in these scenarios, there is usually a clear intention from the producer: send an e-mail.
There is also a clear expectation: when this message is emitted, someone should act on it, and the e-mail is sent.
That is not really the same thing as publishing a business fact for interested parties to react to in their own way. It is much closer to what Martin Fowler calls a passive-aggressive command:
“...an event is used as a passive-aggressive command. This happens when the source system expects the recipient to carry out an action, and ought to use a command message to show that intention, but styles the message as an event instead.”
That distinction matters.
If the sender’s real intent is to ask for an action to be carried out, then hiding that intent behind an event does not eliminate coupling. It just obscures it.
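To make the distinction concrete, here is a sketch of the same interaction expressed three ways. The message and field names are hypothetical, but the shapes illustrate the point: the first message is styled as an event while really demanding an action, the second names its intent honestly, and the third is a genuine business fact.

```typescript
// A disguised command: styled as an event, but it exists purely so that
// the communication service will send an e-mail. (Hypothetical names.)
const disguisedCommand = {
  type: "CustomerWelcomeEmailRequired",
  customerId: "c-123",
  email: "jane@example.com",
  templateData: { firstName: "Jane" },
};

// An honest command: names the action it is asking for.
const command = {
  type: "SendEmail",
  to: "jane@example.com",
  template: "welcome",
  data: { firstName: "Jane" },
};

// A genuine event: a business fact, shaped by the publishing domain,
// with no opinion about who reacts to it or how.
const event = {
  type: "CustomerRegistered",
  customerId: "c-123",
  occurredAt: "2024-01-15T10:00:00Z",
};
```

Notice that the disguised command and the honest command carry essentially the same data; only the framing differs. That framing is exactly what determines where responsibility ends up.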
You remove one kind of coupling and introduce another
This is where blanket thinking around event-driven integration tends to fall down.
Yes, you may reduce temporal coupling because the sender no longer needs the communication service to be available synchronously.
Yes, you may also reduce explicit behavioural coupling because the sender is no longer directly calling a service called Communication.
But the coupling has not disappeared. It has moved.
Now the shared communication service must understand all the messages across the wider system that imply “an e-mail should be sent”. It must know which events matter, what they mean, when they should result in an e-mail, and often how to interpret enough data to construct the message.
That is not low coupling in any meaningful design sense. It is just a different and much leakier form of coupling.
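A sketch of what this looks like in practice, using hypothetical event names: the shared communication service ends up holding a mapping from every upstream business event to a notification decision, and it must be updated whenever any of those domains change.

```typescript
// Hypothetical upstream events the communication service has been made
// responsible for understanding. Each one belongs to a different domain.
type UpstreamEvent =
  | { type: "LoanApproved"; customerEmail: string; amount: number }
  | { type: "PaymentFailed"; customerEmail: string; reason: string }
  | { type: "AddressChanged"; customerEmail: string };

// The service now encodes cross-domain knowledge: which business
// situations warrant an e-mail, and which template applies.
function emailFor(event: UpstreamEvent): { to: string; template: string } | null {
  switch (event.type) {
    case "LoanApproved":
      return { to: event.customerEmail, template: "loan-approved" };
    case "PaymentFailed":
      return { to: event.customerEmail, template: "payment-failed" };
    default:
      // AddressChanged sends nothing -- but the service still has to
      // know about the event in order to know to ignore it.
      return null;
  }
}
```

Every new e-mail-worthy event across the estate grows this mapping, which is the centralisation of change described below.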
Why this becomes a leaky abstraction
Within the boundary of a single service, you can sometimes tolerate this kind of thing. An internal event may trigger some internal behaviour, and the implementation coupling stays inside the service boundary where it belongs.
Across service boundaries, it is different.
Once a shared communication service is made responsible for reacting to many different upstream events, it starts absorbing knowledge that does not belong to it. It must understand the semantics of events originating in many different business areas. Over time, it becomes a central place where unrelated domain knowledge accumulates.
That is where the abstraction starts to leak.
The communication service is no longer just responsible for communication. It becomes responsible for knowing which business situations across the estate should result in communication.
That is a very different responsibility.
And because it is shared, every new e-mail-triggering scenario now tends to require a change in that service. The same happens when an existing e-mail is changed, suppressed, or moved behind a feature toggle. So instead of isolating change, the design centralises it.
It also tends to bloat events
There is a second problem that often follows.
If the communication service is expected to send the e-mail purely from published events, then those events often start carrying more and more data “for convenience”. Names, e-mail addresses, template data, links, amounts, dates, statuses, and whatever else might be needed downstream all get pushed into the message.
So what should have been a clean expression of a business fact becomes a bloated integration payload designed to satisfy a particular downstream consumer.
At that point, the event is no longer being shaped by the needs of the publishing service or the business domain it belongs to. It is being shaped by the implementation needs of a shared technical capability elsewhere.
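As an illustration (with hypothetical field names), compare an event that has been bloated to serve the communication service against one shaped purely by the publishing domain:

```typescript
// Bloated: shaped by the downstream communication service's needs.
// Most of these fields exist only so an e-mail template can be rendered.
const bloatedEvent = {
  type: "AccountOpened",
  accountId: "a-42",
  customerName: "Jane Doe",                     // template data
  customerEmail: "jane@example.com",            // delivery address
  welcomeLink: "https://example.com/welcome",   // template data
  openingBalance: 100,
  currency: "GBP",
};

// Clean: a business fact, carrying only what the publishing domain owns.
const cleanEvent = {
  type: "AccountOpened",
  accountId: "a-42",
  customerId: "c-123",
  occurredAt: "2024-01-15T10:00:00Z",
};
```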
That is another sign that the boundary is wrong.
The real design question is one of intent
The key question is not “should services communicate via events or commands?”
The real question is: what is the intent of this interaction?
If the purpose is to publish a business fact that other services may legitimately react to in their own way, then an event is a good fit.
If the purpose is to ask for an action to be carried out, then pretending that the request is an event usually creates confusion and misplaced responsibility.
That does not automatically mean you must use a synchronous call. A command can still be sent asynchronously. The point is not sync versus async. The point is being honest about intent.
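A minimal sketch of that point, assuming a generic queue abstraction (the `enqueue` function and in-memory queue below are stand-ins, not a real broker client): the command is explicit about its intent, yet the sender still does not need the communication service to be available at the moment of sending.

```typescript
// An explicit command message: the intent (send an e-mail) is named.
type SendEmailCommand = {
  type: "SendEmail";
  to: string;
  template: string;
  data: Record<string, string>;
};

// Stand-in for a message broker; in production this would publish to a
// queue owned by the communication service, consumed asynchronously.
const queue: SendEmailCommand[] = [];

function enqueue(command: SendEmailCommand): void {
  queue.push(command);
}

// The sender expresses intent honestly and returns immediately.
enqueue({
  type: "SendEmail",
  to: "jane@example.com",
  template: "welcome",
  data: { firstName: "Jane" },
});
```

The temporal decoupling comes from the queue, not from disguising the message as an event.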
Too many discussions flatten this into a technology choice when it is really a modelling choice.
Cohesion matters as much as coupling
One thing I would say more strongly now than I would have back then is this: reducing technical coupling is not enough. You also need to understand coupling from the perspective of the system as a whole (something I will cover in a future post), and regardless, you need to preserve cohesion.
A service should have a clear responsibility and change for reasons that belong together.
A shared communication service that must continuously change because upstream domains introduce new e-mail-worthy events is not especially cohesive. It becomes a grab bag of notification rules and cross-domain knowledge.
That is the opposite of what you want from a well-bounded service.
So even if an event-driven approach appears loosely coupled on the surface, you still need to ask whether it is creating a cohesive design, or whether it is just moving complexity and responsibility into the wrong place.
Conclusion
Event-driven integration is useful, but it is not a universal answer.
Used well, it can help decouple services and enable autonomous reactions to business facts. Used indiscriminately, it can obscure intent, create passive-aggressive commands, bloat integration messages, and turn shared services into leaky central dependencies.
The right choice depends on the nature of the interaction.
Be clear about the intent. Be honest about the trade-offs. Optimising for one dimension of coupling while ignoring cohesion and responsibility usually leads to worse design, not better.
Or to put it more simply: not every interaction should be modelled as an event just because events are available.
In the post below, I talk about how I redesigned this integration:
