Architecting for Change Part IV: How Accidental Complexity Gets Into a System
Accidental complexity rarely announces itself.
It usually arrives as drag. Changes that should be straightforward take longer than expected, teams need more coordination to deliver less, and confidence in change slowly drops. Workarounds begin to appear, dependencies multiply, and the system becomes harder to understand, evolve, and trust.
That is usually when people say the architecture has become “complex”. But by then, the problem has often been building for a while.
As I said in the previous post in this series, essential complexity is the complexity inherent in the domain. Accidental complexity is everything else. Some of it comes from the world we work in: tools, platforms, frameworks, and runtime environments. That kind can often be reduced, but not completely eliminated.
The more interesting kind is the complexity we introduce ourselves. It comes from poor boundaries, weak ownership, muddled decisions, extra dependencies, and process overhead masquerading as control. That is self-inflicted complexity, and once it gets into a system, it rarely stays contained. It spreads.
Adding complexity is easy. It enters through decisions that seem reasonable at the time. Removing it is much harder.
Martin Reeves puts it well:
“Creating and reducing complexity may sound like perfect opposites. But in fact fundamental asymmetries exist between the two.”
Accidental complexity enters locally, accumulates systemically, and is often reinforced organisationally.
How it gets in
Through small local decisions
A lot of accidental complexity enters through small decisions that look not just harmless but sensible in isolation.
A synchronous call is added across a boundary because it is convenient today. Shared access to data feels quicker than modelling the integration properly. Another abstraction, another framework, or another “temporary” workaround is introduced because it solves the immediate problem. None of these decisions looks especially dramatic on its own.
That is part of the problem. Each decision is easy to justify locally because the cost is small in that moment and the benefit is immediate. But the costs do not stay local. They accumulate across the system. What looked like a quick shortcut becomes another dependency to coordinate around, another interaction to understand, and another reason why change becomes slower somewhere else.
Together, these decisions change the shape of the system. They increase coupling, blur boundaries, and raise the amount of coordination needed to get even ordinary work done. Over time, teams stop just changing software and start negotiating their way through it.
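To make the shortcut concrete, here is a minimal, purely illustrative sketch of the "convenient today" decision described above: one team's code reaching straight into another team's data, contrasted with depending on an explicit contract. All names (Billing, Orders, the table, the client) are hypothetical, not taken from any real system.

```python
# The shortcut: Billing reads the Orders team's private table directly.
# Convenient today; a hidden dependency tomorrow. Any change to the
# "orders" schema silently breaks this code.
def billing_total_shortcut(db, customer_id):
    rows = db.execute(
        "SELECT amount FROM orders WHERE customer_id = ?", (customer_id,)
    )
    return sum(amount for (amount,) in rows)


# The explicit alternative: Billing depends on a published contract,
# not on the Orders context's internals.
class OrdersClient:
    """Thin client for the Orders context's public API (illustrative)."""

    def __init__(self, fetch):
        self._fetch = fetch  # injected transport, e.g. an HTTP call

    def order_amounts(self, customer_id):
        return self._fetch(f"/orders/{customer_id}/amounts")


def billing_total(orders: OrdersClient, customer_id):
    # Billing only knows the contract, never the Orders schema.
    return sum(orders.order_amounts(customer_id))
```

The two functions return the same number today. The difference is who has to coordinate when the Orders team changes its internal model.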
The customer rarely sees the dependency graph, the unclear ownership model, or the overloaded team. They see slower change, more defects, less reliability, and a business that cannot respond as quickly as it should.
Through poor boundaries
Poor boundaries are one of the easiest ways for accidental complexity to enter a system.
A boundary is not useful just because it exists technically. Splitting something into smaller parts does not make the system simpler. Sometimes it does the opposite, adding complexity by creating more moving parts that are entangled with one another.
Good boundaries do not remove complexity. They put it where it belongs. They allow teams to own a coherent part of the domain, make decisions locally, and integrate with others through explicit relationships rather than accidental dependencies.
That is why domain boundaries matter. A bounded context is not just a technical partition. It is a boundary around a model, a language, a set of business rules, and the team that can evolve them with confidence. When those boundaries are weak, the model leaks. Teams begin sharing data structures, databases, assumptions, knowledge, and implementation details. The system may still be distributed physically, but it becomes tangled logically.
The result is familiar. A change in one place unexpectedly affects another. Teams need to coordinate on things that should have been local. Integration becomes less about clear contracts and more about excessive coordination... and hope.
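One way to picture the boundary idea is two contexts that each keep their own model of the same real-world thing, translating explicitly where they meet rather than sharing one data structure. This is an illustrative sketch only; the context names and fields are invented for the example.

```python
from dataclasses import dataclass


# Sales context: a "customer" is someone with a credit limit.
@dataclass
class SalesCustomer:
    customer_id: str
    credit_limit: int


# Shipping context: the same person is just a delivery destination.
# Shipping does not know or care about credit limits.
@dataclass
class Consignee:
    customer_id: str
    delivery_address: str


def to_consignee(customer: SalesCustomer, address: str) -> Consignee:
    # An explicit translation at the boundary, so Shipping never
    # depends on Sales' internal model or its future changes.
    return Consignee(customer_id=customer.customer_id, delivery_address=address)
```

If Sales later redesigns its customer model, only the translation function changes. With a shared structure, every context would feel the change.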
Through indecision and excessive optionality
Sometimes keeping options open is sensible. When uncertainty is high and the number of viable options is small, deferring a decision can be the right move. But beyond that, optionality starts to turn into muddle.
Gregor Hohpe captured that in Gregor’s Law:
“Excessive complexity is nature’s punishment for organisations that are unable to make decisions.”
I have seen systems built that way. Huge codebases with almost endless configuration. In theory, they are flexible. In practice, they are confusing. Adopters with simple needs see them as overkill, while the ones with genuinely complex needs still find them time-consuming to configure, confusing, and risky to change. That is not flexibility. It is complexity with very little return.
Through rising cognitive load
Accidental complexity also grows when cognitive load rises too far.
Every service has an upper bound on how much its team can comfortably own. That upper bound is not just technical. It is organisational too. It is the point at which the owning team can no longer hold enough of the service, its dependencies, its failure modes, and its change surface in mind to evolve it comfortably.
Once that point is reached, the system may still function, but change becomes slower, riskier, and more tiring. People start avoiding the parts they do not fully understand. They become more cautious in some places and more careless in others. They copy what already exists because it feels safer than rethinking it, and they introduce local fixes because unpicking the real problem feels too expensive. The service starts becoming harder to change, not because the domain suddenly became more complex, but because the team can no longer navigate the shape of what has been built with enough confidence.
That is when accidental complexity begins to compound. Workarounds increase, comprehensibility suffers, duplication grows, and lead times get worse. Complexity then becomes harder to manage because the team is no longer dealing with isolated design problems. It is dealing with a system whose shape has become difficult to reason about.
Through misaligned ownership and ignored Conway’s Law
When technical boundaries do not align with team boundaries, complexity gets pushed into coordination, hand-offs, ambiguity, and delay. This is not just poor ownership. It is Conway’s Law being ignored.
The organisation will shape the architecture whether we acknowledge it or not. You do not get to opt out of that. If communication patterns, ownership boundaries, and team structures cut across the design, the software will reflect it. What looked like a clean technical boundary on a diagram becomes an awkward social boundary in practice.
That is why ignoring Conway’s Law is so costly. The problem is not simply that people need to talk more. It is that the architecture starts working against the way the organisation actually communicates and operates. Interactions that looked straightforward on paper become harder in reality because the teams involved are not close enough, aligned enough, or empowered enough to keep those interactions simple.
Sometimes the effect is even more explicit. Organisational politics, personal friction, or local management incentives can lead to “Separate Ways” integrations, not because that is the best design choice, but because the people involved do not want to collaborate. The architecture then records the social reality of the organisation, whether intentionally or not.
This shows up in organisations driven only by short-term feature delivery or project-based funding models, which I will come to next, but it also appears in more traditional structures. Teams organised around functional silos, such as UI, backend, and database, or around SDLC phases such as analysis, design, development, testing, and deployment, create their own version of the same problem. Handoffs, queues, and delays become normal, and once that happens, teams are pushed towards local optimisation. Each group tries to make its own part easier, faster, or safer at the expense of the overall system.
That is how shortcuts and workarounds become rational locally while still making the whole system worse. A backend team exposes data in the easiest way for them. A UI team works around a slow dependency. A database team protects itself with more processes. A separate testing function adds another gate because quality is being discovered too late in the process. Each decision may make sense from where that team is standing, but together they create more accidental complexity.
The architecture then stops being something the organisation can work with and becomes something it has to fight.
Through short-term or misaligned incentives
This is often worse in organisations driven by short-term delivery structures, whether that takes the form of feature-throughput pressure or project-based work. It is not just that shared capabilities, foundational services, and cross-cutting concerns suffer. The whole system does.
The reason is simple. Teams are often incentivised to complete the work in front of them, not to live with the long-term consequences of the decisions made to deliver it. In project-based organisations, that problem is often sharper still, because the team itself may be temporary. The project ends, people move on, and the system is left carrying the cost.
So the system gets shaped by short-term delivery pressure. Dependencies are introduced because they are expedient, boundaries are crossed because it is quicker, and code is changed wherever it needs to be changed because nobody has clear enough stewardship to say no.
In theory, this looks like flexibility and speed. In practice, it often means distributed coupling, blurred ownership, and architectural drift. The dependencies have not gone away. They have just been spread across the system and hidden inside coordination, rework, and rising cognitive load.
Foundational capabilities are often treated like project deliverables, even though they need to evolve towards supported, reliable, reusable capabilities. If they are important enough for many teams to depend on, they are important enough to have clear ownership, investment, and a direction of travel.
That is why short-term delivery models can be such effective generators of accidental complexity. They do not just neglect the shared and cross-cutting parts. They push short-term incentives into the design of the whole system.
Final thoughts
Accidental complexity rarely enters through one large, obvious decision. It enters through many small decisions that make sense locally, especially when ownership is weak, cognitive load is high, Conway’s Law is ignored, and the organisation rewards short-term delivery over long-term stewardship.
That is why accidental complexity is not just a technical design problem. It is a socio-technical one. The wider system matters because systems reinforce behaviour. If the environment rewards shortcuts, hand-offs, local optimisation, and weak ownership, it will not just preserve accidental complexity. It will amplify it.
The natural response in many organisations is to add more control: more review boards, more approval steps, more meetings, and more gates. That response is understandable, but it often makes the problem worse. Heavyweight governance can become another source of accidental complexity in its own right.
That is what I will look at next.