Architecting for Change - Part II: The Impact of Complexity & Cognitive Load

The last post in this series discussed the implications of not architecting for change. I highlighted several issues that ultimately increase the cost of change. Making minor, seemingly innocuous alterations without considering the broader system has wider repercussions on complexity and cognitive load. This post is a slight detour to examine complexity and cognitive load in more detail.
To effectively architect systems that can adapt and evolve, it's crucial to understand complexity and cognitive load, two core factors influencing how easily teams can respond to change and deliver value.
What is Complexity?
Complexity is not just about things being hard to comprehend and analyse, but also the non-determinism of behaviours and outcomes. The Cynefin framework distinguishes between complicated and complex to further clarify the concept of complexity. Complicated domains fall in the known-unknowns space, where we can identify knowledge gaps and find experts or comprehensive documentation to bridge those gaps. Complex domains reside in the unknown-unknowns, where we can't identify those gaps without experimentation. Cause-and-effect relationships are not apparent in complex domains and can only be discerned retrospectively.

Cognitive Load
High complexity drives up cognitive load – the mental effort required to comprehend and change a system. Dr. Laura Weis defines team cognitive load as “the collective cognitive burden experienced by a group working together,” which, when it exceeds the team’s capacity, leads to overload and impaired effectiveness.
Cognitive load is difficult to measure directly. It is therefore gauged indirectly, through assessments of the underlying symptoms, such as burnout, slow delivery, and high error rates. Those assessments provide actionable insights for addressing cognitive overload.
Cognitive load is often overlooked, but one study estimates that cognitive overload in tech organisations costs over $322 billion per year in lost productivity (itrevolution.com).
Complexity in Software
The majority of software issues stem from the inability to comprehend the system due to its complexity.
"Complexity is the root cause of the vast majority of problems with software today... being able to understand a system is a prerequisite for avoiding all of them, and of course it is this which complexity destroys."
- Out of the Tar Pit, Moseley & Marks, 2006
The effects of unmanaged complexity on software can be far-reaching:
Increased Cognitive Load
Complexity leads to a higher cognitive load for developers, architects, and other stakeholders. As systems grow in size and intricacy, individuals must not only mentally map more components, dependencies, and behaviours, but also understand interactions that have become harder to follow precisely because of that complexity.
When a system is more complex, errors and bugs are more likely to occur: misunderstandings and incorrect assumptions seep into the code, resulting in defects and products that don't align with customer expectations. Delivery also slows, as more time is spent trying to understand how different, tightly coupled parts interact. This complexity also hampers decision-making, as developers struggle to grasp the full impact of changes. Furthermore, onboarding new team members becomes more time-consuming, as the steep learning curve makes it harder for them to understand the system.
Difficulty in Testing
Complex components are tightly coupled and less cohesive, complicating testing because behaviours are harder to comprehend and isolate. Testing is even more challenging in distributed systems, where complexity often manifests as incorrect partitioning of services, creating tighter coupling across them. This can make it impossible to test services in isolation, necessitating complex integrated testing environments that slow feedback, increase coordination overhead, and raise the risk of failures.
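To make the coupling problem concrete, here is a minimal Python sketch (names like `OrderService` and `PricingClient` are hypothetical): a service hard-wired to a remote dependency can only be tested in an integrated environment, whereas injecting the dependency behind a small interface lets a stub stand in, so the service can be tested in isolation.

```python
# A hypothetical order service. Prices are in integer cents to keep the
# arithmetic exact. The key design choice is the seam: OrderService
# receives its collaborator rather than constructing or calling it directly.

class PricingClient:
    """Interface for a (hypothetical) remote pricing service."""
    def price_for(self, sku: str) -> int:
        raise NotImplementedError

class OrderService:
    def __init__(self, pricing: PricingClient):
        self.pricing = pricing  # dependency injected, not hard-wired

    def total(self, skus: list[str]) -> int:
        return sum(self.pricing.price_for(s) for s in skus)

# In tests, a stub replaces the remote call - no integrated environment needed.
class StubPricing(PricingClient):
    def price_for(self, sku: str) -> int:
        return {"apple": 50, "bread": 120}[sku]

service = OrderService(StubPricing())
assert service.total(["apple", "bread"]) == 170
```

The seam does not remove the need for some integration testing, but it moves the bulk of the feedback loop back to fast, isolated tests.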
The complexity of the underlying system is naturally mirrored in integrated test environments, lengthening feedback loops: the time between making a change and getting test results increases, which slows learning and reduces agility. Complex integrated test environments are also prone to configuration drift; differences between testing and production can mask bugs until they reach users.
As complexity grows, tests become brittle and harder to maintain because small changes in one area can break multiple tests elsewhere. This discourages developers from keeping tests up to date, or tempts them to comment tests out, reducing overall test coverage and trust in automated tests.
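As a small illustration of this brittleness (the `Cart` class here is contrived), compare a test pinned to internal structure with one that asserts observable behaviour; the first breaks on any refactor of the internals, the second survives:

```python
class Cart:
    def __init__(self):
        self._items = []  # internal detail; could become a dict tomorrow

    def add(self, sku: str, price: int) -> None:
        self._items.append((sku, price))

    def total(self) -> int:
        return sum(p for _, p in self._items)

cart = Cart()
cart.add("apple", 50)

# Brittle: reaches into the private list - renaming or restructuring
# _items breaks this assertion even though behaviour is unchanged.
assert cart._items == [("apple", 50)]

# Robust: asserts observable behaviour only.
assert cart.total() == 50
```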
Additionally, complex systems often require specialised tooling, test harnesses, or large datasets to run tests meaningfully, adding overhead and creating more points of failure.
Slower Development & Delivery, Higher Maintenance Costs
As systems grow in complexity, they become harder to understand and reason about. Every change requires navigating a larger web of components, dependencies, and interconnections, meaning more time is spent understanding the system than actually making changes or adding features.
Complexity tends to feed on itself—new features and fixes introduce more moving parts, which in turn create new dependencies. This growing tangle has a direct impact on delivery speed. Development slows because changes can’t be made in isolation; each one requires more coordination between teams, and longer integration and testing cycles. Instead of releasing quickly and independently, teams are pulled into synchronised deployments and protracted sign-off processes, increasing lead times and reducing agility.
The effect doesn’t stop at delivery—it drives maintenance costs sharply upward. The harder a system is to reason about, the more prone it becomes to bugs and the more it accrues technical debt. Debugging becomes a slow, painstaking process, requiring engineers to sift through sprawling logs, trace convoluted execution paths, and navigate interdependent components just to locate a root cause.
Reduced Agility and Flow of Value
Complex systems are less adaptable to change. The more complex the system, the harder it is to modify, extend, or replace components without affecting other parts of the system.
Changes in one area can have unintended side effects on other parts of the system, especially when dependencies and interactions are poorly understood.
The time from ideation to customers' “thank you”, as Daniel Terhorst-North describes it, can stretch dramatically in complex delivery environments. When it takes longer to move from concept to customer feedback, opportunities are missed, value realisation is delayed, and work piles up as inventory. Instead of quickly turning investment into outcomes, organisations end up spending heavily on creating features, documents, or systems that sit idle—waiting for release.
This growing inventory ties up capital and effort without generating returns, while also increasing the risk that the delivered product will miss the mark. The longer the work waits in the system, the greater the chance that customer needs, market conditions, or competitive landscapes will shift, eroding the value of what was initially conceived.
Complex Organisations
When systems become large and deeply intertwined, clean ownership boundaries start to erode. Instead of a single team being responsible for a component end-to-end, responsibility is spread across multiple teams or departments. This fragmentation makes accountability diffuse—issues fall into grey areas where no single team feels full ownership. Over time, this creates a culture of “no one’s problem,” where critical work can stall simply because it isn’t clearly assigned.
The web of dependencies between components forces teams to coordinate more often and across more organisational boundaries. Cross-department meetings, programme management offices, steering committees, and integration teams emerge as compensating structures, attempting to keep the pieces aligned. While these structures are well-intentioned, they introduce delays, slow down decision-making, and dilute team autonomy.
To cope with the cognitive load of such systems, organisations often regroup specialists into functional silos—such as a dedicated database team or integration team. While this can reduce the individual burden on each person, it increases the number of hand-offs in the delivery flow. Each hand-off introduces waiting time, communication overhead, and the potential for misunderstandings, further reducing delivery speed.
For teams, this environment erodes autonomy. With so many interdependencies, they can’t deploy or change their part of the system without waiting for others to align their work. This dependency-driven sequencing turns delivery into a relay race rather than a parallel effort.
At the same time, teams are expected to understand more than is feasible—spanning multiple domains or technical stacks just to do their job. Onboarding slows, productivity dips, and cognitive overload becomes a constant tax on progress.
Increased Risk of Security Vulnerabilities
Complex systems are fundamentally harder to secure than simpler ones. As software grows in size and intricacy, the attack surface expands proportionally—larger codebases mean more potential points of entry. Often, a significant portion of this code is unused or dormant, yet it can contain numerous known vulnerabilities, quietly increasing risk. Research shows that highly complex modules are more prone to faults, and the more components interact in subtle ways, the greater the chance that a bug or unintended access path goes unnoticed. As Bruce Schneier aptly puts it, “you can’t secure what you don’t understand,” and complexity directly undermines understanding.
Beyond the code itself, complex systems often rely on an array of interconnected tools and technologies, each adding seams that must be monitored, maintained, and secured. Interdependencies between components make even small changes risky, leading teams to delay updates and widening the window of exposure. High complexity also amplifies cognitive load, increasing the likelihood of human errors—misconfigurations, missed patches, or weak passwords—that attackers readily exploit.
In effect, complexity and cognitive burden reinforce one another, creating a cycle where vulnerabilities persist, fixes are delayed, and security erodes. Simplifying system design is therefore critical: the less there is to understand, the easier it is to secure and maintain, reducing both technical and human failure points.
Inefficient Resource Allocation
As systems become more complex, more resources are often needed to manage the system:
- Infrastructure Overhead: More complexity means more infrastructure—whether it’s cloud resources, servers, network setups, or storage—leading to higher costs.
- Operational Overhead: Keeping track of numerous components, services, and dependencies requires additional resources in terms of monitoring, alerting, and governance, increasing operational costs.
Conclusion
Complexity and cognitive load form a reinforcing loop that quietly taxes everything, including testability, delivery speed, organisational flow, security, and cost. When systems sprawl and understanding erodes, feedback slows, defects hide, hand-offs multiply, patches stall, and risk climbs. You don’t just get slower—you become less adaptable.
Designing for change, therefore, starts with actively managing complexity and budgeting cognitive load. The next post in this series will begin laying the foundations for managing complexity.