Magical systems thinking

12th September 2025

Systems thinking promises to give us a toolkit to design complex systems that work from the ground up. It fails because it ignores an important fact: systems fight back.

The systems that enable modern life share a common origin. The water supply, the internet, the international supply chains bringing us cheap goods: each began life as a simple, working system. The first electric grid was no more than a handful of electric lamps hooked up to a water wheel in Godalming, England, in 1881. It then took successive decades of tinkering and iteration by thousands of very smart people to scale these systems to the advanced state we enjoy today. At no point did a single genius map out the final, finished product.

But this lineage of (mostly) working systems is easily forgotten. Instead, we prefer a more flattering story: that complex systems are deliberate creations, the product of careful analysis. And, relatedly, that by performing this analysis – now known as ‘systems thinking’ in the halls of government – we can bring unruly ones to heel. It is an optimistic perspective, casting us as the masters of our systems and our destiny.

The empirical record says otherwise, however. Our recent history is one of governments grappling with complex systems and coming off worse. In the United States, HealthCare.gov was designed to simplify access to health insurance by knitting together 36 state marketplaces and data from eight federal agencies. Its launch was paralyzed by technical failures that locked out millions of users. Australia’s disability reforms, carefully planned for over a decade and expected to save money, led to costs escalating so rapidly that they will soon exceed the pension budget. The UK’s 2014 introduction of Contracts for Difference, intended to speed the renewables rollout by giving generators a guaranteed price, overstrained the grid and is a major contributor to the 15-year queue for new connections. Systems thinking is more popular than ever; modern systems thinkers have analytical tools that their predecessors could only have dreamt of. But the systems keep kicking back.

There is a better way. A long but neglected line of thinkers going back to chemists in the nineteenth century has argued that complex systems are not our passive playthings. Despite friendly names like ‘the health system’, they demand extreme wariness. If broken, a complex system often cannot be fixed. Meanwhile, our successes, when they do come, are invariably the result of starting small. As the systems we have built slip further beyond our collective control, it is these simple working systems that offer us the best path back. 

The world model

In 1970, the ‘Club of Rome’, a group of international luminaries with an interest in how the problems of the world were interrelated, invited Jay Wright Forrester to peer into the future of the global economy. An MIT expert on electrical and mechanical engineering, Forrester had cut his teeth on problems like how to keep a Second World War aircraft carrier’s radar pointed steadily at the horizon amid the heavy swell of the Pacific. 

The Club of Rome asked an even more intricate question: how would social and economic forces interact in the coming decades? Where were the bottlenecks and feedback mechanisms? Could economic growth continue, or would the world enter a new phase of equilibrium or decline? 

Forrester labored hard, producing a mathematical model of enormous sophistication. Across 130 pages of mathematical equations, computer graphical printout, and DYNAMO code, World Dynamics tracks the myriad relationships between natural resources, capital, population, food, and pollution: everything from the ‘capital-investment-in-agriculture-fraction adjustment time’ to the ominous ‘death-rate-from-pollution multiplier’.

A section of Forrester’s World Model. Image: WAguirre, 2017.
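
To get a concrete feel for the style of model involved, here is a minimal stock-and-flow sketch in Python. It is emphatically not Forrester’s World Model: the two stocks, the rate equations, and every constant below are invented for illustration. What it shares with DYNAMO-era system dynamics is the pattern – stocks accumulate, flows depend on the current stocks, and the feedback loops between them, not any single equation, produce the overshoot-and-decline behavior.

```python
# A toy stock-and-flow model in the spirit of system dynamics.
# Not Forrester's World Model: the stocks, rates, and constants are invented.

def run(years=200, dt=1.0):
    population, resources = 1.0, 10.0   # the two stocks (arbitrary units)
    history = []
    for _ in range(int(years / dt)):
        # Flows depend on the current stocks, creating feedback loops:
        # growth slows as resources deplete; depletion speeds up as
        # population grows.
        growth = 0.03 * population * (resources / 10.0)
        deaths = 0.01 * population
        depletion = 0.02 * population
        population += (growth - deaths) * dt
        resources = max(resources - depletion * dt, 0.0)
        history.append((population, resources))
    return history

population, resources = run()[-1]
print(f"after 200 years: population {population:.2f}, resources {resources:.2f}")
```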

World leaders had assumed that economic growth was an unalloyed good. But Forrester’s results showed the opposite. As economic and population growth continued, natural resources would be consumed at an accelerating rate, agricultural land would be paved over, and pollution would reach unmanageable levels. His model laid out dozens of scenarios, and in most of them, by 2025, the world would already be in the first throes of an irreversible decline in living standards. By 2070, the crunch would be so painful that industrialized nations might regret their experiment with economic growth altogether. As Forrester put it, ‘[t]he present underdeveloped countries may be in a better condition for surviving forthcoming worldwide environmental and economic pressures than are the advanced countries.’

But, as we now know, the results were also wrong. Adjusted for inflation, world GDP is now about five times higher than it was in 1970 and continues to rise. More than 90 percent of that growth has come from Asia, Europe, and North America, yet forest cover across those regions has increased, up 2.6 percent since 1990 to over 2.3 billion hectares in 2020. The death rate from air pollution has almost halved in the same period, from 185 per 100,000 in 1990 to 100 in 2021. According to the model, none of this should have been possible.

What happened? The blame cannot lie with Forrester’s competence: it’s hard to imagine a better systems pedigree than his. To read his prose today is to recognize a brilliant, thoughtful mind. Moreover, the system dynamics approach Forrester pioneered had already shown promise beyond the mechanical and electrical systems that were its original inspiration. 

In 1956, the management of a General Electric refrigerator factory in Kentucky had called on Forrester’s help. They were struggling with a boom-and-bust cycle: acute shortages became gluts that left warehouses overflowing with unsold fridges. The factory based its production decisions on orders from the warehouse, which in turn got orders from distributors, who heard from retailers, who dealt with customers. Each step introduced noise and delay. Ripples in demand would be amplified into huge swings in production further up the supply chain. 

Looking at the system as a whole, Forrester recognized the same feedback loops and instability that could bedevil a ship’s radar. He developed new decision rules, such as smoothing production based on longer-term sales data rather than immediate orders, and found ways to speed up the flow of information between retailers, distributors, and the factory. These changes dampened the oscillations caused by the system’s own structure, checking its worst excesses. 

The Kentucky factory story showed Forrester’s skill as a systems analyst. Back at MIT, Forrester immortalized his lessons as a learning exercise (albeit with beer instead of refrigerators). In the ‘Beer Game’, now a rite of passage for students at the MIT Sloan School of Management, players take one of four different roles in the beer supply chain: retailer, wholesaler, distributor, and brewer. Each player sits at a separate table and can communicate only through order forms. As their inventory runs low, they place orders with the next supplier upstream. Orders take time to process, shipments take time to arrive, and each player can see only their small part of the chain.

The objective of the Beer Game is to minimize costs by managing inventory effectively. But, as the GE factory managers had originally found, this is not so easy. Gluts and shortages arise mysteriously, without obvious logic, and small perturbations in demand get amplified up the chain by as much as 800 percent (‘the bullwhip effect’). On average, players’ total costs end up being ten times higher than the optimal solution.
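
The dynamic is easy to reproduce in miniature. Below is a minimal sketch in Python – a toy model, not the official Beer Game, with ordering rules and numbers invented for illustration – that chains four such stages together. With smooth=0, each stage chases every incoming order, and one modest step in customer demand swells into ever larger order peaks up the chain; with smooth=0.8, each stage leans on a longer-term average, and the swings shrink: Forrester’s GE fix in miniature.

```python
# A toy version of the bullwhip dynamic, not the official Beer Game.
# Four stages, each seeing only the orders arriving from downstream,
# each waiting `lead` weeks for its own orders to be filled.

def stage_orders(incoming, lead=2, target=20, smooth=0.0):
    """Run one supply-chain stage over a weekly demand series; return its orders."""
    estimate = incoming[0]                      # this stage's demand forecast
    inventory = target - (lead - 1) * estimate  # start near steady state
    pipeline = [estimate] * lead                # orders placed but not yet received
    orders = []
    for d in incoming:
        inventory += pipeline.pop(0)   # shipment ordered `lead` weeks ago arrives
        inventory -= d                 # ship this week's demand (negative = backlog)
        # smooth=0 chases every incoming order (the naive rule); closer to 1
        # leans on the longer-term average (Forrester's smoothing fix).
        estimate = smooth * estimate + (1 - smooth) * d
        # Order to cover the forecast plus half the inventory-position gap.
        order = max(estimate + 0.5 * (target - inventory - sum(pipeline)), 0.0)
        pipeline.append(order)
        orders.append(order)
    return orders

demand = [4.0] * 5 + [8.0] * 45  # one modest, permanent step in customer demand
for smooth in (0.0, 0.8):
    series, peaks = demand, []
    for _ in range(4):           # retailer -> wholesaler -> distributor -> brewer
        series = stage_orders(series, smooth=smooth)
        peaks.append(round(max(series), 1))
    print(f"smooth={smooth}: peak weekly orders up the chain: {peaks}")
```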

With the failure of his World Model, Forrester had fallen into the same trap as his MIT students. Systems analysis works best under specific conditions: when the system is static; when you can dismantle and examine it closely; when it involves few moving parts rather than many; and when you can iterate fixes through multiple attempts. A faulty ship’s radar or a simple electronic circuit is ideal. Even a limited human element – with people’s capacity to pursue their own plans, resist change, form political blocs, and generally frustrate best-laid plans – makes things much harder. The four-part refrigerator supply chain, with the factory, warehouse, distributor, and retailer all under the tight control of management, is about the upper limit of what can be understood. Beyond that, in the realm of societies, governments, and economies, systems thinking becomes a liability, more likely to breed false confidence than real understanding. For these systems we need a different approach.

Le Chatelier’s Principle

In 1884, in a laboratory at the École des Mines in Paris, Henri Louis Le Chatelier noticed something peculiar: chemical reactions seemed to resist changes imposed upon them. Le Chatelier found that if, say, you have an experiment where two molecules combine in a heat-generating exothermic reaction (in his case, it was two reddish-brown nitrogen dioxide molecules combining into colorless dinitrogen tetroxide and giving off heat in the process), then you can push the reaction further toward its products by cooling the mixture. To ‘resist’ the drop in temperature, the system restores its equilibrium by producing more of the heat-releasing product.
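
The shift can even be put into rough numbers. The van ’t Hoff equation relates a reaction’s equilibrium constant K to temperature; the sketch below applies it to Le Chatelier’s nitrogen dioxide experiment. The enthalpy is a textbook figure of roughly −57 kJ/mol for the dimerization; the 20-degree cooling is an arbitrary illustrative choice.

```python
# Back-of-the-envelope Le Chatelier, via the van 't Hoff equation:
#   ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)
# dH is a textbook value for 2 NO2 -> N2O4 (~ -57 kJ/mol, exothermic);
# the temperatures are arbitrary illustrative choices.
import math

dH = -57_000.0          # J/mol: negative, heat is released
R = 8.314               # gas constant, J/(mol*K)
T1, T2 = 298.0, 278.0   # cool the flask by 20 K

ratio = math.exp(-(dH / R) * (1 / T2 - 1 / T1))
print(f"cooling from {T1:.0f} K to {T2:.0f} K multiplies K by {ratio:.1f}")
# K rising means the equilibrium shifts toward colorless N2O4: the system
# 'resists' the cooling by making more of its heat-releasing product.
```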

Le Chatelier’s Principle, the idea that the system always kicks back, proved to be a very general and powerful way to think about chemistry. It was instrumental in the discovery of the Haber-Bosch process for creating ammonia that revolutionized agriculture. Nobel Laureate Linus Pauling hoped that, even after his students had ‘forgotten all the mathematical equations relating to chemical equilibrium’, Le Chatelier’s Principle would be the one thing they remembered. And its usefulness went beyond chemistry. A century after Le Chatelier’s meticulous lab work, another student of systems would apply the principle to the complex human systems that had stymied Forrester and his subsequent followers in government.

John Gall was a pediatrician with a long-standing practice in Ann Arbor, Michigan. Of the same generation as Forrester, Gall came at things from a different direction. Whereas Forrester’s background was in mechanical and electrical systems, which worked well and solved new problems, Gall was immersed in the human systems of health, education, and government. These systems often did not work well. How was it, Gall wondered, that they seemed to coexist happily with the problems – crime, poverty, ill health – they were supposed to stamp out? 

Le Chatelier’s Principle provided an answer: systems should not be thought of as benign entities that will faithfully carry out their creators’ intentions. Rather, over time, they come to oppose their own proper functioning. Gall elaborated on this idea in his 1975 book Systemantics, named for the universal tendency of systems to display antics. A brief, weird, funny book, Systemantics (The Systems Bible in later editions) is arguably the best field guide to contemporary systems dysfunction. It consists of a series of pithy aphorisms, which the reader is invited to apply to explain the system failures (‘horrible examples’) they witness every day.

These aphorisms are provocatively stated, but they have considerable explanatory power. For example, an Australian politician frustrated at the new headaches created by ‘fixes’ to the old disability system might be reminded that ‘NEW SYSTEMS CREATE NEW PROBLEMS’. An American confused at how there can now be 190,000 pages in the US Code of Federal Regulations, up from 10,000 in 1950, might note that this is the nature of the beast: ‘SYSTEMS TEND TO GROW, AND AS THEY GROW THEY ENCROACH’. During the French Revolution, in 1793 and 1794, the ‘Committee of Public Safety’ guillotined thousands of people, an early example of the enduring principles that ‘THE SYSTEM DOES NOT DO WHAT IT SAYS IT IS DOING’ and that ‘THE NAME IS EMPHATICALLY NOT THE THING’. And, just like student chemists, government reformers everywhere would do well to remember Le Chatelier’s Principle: ‘THE SYSTEM ALWAYS KICKS BACK’.

These principles encourage a healthy paranoia when it comes to complex systems. But Gall’s ‘systems-display-antics’ philosophy is not a counsel of doom. His greatest insight was a positive one, explaining how some systems do succeed in spite of the pitfalls. Known as ‘Gall’s law’, it’s worth quoting in full:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

Starting with a working simple system and evolving from there is how we went from the water wheel in Godalming to the modern electric grid. It is how we went from a hunk of germanium, gold foil, and hand-soldered wires in 1947 to transistors being etched onto silicon wafers in their trillions today.

This is a dynamic we can experience on a personal as well as a historical level. A trivial but revealing example is the computer game Factorio. Released in 2012 and famously hazardous to the productivity of software engineers everywhere, Factorio invites players to construct a factory. The ultimate goal is to launch a rocket, a feat that requires the player to produce thousands of intermediate products through dozens of complicated, interlocking manufacturing processes. 

It sounds like a nightmare. An early flow chart (pictured – it has grown much more complicated since) resembles the end product of a particularly thorny systems thinking project. But players in their thousands complete its daunting mission, without reference to such system maps, and all for fun.

Factorio production map.

The genius of the game is that it lets players begin with a simple system that works. As you learn to produce one item, another is unlocked. If you get something wrong, the factory visibly grinds to a halt while you figure out a different approach. The hours tick by, and new systems – automated mining, oil refining, locomotives – are introduced and iterated upon. Before you realize it, you have built a sprawling yet functioning system that might be more sophisticated than anything you have worked on in your entire professional career.

How to build systems that work

Government systems, however, are already established, complicated, and relied upon by millions of people every day. We cannot simply switch off the health system and ask everyone to wait a few years while we build something better. The good news is that the existence of an old, clunky system does not stop us from starting something new and simple in parallel.

In the 1950s, the US was in a desperate race against a technologically resurgent Soviet Union. The USSR took the lead in developing advanced rockets of the type that would launch Sputnik into orbit – and that could as easily deliver a nuclear warhead to Washington, DC. In 1954, the Eisenhower administration tasked General Bernard Schriever with helping the US develop its own Intercontinental Ballistic Missile (ICBM). Schriever was an experienced airman and administrator, and the top brass felt his Stanford engineering master’s degree would make him a suitable go-between for the soldiers and scientists on this incredibly technical project (its scope was larger even than the Manhattan Project, costing over $100 billion in 2025 dollars versus the latter’s $39 billion).

The organizational setup Schriever inherited was not fit for the task. With many layers of approvals and subcommittees within subcommittees, it was a classic example of a complex yet dysfunctional system. The technological challenges posed by the ICBM were extreme: everything from rocket engines to targeting systems to the integration with nuclear warheads had to be figured out more or less from scratch. This left no room for bureaucratic delay. 

Schriever produced what many systems thinkers would recognize as a kind of systems map: a series of massive boards setting out all the different committees and governance structures and approvals and red tape. But the point of these ‘spaghetti charts’ was not to make a targeted, systems thinking intervention. Schriever didn’t pretend to be able to navigate and manipulate all this complexity. He instead recognized his own limits. With the Cold War in the balance, he could not afford to play and lose his equivalent of the Beer Game. Charts in hand, Schriever persuaded his boss that untangling the spaghetti was a losing battle: they needed to start over.

They could not change the wider laws, regulations, and institutional landscape governing national defense. But they could work around them, starting afresh with a simple system outside the existing bureaucracy. Direct vertical accountability all the way to the President and a free hand on personnel enabled the program to flourish. Over the following years, four immensely ambitious systems were built in record time. The uneasy strategic stalemate that passed for stability during the Cold War was restored, and the weapons were never used in anger.

When we look in more detail at recent public policy successes, we see that this pattern tends to hold. Operation Warp Speed in the US played a big role in getting vaccines delivered quickly. It did so by bypassing many of the usual bottlenecks. For instance, it made heavy use of ‘Other Transaction Authority agreements’ to commit $12.5 billion of federal money by March 2021, circumventing the thousands of pages of standard procurement rules. Emergency powers were deployed to accelerate the FDA review process, enabling clinical trial work and early manufacturing scale-up to happen in parallel. These actions were funded through an $18 billion commitment made largely outside the typical congressional appropriation oversight channels – enough money to back not just one vaccine candidate but six, across three different technology platforms.

In France, the rapid reconstruction of Notre-Dame after the April 2019 fire has become a symbol of French national pride and its ability to get things done despite a reputation for moribund bureaucracy. This was achieved not through wholesale reform of that bureaucracy but by quickly setting up a fresh structure outside of it. In July 2019, the French Parliament passed Loi n° 2019-803, creating an extraordinary legal framework for the project. Construction permits and zoning changes were fast-tracked. President Macron personally appointed the veteran General Jean-Louis Georgelin to run the restoration, exempting him from the mandatory retirement age for public executives in order to do so.

The long-term promise of a small working system is that over time it can supplant the old, broken one and produce results on a larger scale. This creative destruction has long been celebrated in the private sector, where aging corporate giants can be disrupted by smaller, simpler startups: we don’t have to rely on IBM to make our phones or laptops or Large Language Models. But it can work in the public sector too. Estonia, for example, introduced electronic ID in the early 2000s for signing documents and filing online tax returns. These simple applications, which nonetheless took enormous focus to implement, were popular, and ‘digital government’ was gradually expanded to new areas: voting in 2005, police in 2007, prescriptions in 2010, residency in 2014, and even e-divorce in 2024. Today, 99 percent of residents have an electronic ID card, digital signatures are estimated to save two percent of GDP per year, and every state service runs online.

In desperate situations, such as a Cold War arms race or COVID-19, we avoid complex systems and find simpler workarounds. But, outside of severe crises, much time is wasted on what amounts to magical systems thinking. Government administrations around the world, whose members would happily admit they cannot fix a broken radio, publish manifestos, strategies, plans, and priorities premised on disentangling systems problems that are orders of magnitude more challenging. With each ‘fix’, oversight bodies, administrative apparatus, and overlapping statutory obligations accumulate. Complexity keeps rising, outcomes keep worsening, and voters’ goodwill keeps eroding.

We will soon be in an era where humans are not the sole authors of complex systems. Sundar Pichai estimated in late 2024 that over 25 percent of Google’s new code was AI generated; as of mid-2025, the figure for Anthropic is 80–90 percent. As in the years after the Second World War, the temptation will be to use this vast increase in computational power and intelligence to ‘solve’ systems design once and for all. But the same laws that limited Forrester continue to bind: ‘NEW SYSTEMS CREATE NEW PROBLEMS’ and ‘THE SYSTEM ALWAYS KICKS BACK’. As systems become more complex, they become more chaotic, not less. The best solution remains humility, and a simple system that works.