Philosopher Karl Popper once argued that there are two kinds of problems in the world: clock problems and cloud problems. As the analogy suggests, clock problems follow a certain logic: they are orderly, and they can be broken down and analyzed piece by piece. If a clock stops working, you can take it apart, see what’s wrong, and fix it. The fix may not be easy, but it is doable. Crucially, you know when the problem is solved, because the clock starts telling time again.

Wicked Problems
Guru Madhavan
W.W. Norton, 2024
Cloud problems come with no such guarantees. They are inherently complex and unpredictable, and they usually have social, psychological, or political dimensions. Because of their dynamic, shape-shifting nature, attempts to “solve” a cloud problem often spawn several new problems. As a result, cloud problems have no clear “solved” state, only better and worse outcomes. Fixing a broken-down car is a clock problem. Untangling a city’s traffic is a cloud problem.
Engineers have come to be known for solving clock problems. Specialization and rising cultural expectations reinforce this tendency, but so do engineers themselves, since they are the ones who first define the problems they set out to solve.
In his latest book, Wicked Problems, Guru Madhavan argues that the world’s increasingly opaque problems demand a broader, more public-minded approach to engineering. “Wicked” is Madhavan’s term for what he calls the most opaque of these problems. It’s a nod to a now-famous coinage by University of California, Berkeley professors Horst Rittel and Melvin Webber, who used the word to describe complex social problems that resisted the mechanistic science- and engineering-based (i.e., clock-like) approaches that were encroaching on their fields of design and urban planning in the 1970s.
Madhavan, a senior director of programs at the National Academy of Engineering, is no stranger to tough problems himself, having tackled challenges like making prescription drugs more affordable in the US and prioritizing the development of new vaccines, but this book is not about his own work. Instead, Wicked Problems weaves the story of Edwin A. Link, a largely forgotten aeronautical engineer and inventor, together with case studies of man-made and natural disasters as Madhavan explains how wicked problems take shape in society and how they can be contained.
Link’s story, which Madhavan recounts, is fascinating for those who don’t know it: he built the world’s first mechanical flight trainer using parts from his family’s organ factory. Among the challenges the inventor faced in the 1920s and ’30s were how to quickly and effectively train tens of thousands of pilots without simply sending them all aloft (and putting them at risk), and how to instill confidence in “instrument flying” when pilots’ instincts frequently told them the instruments were wrong: a classically wicked problem of the era.
Addressing a world full of tough problems requires a broader, more inclusive way of thinking about what engineering is and who can participate in it.
Unfortunately, while the biography of Link and the many chapters on disasters such as the Great Boston Molasses Flood of 1919 are interesting and deeply researched, Wicked Problems suffers from some poor architectural choices.
The book’s elaborate conceptual framework and disjointed narrative feel cumbersome and unnecessary, making a complex and nuanced subject even more difficult to understand at times. In the prologue alone, the reader must jump from the notion of a cloud problem to the notion of a wicked problem. Wicked problems are decomposed into hard problems, soft problems, and nasty problems, which are then recomposed in various ways and linked to six attributes: efficiency, ambiguity, vulnerability, safety, maintainability, and resilience. Together, these form what Madhavan calls the “operational concept,” which becomes the primary organizing tool he uses to explore wicked problems.
That’s a lot. At the very least, it’s enough to make one wonder whether a “systems engineering” approach was the right lens through which to examine wickedness. It’s also unfortunate, because Madhavan’s ultimate point is an especially important one in an era when solutionism and quick-fix approaches to complex problems are rampant: to effectively address a world full of wicked problems, he says, we will need a broader, more inclusive way of thinking about what engineering is and who can participate in it.

Rational Accidents
John Downer
MIT Press, 2024
John Downer would probably agree, but his new book, Rational Accidents, makes a strong case that even the best and most far-reaching engineering practices have severe limitations. Also set in the world of aviation, Downer’s book explores the fundamental paradox underpinning today’s commercial aviation industry: the fact that flying is safer and more reliable than it has any technical right to be.
Jetliners are an example of what Downer calls “catastrophic technologies”: complex technical systems that must achieve extraordinary, historically unprecedented reliability, on the order of hundreds of millions, or even billions, of operating hours between catastrophic failures.
The average modern jetliner, with 7 million parts and 170 miles of wiring, is a dizzyingly complex system in itself. In 2014, Downer notes, there were more than 25,000 jetliners in scheduled service, together averaging 100,000 flights every day. Now consider that in 2017 there were no fatal accidents involving commercial jetliners carrying passengers. Zero. That record-setting year for aviation safety saw some 4 billion passengers on nearly 37 million flights. Even accounting for Boeing’s fatal 737 Max crashes in 2018 and 2019 and its ongoing problems, flying remains an extraordinarily safe and reliable form of transportation.
Downer, a professor of science and technology studies at the University of Bristol, spends the first half of the book expertly dismantling the idea that all the risks associated with such complex technologies can be objectively known, understood, and therefore controlled. Using examples from famous jet crashes and the Fukushima nuclear meltdown, he shows why there are far too many possible failure scenarios and combinations for such risks to be assessed or predicted, even with the aid of today’s sophisticated modeling techniques and algorithms.
So how does the aviation industry achieve such a seemingly unattainable record of safety and reliability? It’s not because of regulations, Downer says. Instead, he points to three unique factors. First, the vast amount of service experience the industry has accumulated: over the course of 70 years, manufacturers have built tens of thousands of passenger jets, which have failed (and continue to fail) in a variety of unpredictable ways.
This detailed and continually growing data set, combined with the industry’s commitment to thoroughly investigating every failure, allows lessons learned to be generalized across the industry, the second key to understanding jetliner reliability.
Finally, perhaps the most interesting and counterintuitive factor: Downer argues that the lack of innovation in jetliner design is a significant but overlooked part of the reliability record. The fact that the industry has been building essentially the same iterations of jetliners for 70 years ensures that lessons learned from failures will always be relevant and generalizable, he says.
This hyper-cautious approach to change runs counter to the “innovate or die” mentality promoted by most technology companies today, but it allows the aviation industry to learn from decades of failure and continually improve the “failure performance” of future jetliners.
Unfortunately, the lessons about jetliner reliability don’t apply to other catastrophic technologies: “It is an irony of our modern times that the only catastrophic technology with which we have any real experience, the jetliner, embodies our misconception that we are familiar with catastrophic technologies in general, when in fact it is completely unrepresentative,” Downer writes.
For example, to make nuclear reactors as reliable as jetliners, the nuclear industry would need to commit to one common reactor design, build tens of thousands of reactors, operate them for decades, suffer thousands of failures along the way, and slowly accumulate the lessons of those failures to improve that common design.
Of course, that won’t happen, but “we remain so seduced by the promise of incredible reliability and incredible certainty about that reliability that our desire for innovation outweighs either insight or humility,” Downer writes. The age of catastrophic technologies is still young, and our survival may depend not so much on innovating our way out of ambiguous and troubling problems as on recognizing and respecting what we don’t know and will probably never understand.
If Wicked Problems and Rational Accidents are about the challenges and limitations of trying to understand complex systems through objective science and engineering methods, Georgina Voss’s new book, Systems Ultra, offers a refreshing alternative: rather than coldly mapping complex systems from the outside, Voss, a writer, artist, and researcher, uses the book to explore what complex systems feel like from the inside, and what they ultimately mean.

Systems Ultra
Georgina Voss
Verso, 2024
“It’s quite amazing to just feel your way into these enormous structures,” she writes, taking the reader on a whirlwind tour of systems visible and invisible, corrupt and benign, old and new. Stops include the hype halls of the annual Consumer Electronics Show in Las Vegas (“a carefree hellscape on a Friday”), the “meme gold mine” of the container ship Ever Given, which snarled global supply chains when it became stuck in the Suez Canal, and the payment systems that underpin the porn industry.
For Voss, a system is both a structure and a behavior; it is a relational technology “defined by its ability to scale and, perhaps more importantly, by its specific relationship to scale.” She is also keenly aware of the pitfalls of using “empirical” approaches to understand these large-scale systems: “Attempts to neatly summarise in words what a system is can sometimes feel like the monologues of a marijuana addict, complete with sharp hand gestures (‘Have you ever thought electricity could be really big?’),” she writes.
And yet her writing is a joy to read. Voss deftly unravels the power structures that shape and reinforce the large-scale systems we live in. In the process, she also dispels many of the stories we’ve been told about their inscrutability and inevitability. And she does it all with humor, intelligence, and boundless curiosity. Systems Ultra is a shining example of the more public-minded, civically engaged approach to engineering that Madhavan advocates in Wicked Problems, and proof of his point.
Brian Gardiner is a writer based in Oakland, California.
