The Great Global Computer Outage Is a Warning We Ignore at Our Peril
July 18, 2024, will go down in the history books as the day the world received a pointed wake-up call about the inherent fragility of the technological systems we’ve created and the societal complexities they’ve engendered. Critical services at hospitals, airports, banks, and government facilities around the world suddenly became unavailable. We can only imagine what it must have been like to be undergoing emergency-room treatment for a serious or life-threatening illness at that moment.
So, what are we to make of this event and how can we rationally get our collective arms around its meaning and significance? As a journalist who specializes in writing about the impacts of technology on politics and culture, I would like to share a few initial thoughts.
For some of us who have worked in the tech field for many years, such an event was entirely predictable. This is because of three factors: 1) the inherent fragility of computer code, 2) the ever-present possibility of human error, and 3) the fact that when you build interconnected systems, a vulnerability in one part of the system can spread like a contagion to other parts. We see this kind of vulnerability in play daily in the constant outpouring of news stories about hacking, identity theft, and security breaches involving all sorts of companies and institutions. However, none of these isolated events had sufficient scale to engender greater public awareness and alarm until The Great Global Computer Outage of July 18, triggered by a faulty CrowdStrike software update.
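The contagion dynamic in point 3 can be sketched with a toy simulation. The service names and dependency graph below are purely illustrative assumptions, not a model of any real deployment; the point is only that a single faulty upstream component can take down everything that transitively depends on it.

```python
from collections import defaultdict, deque

# Hypothetical service-dependency edges: (upstream, downstream) means
# `downstream` depends on `upstream`. All names are illustrative.
edges = [
    ("endpoint_agent", "hospital_records"),
    ("endpoint_agent", "airline_checkin"),
    ("endpoint_agent", "bank_terminals"),
    ("hospital_records", "er_triage"),
    ("airline_checkin", "gate_boarding"),
]

dependents = defaultdict(list)
for upstream, downstream in edges:
    dependents[upstream].append(downstream)

def cascade(start):
    """Return every service knocked out when `start` fails, under the
    simplifying assumption that a failure always reaches direct dependents."""
    failed, queue = {start}, deque([start])
    while queue:
        svc = queue.popleft()
        for dep in dependents[svc]:
            if dep not in failed:
                failed.add(dep)
                queue.append(dep)
    return failed

# One fault at the root of the graph takes down all six services.
print(sorted(cascade("endpoint_agent")))
```

In this sketch a failure in a leaf service stays local, but a failure in the shared root propagates everywhere, which is the asymmetry that makes consolidation risky.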
Inherent Fragility is Always Present
As impressive as our new digital technologies are, our technocrats and policymakers often seem to lose sight of an important reality. These now massively deployed systems are also quite fragile in the larger scheme of things. Computers and the communications systems that support them—so-called virtual systems—concentrate huge amounts of informational power and control, wielding it like an Archimedean lever over the physical world. A cynic could plausibly argue that we’re now building our civilizational infrastructure on a foundation of sand.
At the recently held Aspen Security Forum, Anne Neuberger—a senior White House cybersecurity expert—noted, “We need to really think about our digital resilience not just in the systems we run but in the globally connected security systems, the risks of consolidation, how we deal with that consolidation and how we ensure that if an incident does occur it can be contained and we can recover quickly.” With all due respect, Ms. Neuberger was simply stating the obvious and not digging deep enough.
The problem runs much deeper. Our government, like those of other advanced Western nations, is now running on two separate but equal tracks: technology and governance. The technology track is overseen by Big Tech entities with little accountability or oversight of the kind we expect from the normative functions of government. In other words, they’re more or less given a free hand to operate according to the dictates of the free market.
Further, consider this thought experiment: Given AI’s now critical role in shaping key aspects of our lives, and given its very real and fully acknowledged downsides and risks, why was it not even discussed in the presidential debate? The answer is simple: These issues are being left to unelected technocrats and corporate power brokers to contend with. But here’s the catch: Most technocrats don’t have the policy expertise needed to guide critical decision-making at a societal level, while our politicians (and yes, sadly, most of our presidential candidates) don’t have the necessary technology expertise.
Scope, Scale, and Wisdom
Shifting to a more holistic perspective, humanity’s ability to continue to build these kinds of systems runs into the limitations of our conceptual ability to embrace their vastness and complexity. So, the question becomes: Is there a limit in the natural order of things to the amount of technological complexity that’s sustainable? If so, it seems reasonable to assume that this limit is determined by the ability of human intelligence to encompass and manage that complexity.
To put it more simply: At what point in pushing the envelope of technology advancement do we get in over our heads and to what degree is a kind of Promethean hubris involved?
As someone who has written extensively about the dangers of AI, I would argue that we’re now at a tipping point where it’s worth asking whether we can even control what we’ve created, and whether the “harmful side effects” of seemingly constant chaos are now militating against our quality of life. We can also only speculate whether the CrowdStrike event was somehow associated with some still poorly understood form of AI-driven hacking or error. The bottom line is this: If we cannot control the effects of our own technological inventions, in what sense can those creations be said to serve human interests and needs in this already overly complex global environment?
Finally, the advent of under-the-radar hyper-technologies such as nanotechnology and genetic engineering also needs to be considered in this context. These are technologies that can be grasped only in the conceptual realm and not in any concrete or immediate way, because (I would argue) their primary and secondary effects on society, culture, and politics can no longer be successfully envisioned. Moving decisively into these realms, therefore, amounts to ad hoc experimentation with nature itself. But as many environmentalists have pointed out, “Nature bats last.” Runaway technological advancement is now being fueled by corporate imperatives and a “growth at any cost” mentality that leaves little time for reflection. New and seemingly exciting prospects for advanced hyper-technology may dazzle us, but if in the process they also blind us, how can we guide the progress of technology with wisdom?