Saturday, May 04, 2024


Returning to the 11th Century

Before you leave, turn out the lights



Technology fetishism and dogmatic irresponsibility

Without the use of digital devices, relying instead mainly on that analog apparatus known as the pen, I have managed to retain meaningful recollections and engage in analytical reflection for the better part of sixty-two years. The manner in which I have worked since the earliest moments I can remember has engendered the habit of collecting, sorting, observing and evaluating life as I lived it or perceived it in others. It was about 1976 that I was introduced to Russell Ackoff, a professor at the Wharton School of the University of Pennsylvania. He was introducing some basic tenets of systems theory, also outlined in his short book Redesigning the Future. My attendance was accidental, since it was my high school physics teacher who took me to this meeting of a regional planning commission where Professor Ackoff had been invited to speak. He was quite droll and said several witty things. However, the most important statement he made was that the purpose of planning was not to produce a plan. Rather, planning was a purpose in its own right. What he clearly meant, and what was reiterated in the book I subsequently read, was that planning was an attitude toward the future, or toward life, and not an industrial process for producing planning documents. The logical consequence of Ackoff’s argument was that the attitude of planning was more important than the creation of machines for churning out plans that would be obsolete before they could be implemented.

Although I only learned about the book ten years later, Joseph Weizenbaum, a professor of computer science at various universities and one of the early researchers in what became the field of artificial intelligence (AI), published Computer Power and Human Reason in the same year. 1976 was one year after the ignominious withdrawal of US forces from Vietnam, ending more than 30 years of their organized terror in that part of Southeast Asia. The US war against Vietnam was the first testing ground for both systems theory and artificial intelligence. These concepts and the technology developed to apply them were dedicated to surveillance, planning, target acquisition and the destruction of the so-called Vietcong infrastructure, i.e. the civilian government that operated in lieu of the criminal state established by the French and US Americans, first in Hanoi and then in Saigon, after the partition of the country at Geneva. The government agency primarily responsible for planning and implementing the destruction of the popular government of Vietnam was the US Central Intelligence Agency. ICEX was the first name given to what became known as the Phoenix Program. One of the CIA officers interviewed after the war called it “computerized mass murder”. He was referring to the kill lists generated by the PHIS, the Phoenix Information System, by which all the data about Vietnamese citizens was collated and evaluated to guide the deployment of the various hunter-killer teams. These teams were composed of local hires, mercenaries, RVN and US military personnel like the infamous Lt. Calley, and other contractors working on behalf of the Agency. Recently there has been mild consternation over the PHIS legacy product used by the IDF to perform the same kinds of tasks. Lavender is called an AI solution. It is just a later version of the same computer-driven murder-planning machine deployed half a century ago.

No one should wonder about this since the Israel Defense Forces and the other government agencies in occupied Palestine were actively informed and involved in every stage of these system developments. The systems-driven assassination program was a major component of US counter-insurgency operations throughout Latin America. Death squads and data processing are natural partners, going back to IBM’s computing support for the NSDAP. Artificial intelligence is fundamentally an intelligence operation and part of the systems theory of mechanized murder. It has no other serious application.

Permit me to return to Joseph Weizenbaum. In 1976, many AI fetishists will argue, the technology was simply not very sophisticated. ELIZA and other experimental platforms were primitive and lacked the support of today’s super-computers. I met Weizenbaum shortly before he died. He had returned to Berlin, the city of his birth, from which his family had emigrated in the 1930s. He had been invited to speak at the Einstein Forum in Potsdam. Having read the book in the 1980s, I was anxious to meet the man who had so politely trashed the AI project. He was introduced by an obnoxious and obsequious American whose other qualities or qualifications left no impression on me. The young man tried to impress the audience by telling us that Joseph Weizenbaum was working at Case Western University when the university decided it needed a computer, and Weizenbaum built it. Normally such calculated flattery would be met with a demure nod of appreciation. Professor Weizenbaum retorted that Case Western did not need a computer. Moreover, no one needed one! That was the last we heard of the young man from the Einstein Forum.

Nearly 30 years after his book was published, Weizenbaum was just as adamant. Neither the Internet (which most people clearly forget is an adjunct to the US atomic warfare system) nor the so-called super-computers, whether in the US or China, have altered the premises upon which his argument is based. As recently as today I read some conversation threads about AI in which one author argues:

The result of having this ability is not to contest who is right or wrong, but to learn to be right most of the time so that the AI can successfully maintain a peaceful, harmonious human society. At the end of the day, humans are seriously flawed and cannot be trusted to run this society. Therefore, human management will be phased out.

The author and those who follow his reasoning clearly believe that the strip mining of the Congo and other parts of the world to obtain the rare (and toxic) minerals essential for super-computing capacity, along with the impoverishment of all other components of human culture in favour of electrical engineering and computer science, is the price to be borne by humanity so that computation can fully displace human judgement (and humanity itself). The naive yet thin veneer of modernism and the claims to sophistication in the interest of peace and harmony are deeply anti-human, not only in their objectives but at every link in the chain these AI proponents would forge from cradle to grave.

Weizenbaum’s argument was not based on the state of the art in 1976. In fact, he was quite clear that faster processors and larger memory storage would no doubt expand the computational capacity of the emerging technology. Instead, Weizenbaum insisted that judgement was not computation. In Berlin he reiterated that data is not information. Computation is nothing more than the arrangement of data according to rules defining the circulation of electrical power through increasingly complex circuits. Judgement is the result of human activity, not of electrical circuits. Data is the numerical codification of signals from whatever source. Information is the product of assessing data and responding to it, i.e. giving it meaning. Computers ought not to give meaning, that is, to control human responses to the world. Humans ought to control their own responses, even if they use tools like computers to generate and store data for evaluation.


Those who, like the author cited above, imagine that machine intelligence is superior to human intelligence are, to put it mildly, confused about what intelligence is. Claiming, either naively or cynically, that machine intelligence is at least potentially far more suited to regulating human society than humans themselves, these technology fetishists betray their primitive superstitions. Artificial intelligence, which until now has never advanced beyond its intention as a weapon for mass murder and surveillance, is simply the electronic manifestation of the omnipotent deity whose every will must be fulfilled. The desire to see human management rendered obsolete or impossible is the same denial that humans have any personality beyond that defined by the absolute deity of the kind we have known since the 11th century. The dream of the AI cultist is the same dream as that of the absolutist papacy and of the regime that survives in the modern business corporation, from which this nightmare arises.

Weizenbaum did not address the whole production chain in which AI needs to be seen. His humanist position stands on its own, especially when the lines are drawn between humanism and its antitheses, transhumanism and anti-humanism. Much is made of the enormous progress, far beyond what the carcinogenic West has accomplished, in Chinese AI. Suffice it here to enumerate some of the absurd claims that dominate in the media and among the cult’s proselytizers.

Computer power rests ultimately upon the power to extract highly toxic minerals from the Earth, until now based on quasi-slave labor in the Congo, i.e. central Africa. For the past half-century, computer power has cost more than six million lives and the independent development of a country whose territory is roughly the size of the European Union. To this must be added the wars and other violent and corrupt interventions to obtain these resources elsewhere on the planet. Then of course we have the highly dubious benefit of employment redundancies as so-called AI systems replace human labor in the industries and service sectors previously maintained by homo sapiens. Marxists praise AI’s contributions to the end of alienated labor. However, the implementation of AI aims not only to kill people for the IDF or other counter-insurgency agencies but also to kill the conditions for economic activity for huge numbers of people at all levels of educational and occupational qualification. The subsequent radical concentration of wealth will hardly be an inducement to enhance living conditions, which after all cannot be rationally calculated except as cost minimization. (Nor should we ignore the eugenicism underlying the AI cult.)

As to the claims that these machines will be infinitely more rational and therefore better managers of human society than humans themselves, the obscenity should be obvious. Any management of humans by agents other than humans can only be accomplished by the subjugation of humanity to machines. This is the dream of those whose puerile malice leads them to identify peace with the absence of other people and order with the absence of responsibility for their own actions. The nightmare of AI is the dream of what was once called the Dark Ages. Don’t forget, before you leave, to turn out the lights.

Dr T.P. Wilkinson writes, teaches History and English, directs theatre and coaches cricket between the cradles of Heine and Saramago. He is the author of Unbecoming American: A War Memoir and also Church Clothes, Land, Mission and the End of Apartheid in South Africa.
