Wednesday, January 15, 2020

We don’t know how many mountain gorillas live in the wild. Here’s why
January 14, 2020 
Mountain gorillas in Bwindi Impenetrable Forest. Shutterstock/Claire E Carter

A new census – carried out by the Greater Virunga Transboundary Collaboration (a coalition of governments, non-profits and conservationists) in 2018 – shows that the population of mountain gorillas in Uganda’s Bwindi Impenetrable Forest National Park is now at 459, up from 400 in 2011. This could bring the total count for the subspecies to 1,069 gorillas. Katerina Guschanski explains that while this is great news, these figures may still not be accurate.

How important are the mountain gorillas of Uganda’s Bwindi Impenetrable Forest National Park to global populations?

Mountain gorillas are one of the two subspecies of eastern gorillas. They are divided into just two populations: one in the Virunga Massif, which spans the borders of Uganda, Rwanda and the Democratic Republic of Congo (DRC), and one in the Bwindi Impenetrable National Park in Uganda and the adjacent Sarambwe Nature Reserve in the DRC.

The Bwindi population holds a bit less than half of all mountain gorillas in the world, so its importance for the global survival of these great apes cannot be overstated.

Mountain gorillas receive admirable conservation attention but they’re vulnerable due to habitat encroachment, potential disease transmission from humans, poaching and civil unrest.

Because there are only about 1,000 mountain gorillas left, it’s important that their population size be continuously monitored to evaluate whether, and which, conservation tactics work.

Their populations must keep growing because mountain gorillas have very low genetic diversity. This reduces their ability to adapt to future changes in the environment. If faced with new diseases, for instance, they are extremely susceptible because they lack the genetic variants that would confer resistance. Low genetic diversity has been implicated in the extinction of some mammals, such as the mammoth.

Continued population growth is also needed to make them less vulnerable to random events, such as habitat destruction through extreme weather events, which could wipe out an entire population.

What can account for a rise in the number of gorillas?

One of the main factors that explains the higher detected number of gorillas is the change in the census technique used. During mountain gorilla censuses researchers collect faecal samples from gorilla nests (where they sleep at night) to genetically identify individuals. Gorillas that are used to human presence can be directly counted.

The teams in the latest census conducted two full systematic sweeps through the forest. They covered the entire region twice from east to west. This is a physically and logistically demanding method, but it’s very thorough.

The previous census, carried out in 2011, also covered the area twice, but only one of these attempts was a full sweep – meaning it started at one end of the forest and systematically progressed towards the other end. The other sweep was disjointed in both its coverage of the area and its timing, making it easier for gorilla groups to move and avoid detection.

In Bwindi, of the estimated 459 individuals, only 196 are in groups that are used to people and can easily be counted. This means that population estimates are largely based on genetic profiles generated from night nests, and so can’t be fully accurate, because some individuals will inevitably be missed.

Censuses of Virunga mountain gorillas are more accurate because more of their gorillas are used to human presence. The most recent census there showed an increase from 458 individuals in 2010 to 604 in 2016. Most of these gorillas – 418 out of 604 – belong to groups that are used to human presence, so they can be followed daily and easily counted.

The population increase in the Virunga gorillas is strongly attributed to active conservation. This includes continuous monitoring and veterinary attention, such as the removal of snares and treatment of respiratory diseases.

Is this rise a significant number and how accurate do you think it is?

The Bwindi census results were made publicly available in a somewhat unusual way. Scientific studies generally undergo thorough peer review before they are published, which has not yet happened for these findings. This means the findings haven’t yet been properly scrutinised, which leaves the question of the gorillas’ population size open.

In addition, as mentioned above, the larger number of individuals detected in the 2018 census could be the result of the changed survey method. We therefore can’t make reliable comparisons to previous estimates from the 2011 and the 2006 censuses.

Consider that in the latest census, of the 33 gorilla groups – which weren’t used to the presence of people – only 14 (or 42%) were detected during both sweeps. Similarly, only one of 13 solitary individuals was detected in both sweeps. So, even with full, systematic sweeps, more than half of the groups and solitary individuals were missed by at least one of the two passes.

This shows we still do not have a good understanding of the actual population size of Bwindi mountain gorillas. The previous surveys are likely to have missed multiple groups and individuals, so we can’t draw firm conclusions about changes in population size. If another sweep were conducted, researchers could find more individuals, but that wouldn’t necessarily mean that the population has grown.

What we can say is that there are more mountain gorillas than we thought, which is great news.

What can be done to improve census methods?

Using the results of the two census sweeps in Bwindi, researchers will estimate the likely number of gorillas. The accuracy and precision of the estimate depend strongly on how many gorilla groups and individuals were detected in both sweeps.
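
For readers who want the arithmetic, two-sweep data of this kind is the classic input to a mark-recapture estimate: groups found in one sweep play the role of “marked” animals, and the share of them re-found in the other sweep indicates how many were missed altogether. Below is a minimal sketch in Python using Chapman’s bias-corrected version of the Lincoln-Petersen estimator. The per-sweep counts are hypothetical – the census reports 33 distinct unhabituated groups and 14 seen in both sweeps, but not how many each sweep found on its own – and the real analysis may well use richer statistical models.

# Minimal sketch: Chapman's bias-corrected Lincoln-Petersen
# mark-recapture estimator, applied to two census sweeps.
def chapman_estimate(n1, n2, m):
    """Estimate the total number of units (here, gorilla groups).

    n1 -- units detected in sweep 1
    n2 -- units detected in sweep 2
    m  -- units detected in both sweeps
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# One hypothetical split consistent with the published figures:
n1, n2, m = 24, 23, 14              # 24 + 23 - 14 = 33 distinct groups
print(chapman_estimate(n1, n2, m))  # -> 39.0: about six groups never seen

The more groups that turn up in both sweeps, the closer such an estimate (and its uncertainty interval) sits to the observed count – which is why repeated detection matters so much here.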

To firm up the census figures, future counts need to include more sweeps so that more groups and individuals are detected repeatedly. This would make the population size estimates both more accurate and more precise.

Author
Katerina Guschanski
Associate professor, Uppsala University
Disclosure statement
Katerina Guschanski receives funding from the Swedish Research Council FORMAS, The Royal Physiographic Society of Lund, and the Carl Tryggers Foundation.

TRUMP FUNNIES


Trump endorses tweet calling him a ‘monstrous, domineering behemoth’ who would inspire fear in extra-terrestrial adversaries


President an entirely different breed from ‘meek, dull’ Democrats, Fox News host says


Jon Sharman Wednesday 15 January 2020



Donald Trump has endorsed a tweet describing him as a “monstrous, domineering behemoth” capable of striking fear into the hearts, or, presumably, similarly vital organs, of potential alien adversaries.

The US president retweeted a post by Fox News presenter Greg Gutfeld which contrasted his bombastic style with that of “meek” Democrats, who the broadcaster said appeared to be an entirely different species.


Mr Trump held a rally in Wisconsin on Tuesday night at the same time his political opponents were duelling on stage in the most recent debate among Democratic presidential candidates.


While Bernie Sanders and Elizabeth Warren were arguing about sexism and Joe Biden was apologising for supporting the Iraq war, their Republican foe was telling a Milwaukee crowd that “you gotta love Trump”. Mr Trump won Wisconsin by the narrowest of margins in the 2016 presidential election.

Talking up his achievements, the president told the rally that “America is the envy of the entire world” – before pivoting swiftly into a complaint about his impending impeachment trial.

Shortly afterward and in typically robust style, the president touted his administration’s slaying of both “the animal known as al-Baghdadi” – the Isis leader – and “the world’s number-one terrorist” Qassem Soleimani.


Mr Gutfeld tweeted: “if you were a space alien, bouncing between the debate and the trump rally, you’d think these are two different species -- meek, dull creatures, and a monstrous, domineering behemoth. you’d know who to fear, and deal accordingly.”

Mr Trump also retweeted a post by conservative commentator Michael Knowles, who wrote: “If the election were held in the 24 hours after this #DemocraticDebate, Trump would win every single state, including Greenland.”


Earlier in the week Mr Gutfeld had told viewers of The Five that “Trump makes all these candidates look like runner-ups at The Bachelor, right? They seem so much smaller”.

He added: “These candidates are suffering from Trump oxygen depletion effect, which happened in 2016, Trump walks into the room, sucks out all the oxygen.”

Mr Trump’s rally was briefly interrupted by a protester, drawing a chorus of boos and “USA!” chants from the president’s supporters.

On Tuesday, however, the president was in a better mood than on a previous occasion when he complained that a security guard did not treat a female protester roughly enough. This time he merely admonished his supporters for giving the woman attention and cracked a joke.

Vladimir Putin and Bashar al-Assad have been caught on video mocking Donald Trump.

Russian president jokingly urges counterpart to invite US leader to Syria

At a recent summit in Damascus the leaders were filmed discussing the US president and using a biblical reference to joke about his personality.

During a visit to a Greek Orthodox church the pair discussed the biblical story of Saul’s conversion to Christianity on the road to Damascus.

God is said to have struck Saul, a Pharisee who persecuted Christians, blind, before the future apostle was cured and converted to the new faith.

In the video, posted to Twitter by a Russian journalist, Mr Assad is heard to say: “If Trump also travelled on this road, it would fix him.”

The leaders are both seen to laugh at the quip.

Mr Putin then urges his host to extend an invitation to the US president, adding: “He’ll come.”

“I’m ready,” says Mr Assad, to which Mr Putin replies: “I’ll tell him”.

The Russian president visited Syria for a surprise summit last week to discuss the military situation there, the Kremlin said.

Moscow is the Assad regime’s strongest backer in the Syrian civil war and the campaign against Isis.

The Greatest Unknown Intellectual of the 19th Century
Emil du Bois-Reymond proclaimed the mystery of consciousness, championed the theory of natural selection, and revolutionized the study of the nervous system. Today, he is all but forgotten. 
 
A detail of a page from du Bois-Reymond's notes to his popular lectures. Source: Staatsbibliothek zu Berlin, Preußischer Kulturbesitz (Berlin State Library, Prussian Cultural Heritage Foundation)
By: Gabriel Finkelstein

Unlike Charles Darwin and Claude Bernard, who endure as heroes in England and France, Emil du Bois-Reymond is generally forgotten in Germany — no streets bear his name, no stamps portray his image, no celebrations are held in his honor, and no collections of his essays remain in print. Most Germans have never heard of him, and if they have, they generally assume that he was Swiss.

But it wasn’t always this way. Du Bois-Reymond was once lauded as “the foremost naturalist of Europe,” “the last of the encyclopedists,” and “one of the greatest scientists Germany ever produced.” Contemporaries celebrated him for his research in neuroscience and his addresses on science and culture; in fact, the poet Jules Laforgue reported seeing his picture hanging for sale in German shop windows alongside those of the Prussian royal family.

Those familiar with du Bois-Reymond generally recall his advocacy of understanding biology in terms of chemistry and physics, but during his lifetime he earned recognition for a host of other achievements. He pioneered the use of instruments in neuroscience, discovered the electrical transmission of nerve signals, linked structure to function in neural tissue, and posited the improvement of neural connections with use. He served as a professor, as dean, and as rector at the University of Berlin, directed the first institute of physiology in Prussia, was secretary of the Prussian Academy of Sciences, established the first society of physics in Germany, helped found the Berlin Society of Anthropology, oversaw the Berlin Physiological Society, edited the leading German journal of physiology, supervised dozens of researchers, and trained an army of physicians.

He owed most of his fame, however, to his skill as an orator. In matters of science, he emphasized the unifying principles of energy conservation and natural selection, introduced Darwin’s theory to German students, rejected the inheritance of acquired characters, and fought the specter of vitalism, the doctrine that living things are governed by unique principles. In matters of philosophy, he denounced Romanticism, recovered the teachings of Lucretius, and provoked Nietzsche, Mach, James, Hilbert, and Wittgenstein. In matters of history, he furthered the growth of historicism, formulated the tenets of history of science, popularized the Enlightenment, promoted the study of nationalism, and predicted wars of genocide. And in matters of letters, he championed realism in literature, described the earliest history of cinema, and criticized the Americanization of culture.


Today it is hard to comprehend the furor incited by du Bois-Reymond’s speeches. One, delivered on the eve of the Franco-Prussian War, asked whether the French had forfeited their right to exist; another, reviewing the career of Darwin, triggered a debate in the Prussian parliament; another, surveying the course of civilization, argued for science as the essential history of humanity; and the most famous, responding to the dispute between science and religion, delimited the frontiers of knowledge.

Epistemology rarely inflames the public imagination anymore. In the second half of the 19th century, however, epistemology was one of the sciences of the soul, and the soul was the most politicized object around. When du Bois-Reymond proclaimed the mystery of consciousness, he crushed the last ambition of reason. Everyone who longed for a secular revelation was devastated by the loss. The historian Owen Chadwick put it this way: “The forties was the time of doubts, in the plural and with a small d. . . . In the sixties Britain and France and Germany entered the age of Doubt, in the singular and with a capital D.”

Jealous rivals identified du Bois-Reymond as a member of the “Berlinocracy” of the new German Empire. This was not quite fair. As a descendant of immigrants, du Bois-Reymond always felt a bit at odds with his surroundings. He had grown up speaking French, his wife was from England, and he counted Jews and foreigners among his closest friends. Even his connections to the Prussian crown prince and princess disaffected him from the regime. Du Bois-Reymond supported women, defended minorities, and attacked superstition; he warned against the dangers of power, wealth, and faith; and he stood up to Bismarck in matters of principle. His example reminds us that patriots in Imperial Germany could be cosmopolitan critics as well as chauvinist reactionaries.

He once joked to his wife that Prussian officers assumed that anyone of his eminence was an intimate of the government who regularly conversed with the Kaiser. He might have told them that he had introduced the engineer Werner Siemens to the mechanic Johann Georg Halske, or that he had launched the career of the physicist John Tyndall, or that he had sponsored the photography of Julia Margaret Cameron, or that he could recite poetry by Goethe and Hugo that he had seen in manuscript, but he was too polite to do more than excuse himself. His enthusiasts would have been pleased to learn that he did indeed present himself to his king, a considerable honor for someone who once signed a guestbook as “Emil du Bois-Reymond, frog-faddist, Berlin.”

Du Bois-Reymond’s distinction was a long time coming. Most of his life he worked in obscurity, although every so often a keen observer would perceive the significance of his methods. Ivan Turgenev, for one, based the character of Bazarov in “Fathers and Sons” on his example. Another famous student at the University of Berlin, Søren Kierkegaard, wrote:


Of all sciences physical science is decidedly the most insipid, and I find it amusing to reflect how, with the passing of time, that becomes trite which once called forth amazement, for such is the invariable lot of the discoveries inherent in “the bad Infinity.” Just remember what a stir it made when the stethoscope was introduced. Soon we shall have reached the point where every barber will use it and, when shaving you, will ask: Would you like to be stethoscoped, Sir? Then someone else will invent an instrument for listening to the beats of the brain. That will make a tremendous stir, until, in fifty years, every barber can do it. Then in a barbershop, when one has had a haircut and a shave and has been stethoscoped (for by then it will be very common) the barber will ask: Perhaps you would also like me to listen to your brain-beats?

Detecting brain-beats is not yet common practice in barbering, but it is in medicine. In this respect Kierkegaard was right: The march of technology has been steady to the point of routine. Every refinement of du Bois-Reymond’s electrophysiological apparatus, from the vacuum-tube amplifier to the microelectrode to the patch clamp, can be thought of as a footnote to his original technique. Such achievement in instrumentation is anything but small: Two years after Kierkegaard’s taunt, du Bois-Reymond contended that physiology would become a science when it could translate life processes into mathematical pictures. The imaging devices associated with medical progress — the EKG, the EEG, the EMG, and the CT, MRI, and PET scanners — seem to vindicate his prediction. But success is not a category of analysis any more than failure. To make sense of why du Bois-Reymond devoted the whole of his scientific career to one problem, it helps to understand his deepest motivations.
 
Du Bois-Reymond’s laboratory apparatus for observing the nerve signal. Reprinted from Emil du Bois-Reymond, “Untersuchungen über thierische Elektrizität, Vol. 1” (Berlin: Reimer, 1884)

The physiologist Paul Cranefield once asked a simple question: “What kind of scientist, in 1848, would promise to produce a general theory, relating the electrical activity of the nerves and muscles to the remaining phenomena of their living activity?” Cranefield’s answer was someone who believed that electricity was the secret of life. Perhaps du Bois-Reymond really did think of himself as a visionary — after all, he was born in the year in which “Frankenstein” was published. On the other hand, a scientist obsessed with electrophysiology could just as easily be deemed a practical philosopher, a misguided fool, or a complex figure.

The study of animal electricity has a long history. When du Bois-Reymond came to the topic, it was still musty with doctrines of vitalism and mechanism, forces and fluids, irritability and sensibility, and other arcana of biology. Underlying all this confusion were the elementary workings of nerves and muscles, the problem that sustained him throughout his career. The reason is plain: Nerves and muscles are the basis of thought and action. Du Bois-Reymond never gave up trying to understand animal electricity because he never gave up trying to understand himself.


This quest for identity informed the course of his science and his society, a Romantic theme of parallel development common to the first half of the 19th century. Du Bois-Reymond’s struggle to establish himself might stand for Germany’s struggle to establish itself, the success of both endeavors catching witnesses off guard. Less apparent is the more classical theme of the second half of his life: the understanding that authority implies restraint. This is the deeper significance of his biography — how his discipline failed to capture experience, how his praise of the past hid his disapproval of the present, and how his letters and lectures only hinted at the passion of his ideals. “The result of a year’s work depends more on what is struck out than on what is left in,” Henry Adams wrote in 1907. Du Bois-Reymond shared Adams’s Attic sensibility. The sad fact is that most of his countrymen did not. Du Bois-Reymond was not the first intellectual to counsel renunciation over transcendence, but he was one of the last in a nation bent on asserting itself. His caution deserves notice.
 
El Arenal, du Bois-Reymond’s summer house, circa 1860. Courtesy of Mary Rose Kissener.

How, then, could someone so famous and so important end up so forgotten? Let me suggest three kinds of answer. The first has to do with the histories that disciplines write about their origins. These usually take the form of the classical Greek myth of the Titanomachy, with a Promethean figure (the disciplinary founder) aligning with the Olympian gods of truth against an older and more barbaric generation (here symbolized by Kronos, or tradition). Psychology provides a perfect case in point. In Russia the discipline’s heroes are the two Ivans, Pavlov and Sechenov, with little discussion of how much they owed to Carl Ludwig’s studies of digestion or Emil du Bois-Reymond’s studies of nerve function. In Austria the hero is Sigmund Freud, and only recently has Andreas Mayer laid out just how much he learned from Jean-Martin Charcot’s use of hypnosis. And in the United States the hero is William James, the center of a veritable industry of scholars, none of whom quite put their finger on why he moved to Berlin in 1867. James never mentioned his debt to du Bois-Reymond, perhaps because he quit his class, or perhaps because so many of his early lectures drew from du Bois-Reymond’s writings. In each case the titanic hero breaks the line of continuity, throws over the all-devouring father, and benefits humanity with his torch of reason.

The second answer has to do with academic specialization. Du Bois-Reymond is hard to pigeonhole. This is the trouble with studying polymaths: It takes a long time to master the history of the fields in which they work, and when one does, it isn’t easy to sum up their contributions in a catchphrase. As a result historians have tended to reduce the complexity of Imperial German culture to caricatures of creepiness on the one hand (Nietzsche, Wagner, and “the politics of despair”) and kitsch on the other (nature, exercise, domesticity, and Christmas). Such distortions fail to capture the main feature of the age, which was excellence in science, technology, and medicine. After all, it’s not just du Bois-Reymond who has been forgotten — pretty much every German scientist of the 19th century has been forgotten as well.


To my mind du Bois-Reymond provided the best explanation for his oblivion. Reflecting on how few of his generation remembered Voltaire, he suggested that “the real reason might be that we are all more or less Voltairians: Voltairians without even knowing it.” The same holds true for du Bois-Reymond: He is hidden in plain sight.

Du Bois-Reymond reminds us that individuals mark their times as much as their times mark them. “If you want to judge the influence that a man has on his contemporaries,” the physiologist Claude Bernard once said, “don’t look at the end of his career, when everyone thinks like him, but at the beginning, when he thinks differently from others.” Bernard’s comment regards innovation as a virtue. By this measure du Bois-Reymond’s contributions are as noble as any. But du Bois-Reymond taught a lesson of even greater importance, one that matters now as much as ever: how to contend with uncertainty.

Gabriel Finkelstein is Associate Professor of History at the University of Colorado Denver and the author of “Emil du Bois-Reymond: Neuroscience, Self, and Society in Nineteenth-Century Germany.”
The 20th-Century Obelisk, From Imperialist Icon to Phallic Symbol
Amid all the imperial aspiration, wooly-minded New Age mythologizing, and pure unadulterated commerce, the obelisk stands tall.
Left: Place de la Concorde. Number 6 in the series Curiosités Parisiennes, early 20th century. Postcard; offset lithography. Courtesy Leonard A. Lauder. Right: Monolite Mussolini Dux, via Wikimedia Commons
By: Brian A. Curran, Anthony Grafton, Pamela O. Long, and Benjamin Weiss

Previous centuries did not miss the fact that obelisks make a visual rhyme with a certain male body part. In the 1520s, for example, the brilliant poet and pornographer Pietro Aretino was quite specific about the association, using the same word, guglia, for both. Even the sex-obsessed and sex-denying 19th century made the connection with greater frequency than those looking for evidence of Victorian prudery might expect.

There is a faint but persistent undercurrent in 19th-century scholarship about the relationship between obelisks and the phallus, though that connection was usually relegated safely to the far distant past. Hargrave Jennings, who hinted at such associations in his pamphlet, “The Obelisk,” was also the author of a series of privately printed books documenting similar ancient monuments throughout the world, part of his attempt to recover the legacy of what he saw as a worldwide prehistoric phallic religion. But in this context the obelisk was a phallus, not a penis. Occasionally, the association could become a bit more explicit, as when the poet Algernon Charles Swinburne noted that: “Her majesty has set up — I should say erected — a phallic emblem in stone; a genuine Priapic erection like a small obelisk.” But in the 19th century such pointed talk was reserved for letters and pub chat. That the obelisk had represented a phallus in antiquity was an intellectually acceptable, if not entirely respectable, idea; that an obelisk might still be one today was a concept best reserved for private moments.

It was Sigmund Freud who let the cat out of the bag. Although Freud did not include obelisks in the extensive and imaginative catalog of phallic symbols — “things that are long and up-standing” — that occupies many pages of both his “Interpretation of Dreams” and “Introductory Lectures on Psycho-Analysis,” he might as well have. For he did include tree trunks, along with knives, umbrellas, water-taps, fountains, extensible pencils, and zeppelins. In a rare moment of interpretive unanimity, Carl Gustav Jung concurred, specifically noting the obelisk’s “phallic nature” in his “Psychology of the Unconscious.” The two great men had spoken, and from then on nearly everyone who cared to make the connection seems to have done so. In 1933 Nathanael West’s Miss Lonelyhearts, sitting in a park, hungover and quite possibly suffering a concussion, became alarmed at an obelisk whose shadow “lengthened in rapid jerks, not as shadows usually lengthen,” and which “seemed red and swollen in the dying sun, as though it were about to spout a load of granite seed.” A penis, not a phallus. In a more popular context, in the 1956 Biblical epic, “The Ten Commandments,” Cecil B. DeMille made the erection of a great obelisk the centerpiece of an early scene that established the testosterone-fueled rivalry between Yul Brynner’s strutting Ramesses II and Charlton Heston’s chest-heaving Moses.


Scholars were less vivid in their language, but in 1948 the establishment Egyptologist Henri Frankfort declared — still, it’s true, in the discreet context of an endnote — that “it is likely that the obelisk did not serve merely as an impressive support for the stylized bnbn stone which formed its tip, but that it was originally a phallic symbol at Heliopolis, the ‘pillar city.’” By mid-century, what was once a whispered, almost occult association had become practically banal. In 1950 the psychiatrist Sándor Lorand could include a young boy’s dream about New York’s Cleopatra’s Needle in his analysis of the early stages of fetishistic obsession without even feeling the need to spell out exactly what role the obelisk might play.

Today it is not Egyptian pharaohs, Roman emperors, or Renaissance popes who leap to mind when people stumble across an obelisk; it is Freud. The subtext has become the text itself. Russell Means, the Lakota/Oglala activist who led the 1973 takeover at Wounded Knee, was making a political point when he described the obelisk to Custer at Little Big Horn as “the white man’s phallic symbol.” But the designers who placed the Washington Monument (pointy side down) between a spread pair of disembodied legs on the cover of a mainstream, trade paperback about the seamy underside of Washington, D.C., likely had no political agenda. They were just trying to sell books. The novel, naturally, was called “The Woody.”

In historical terms this change has been blindingly swift. Obelisks retained their original meaning for thousands of years. Yet it is only a matter of decades from the slightly naughty French postcard of the early 1900s that features a policeman inquiring of a young woman, who clings to the monument in the Place de la Concorde, whether she has finished “polishing the obelisk,” to the moment on December 1, 1993, when the clothing manufacturer Benetton and the Paris chapter of ActUp marked World AIDS Day by putting a 22-meter pink condom on the very same obelisk. That, apparently, made the implicit a bit too explicit; the condom had not been approved by the Ministry of Culture and was gone within hours. Time, however, moves ever more quickly, and a vivid image cannot be kept down; in 2005 Buenos Aires decorated its own gigantic obelisk-shaped monument in a similar manner — this time with the full support of all relevant governmental bodies.
The ancient Egyptian ‘Luxor Obelisk’ in Paris wearing a giant pink condom to advertise World Aids Day. Image: ActUp

But sex is not the only association obelisks carried through the 20th century. They have become increasingly caught up in the mystical stew of theosophy, pagan revival, and the occult that has come together in the New Age movements of the last few decades. This has proven fertile ground for the revival of the more outrageous and conspiratorial Victorian writers on obelisks and ancient Egypt. Their books are now, paradoxically, much easier to find and buy than major works of 19th-century Egyptology. There is, to this day, no English translation of Champollion’s “Précis du systême hiéroglyphique,” his summa on Egyptian writing, or even of his short letter to Joseph Dacier, the key document explaining his ideas about hieroglyphs; but works by marginal figures like Hargrave Jennings and John Weisse, who found evidence of ancient Freemasons wandering the upper Midwest, have been reprinted and are readily available. Around the world, New Age shops and websites nearly all sport obelisks among the crystals, pyramids, and other mystical gewgaws available to channel good energy or dilute and disperse bad. The obelisks are usually advertised as effective at dispelling negative forces, such as “trapped energy, which could cause destruction like volcanoes.”

Hollywood saw this mystical resurgence early and wove it into the science-fiction movies and television shows that proliferated in the 1960s. The mysterious resonating monolith that drives the plot of “2001: A Space Odyssey” is not, technically, an obelisk, but plays perfectly the otherworldly role ascribed to Egyptian obelisks in the farther reaches of the New Age. “2001” was one of the sensations of the spring of 1968; later that year the creators of the television series “Star Trek” were much more explicit, when, in shameless emulation, they included an obelisk in “The Paradise Syndrome.” That episode features a wise and peace-loving group of American Indians who, at some point in the distant past, had been transported to a faraway planet. There they live in safety, protected by strange forces that emanate from an obelisk that sits on a small altar in the woods. Unlike the “2001” monolith, this one actually looks like a short, fat obelisk and even sports hieroglyph-like inscriptions.

This manifold expansion of meaning and association is characteristic of the whole 20th century. The very explosion of monument building in the late 19th and early 20th centuries probably helped accelerate this process. Obelisks and obelisk-like monuments sprouted up everywhere in the decades on either side of 1900. Many, to be sure, were dedicated to victory and commemoration, but the sheer number — nearly every city in Europe and the Americas has a brace of them — meant that obelisks were applied to ever-stranger purposes. In 1896, at Pennsylvania State University, Magnus C. Ihlseng, a geology professor, found himself so pestered with questions about the qualities of the stones found in Pennsylvania that he organized the construction of a 33-foot obelisk, made up of all “the representative building stones of the Commonwealth, and thus to furnish in a substantial form an attractive compendium of information for quarrymen, architects, students, and visitors.” The stones are organized to reflect the geology of the region, with the oldest ones near the base.

Obelisks took on similarly untraditional forms throughout the century. The 1922 competition to design a new headquarters for the Chicago Tribune drew two different proposals for obelisk-shaped towers, including one from Chicago architect Paul Gerhardt, who also submitted a proposal for a building shaped like a gigantic papyrus column. Neither won. Although the idea of an obelisk as haven for office workers seems a long way indeed from Egyptian solar cults, such a building would have been very appropriate to the well-nigh pharaonic ego of the Tribune’s publisher, Robert McCormick. Obelisks appeared on every scale and in every imaginable context. Smaller sorts of executives could obtain smaller sorts of obelisks. In the 1960s, for example, the Injection Molders Supply Company offered 20-inch plastic desk obelisks for the “plastics executive who has nearly everything.”

The sign for the Luxor Hotel and Casino in Las Vegas, one of a series of thematic fantasylands along the Strip — New York! Venice! Egypt! — is a giant obelisk, complete with accurate hieroglyphs that celebrate the immortal kingship of Ramesses II. The obelisk lures people to the pyramid-shaped hotel, whose check-in desk can be reached via a drive-through sphinx. Inside, guests can find (in addition to floor shows and slot machines) a remarkably accurate reconstruction of King Tut’s tomb as well as a New Age-inflected movie experience about the mysteries of the pyramids.

Even as new obelisk-shaped monuments sprouted up, the meaning of existing ones shifted. The Bunker Hill Monument is a case in point. It was constructed in the 1820s and 30s as a memorial to a Revolutionary War battle and to the very idea of liberty. So it remained, but by the end of the 19th century it had become an even more powerful symbol of place — of Charlestown, Massachusetts. It became the emblem of the city (and after annexation by Boston, the neighborhood), appearing on shop signs, the bottles of the local pickle packager, and the jackets of high-school students. By the 1990s the identification of neighborhood and structure was so complete that when a dramatic new cable-stayed bridge was built across the Charles River from Boston’s North End to Charlestown, the designer, Swiss engineer Christian Menn, fashioned the bridge’s towers in the shape of obelisks. His reference point was the monument itself, a symbol of place, rather than the ideas the monument was originally intended to embody. The bridge’s towers and the monument now form a trio of obelisks across the Charlestown skyline, reinforcing the association yet further.

But symbolism can also come full circle. In 1998 Boston’s Institute of Contemporary Art commissioned a major piece by the artist Krzysztof Wodiczko, who has specialized in gigantic projections, generally on the sides of buildings. In Boston he chose the Bunker Hill Monument as his canvas. During the 1970s and 80s, Charlestown, then a tight-knit and somewhat insular neighborhood, had been the scene of violent gang warfare, accompanied by a rash of murders. These became known as the “code of silence” killings, as the police consistently found that eyewitnesses were unwilling to speak about the crimes. By the late 1990s the neighborhood had changed, tensions had calmed, and Wodiczko convinced people to talk about the murders for his projection. For six nights, the monument itself seemed to speak with the voices of many of the victims’ mothers, who told of the murders and the murdered — of freedom, loss, and sorrow. It was a plea for peace and liberty, but on a much more personal and visceral level than the monument’s designers probably envisioned.
The artist Krzysztof Wodiczko chose the Bunker Hill Monument as his canvas for a gigantic projection commissioned by the Institute of Contemporary Art. Image: ©Krzysztof Wodiczko

Other 20th-century artists found inspiration in the obelisk’s form itself. Barnett Newman’s great, enigmatic Broken Obelisk — a huge sculpture of COR-TEN steel consisting of a pyramid whose apex is just barely kissed by the point of a broken, upturned obelisk — may be the most inscrutable and moving “obelisk” of the century. The whole rises nearly 30 feet (nine meters) before the broken shaft of the steel obelisk trails off into the air. The sculpture not only embodies the very form of an obelisk, but even maintains the curious balance of great size and delicacy that characterizes the Egyptian original. The effect is reinforced by the fact that the contact point between the massive pyramid and the only slightly less massive obelisk is but a few square centimeters. Although Newman claimed some inspiration from his own childhood memories of the Central Park obelisk, he was unwilling to assign the piece specific meaning. Always a bit gnomic about his work, Newman wrote to John de Menil, who acquired one of the three versions of the sculpture, only that: “it is concerned with life and I hope I have transformed its tragic content into a glimpse of the sublime.” Perhaps in response, de Menil installed the sculpture as a monument to Dr. Martin Luther King, Jr.


Yet amid this very modern cacophony of meanings, the traditional association of obelisks with political power has never been drowned out completely. Any number of 20th-century states, cities, and rulers tried to turn obelisks to their own political or commemorative advantage. Nearly all seem to have suffered from cases of equivocal symbolism. In the early 1920s, the government of the newly born Czechoslovak state hired the Slovenian architect Jože Plečnik to oversee the Prague Castle, which was being transformed into the seat of the new government. One of Plečnik’s ideas was to raise a gigantic obelisk — a true monolith — as a combined celebration of the new state and memorial to those who had perished in World War I. The obelisk, which would have been one of the largest ever erected, fell down an embankment during transit from the quarry in southern Czechoslovakia and broke in two — an event that in recent years has been taken as an ominous prefiguring of the eventual severing of the state itself. The salvageable half, still a very respectable 17 meters, stands by St. Vitus’s cathedral in the inner court of the castle. Fifteen years later, in 1936, the city of Buenos Aires also built a huge obelisk — this one of concrete and steel — in commemoration both of the city’s 400th anniversary and its arrival, in the early 20th century, as one of the world’s great cities. The obelisk became a symbol of the city, but it, too, took on a more sinister cast after Argentina began its long slide into political darkness. One morning in 1974, in what was advertised as an attempt to calm the city’s notorious traffic, Porteños awoke to find the obelisk bearing a great lighted sign that read “Silencio es salud” — “Silence is Health.” In the age of Argentina’s right-wing dictatorship, it didn’t take much to realize that the banner referred not only to car horns.


In Latin America obelisks even made the Marxian turn from tragedy to farce. Also in 1936, Rafael Trujillo, dictator of the Dominican Republic, ordered up a gigantic obelisk for Santo Domingo, part of his grand project to modernize and remake the city, which was, of course, renamed Ciudad Trujillo. The traditional imperial symbolism was given a nicely 20th-century sexual overtone in the dedication ceremonies, during which Jacinto Peynado, Head of the Pro-Erection Committee, praised the monument as mirroring Trujillo’s own “superior natural gifts.” At the same time, in a blatant sop to the dictator’s American sponsors, the seaside road on which the obelisk stands, the Malecón, was renamed Avenida George Washington. Trujillo’s government fell in 1961, and, in a pointedly anti-phallic gesture, the obelisk now bears a mural by Elsa Núñez and Amaya Salazar that celebrates the Mirabal sisters, three women who effectively martyred themselves in 1960 to help end the dictatorship. The murals were unveiled in 1997, on International Women’s Day.

The 20th-century political leader who adopted the obelisk with the most historically informed style was Benito Mussolini. In 1932 the Italian dictator had an Art Deco–inflected “obelisk” raised on the banks of the Tiber, north of Rome’s old city center. The huge monolith bears no hieroglyphs, just the words “Mussolini Dux” in great blocky letters that run down the monument’s side. The largest piece of Carrara marble ever quarried, the obelisk was the centerpiece of the Foro Mussolini (now the Foro Italico), a grand complex of sports stadia and arenas intended as part of the Fascist campaign to encourage physical fitness — itself part of Mussolini’s plan to restore Italy’s imperial power.

It was a tall order. Italy, seat of the original European empire, came very late to the new imperialism of the 19th and 20th centuries. There had been no country of Italy at all until the 1870s, when Giuseppe Garibaldi united a fractious group of principalities into an equally fractious kingdom. The new country almost immediately began trying to acquire overseas colonies. Italy turned first to Africa, but by the end of the 19th century other European powers had neatly parceled out almost the whole continent. Only Liberia (effectively a protectorate of the United States) and Africa’s northeast corner were not yet part of the colonial system. So Italy focused its attention on the “Horn” of Africa. In short order it acquired both Eritrea and what is now the southern part of Somalia. The Italians tried to conquer Ethiopia as well, but met with an embarrassing defeat in 1896. There matters remained until Mussolini came to power in the 1920s. He renewed Italy’s push for empire, and in 1935 managed to defeat Ethiopia.

Mussolini got lucky. Among the benefits of invading Ethiopia was that, like Egypt, it was the seat of an ancient culture, one that had itself been a powerful imperial force. The kingdom of Aksum flourished from the first to the eighth centuries CE, growing rich from control of long-distance trade from the interior of Africa to the Mediterranean and Indian Oceans. Some of that wealth was spent on monuments. In the fourth century CE, the kings of Aksum had erected a series of enormous stone stelae at their capital. The great standing stones are not obelisks exactly, but they were close enough for Mussolini’s purposes. In 1937 he ordered one brought to Rome. He originally planned to set it up in the E.U.R., a model city of Fascist planning to the west of Rome’s center. Now a tourist attraction in its own right, the neighborhood, like a painting by Giorgio de Chirico come to life, captures better than any other place the aesthetic fantasies that inspired Europe’s fascists. Supremely orderly, it is ancient-seeming yet newly made at the same time — a calm, predictable place in a famously noisy, busy, and very unpredictable city. In the end, though, it was decided that the Aksum obelisk was better suited to the city of Rome proper, and Mussolini had it set up in the Piazza di Porta Capena, near the Circus Maximus, in front of the Ministry of the Colonies. E.U.R. received a gigantic “obelisk” dedicated to Guglielmo Marconi instead.

Twenty-four meters (78 feet) high and weighing 160 tons, the monument from Aksum was very much worthy of comparison to the ancient emperors’ obelisks. It is made of nepheline syenite, a hard rock similar to granite that was probably quarried about seven miles west of Aksum, at Wuchate Golo. The monument was one of many erected as part of a tomb complex in the ancient city; more than 200 survive. The largest of these originally stood 33 meters high (over 100 feet) and weighed about 550 metric tons, making it probably the largest monolith ever erected. At ground level its surface is carved with a false door; tiers of false “windows” climb to the top, as in a multi-storied building, perhaps representing royal residences in the next world. These stelae seem to have functioned as giant markers for subterranean tombs, probably those of the Aksumite rulers who governed just before the royal line forsook its polytheistic religion to adopt Christianity. The Aksum kingdom faded by the eighth century, and, like Rome’s obelisks, many of the stelae fell in subsequent centuries. Among the fallen was Mussolini’s, which was toppled in the 16th century, during a Muslim rebellion in Christian Aksum. It broke in three pieces, making it easier to transport when the Italians carried it overland to the coast.

The obelisk stood in Rome for only 10 years before Mussolini’s fall brought a new government and an admission that the obelisk should never have been taken. In 1947 the new Italian government agreed to return all loot taken from Ethiopia during the occupation. But they didn’t send the obelisk back. In 1956 the Italians again signed a treaty with Ethiopia, agreeing that the obelisk “was subject to restitution,” though leaving it ambiguous who was to pay the bill to send the stele home. Again, nothing happened. In 1970 the Ethiopian parliament threatened to cut off diplomatic relations with Italy unless the obelisk was returned. Again, nothing. The matter faded somewhat after the overthrow of Emperor Haile Selassie in 1974 brought a long period of political chaos, but the campaign was renewed in the 1990s by a group of Ethiopian and Western intellectuals. When the Italian government finally committed itself to returning the monument, the project was delayed yet again by the 1998–2000 border war between Eritrea and Ethiopia. (Aksum is near the border.) But the case for return was strengthened in 2002 when a lightning strike damaged the obelisk, instantly demolishing the argument that it would be safer in Italy than in Ethiopia.

Even today, moving an obelisk is an engineering challenge. First the stele had to be dismantled. A team of Italian and Ethiopian experts had assembled in 2000 to study the problem. Giorgio Croci, the engineer in charge, explained that the team was anxious to avoid creating cracks or doing any further damage, and planned to separate the pieces where they had been joined when the obelisk was erected in 1937. The engineers designed a scheme using computer-guided jacks, and, in 2003, finally disassembled the obelisk. The dismantling was accompanied by the cheers of a jubilant crowd of Ethiopians, and the monument was taken to a warehouse near Rome’s airport, very close to the spot where Nero’s obelisk ship had been turned into a public monument two thousand years before.
Image: November 8, 2003: the Aksum obelisk being disassembled

Mussolini’s engineers had brought the obelisk to Rome by road and ship. This was no longer possible, as the roads in Ethiopia had disintegrated and the port used in the 1930s was now in Eritrea, a country that after years of war would never have permitted the Ethiopian monument to pass through its territory. Air transport was the only solution. The Italian company in charge, Lattanzi, described the obelisk as “the largest, heaviest object ever transported by air.” So big and heavy, in fact, that only two planes in the world were large enough to carry it: a Russian Antonov An-124 or an American Lockheed C-5A Galaxy. Both are themselves imperial objects, designed at the peak of the Cold War to ferry material to the far-flung proxy wars of the United States and the Soviet Union. The planes are so huge that the airstrip at Aksum had to be upgraded to handle the An-124 that took the monument home. Heaters were installed in the cargo bay to protect the stones from damage by freezing. Finally, nearly 70 years after the monument left Ethiopia, and almost 60 since the Italians first agreed to return it, on April 19, 2005, the first piece arrived back in Aksum. The other two pieces followed within the week. National celebrations commemorated the return, but there were critics as well. The move ultimately cost six million euros, and some Ethiopians questioned the amount of money spent on the project in a country without a stable and secure food supply. Others, mostly in southern Ethiopia, insisted that the cause célèbre was regional, rather than national — that it made little difference to them where the obelisk came to rest. Finally, in 2008, the obelisk rose again at Aksum — the very first wandering monolith ever to return home.

Amid all this imperial aspiration, wooly-minded New Age mythologizing, and pure unadulterated commerce, the real obelisks — their message no longer hidden behind a veil of allegory, but easily legible to any who can read hieroglyphs — still stand. Those obelisks are, more often than not, far from their original homes, but most are now equally at home in their new locations. They are majestic embodiments of the ancient culture that created them, but they are just as much the bearers of all the other ideas that have accreted to them over the many centuries since. Obelisks do not have a single meaning; they carry all the meanings ever applied to them.

Even so, the stones can speak for themselves. In Rome’s Piazza del Popolo, still a symbolic gateway to the city, stands the single obelisk conjured from the earth by order of Seti I in the 13th century BCE — more than three thousand years ago. It was completed by Ramesses II, who had it erected at Heliopolis, the city of the sun. It stood there for more than a thousand years, until in 10 BCE a new emperor — a conqueror — Augustus, wrenched it from its native place and carried it across the sea to Rome. For five centuries it graced the Circus Maximus, until the new empire fell, and with it the obelisk. It broke and sank into the circus’s marshy ground. There it waited. Nearly a millennium later, it was excavated by order of an imperial pope and, in 1587, carried to its present site. The obelisk has stood there now for more than four centuries — four times longer than the Republic of Italy itself. Yet through all of the changes — geographical, intellectual, religious — the obelisk has remained the same. From time immemorial it has proclaimed, to all who could understand, the eternal fame of Pharaoh Ramesses II:


Horus-Falcon, Strong Bull, beloved of Maat;
Re whom the Gods fashioned, furnishing the Two Lands;
King of South and North Egypt,
Usimare Setepenre, Son of Re, Ramesses II,
Great of name in every land, by the magnitude of his victories;
King of South and North Egypt, Usimare Setepenre,
Son of Re, Ramesses II, given life like Re.

Perhaps not the sort Ramesses expected, and maybe a bit delayed, but immortality nonetheless.

Brian A. Curran was a renowned art historian and professor at Pennsylvania State University. Anthony Grafton is Henry Putnam University Professor of History at Princeton University. Pamela O. Long is an independent historian who has published widely in medieval and Renaissance history of science and technology. Benjamin Weiss is Director of Collections at the Museum of Fine Arts, Boston. This article is excerpted from their book “Obelisk: A History.”
2001 A SPACE ODYSSEY 
Did HAL Commit Murder?
The HAL 9000 computer and the ethics of murder by and of machines.

 
"He had begun to make mistakes, al­though, like a neurotic who could not observe his own symptoms, he would have denied it." Image: HAL 9000, via Wikimedia Commons
By: Daniel C. Dennett / Introduction by David G. Stork

Last month at the San Francisco Museum of Modern Art I saw “2001: A Space Odyssey” on the big screen for the 47th time. The fact that this masterpiece remains on nearly every relevant list of “top ten films” and is shown and discussed over a half-century after its 1968 release is a testament to the cultural achievement of its director Stanley Kubrick, writer Arthur C. Clarke, and their team of expert filmmakers.

As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the deaths of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other prominent voices have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.

And what about the converse: humans “killing” future computers by disconnection? When astronauts Frank and Dave retreat to a pod to discuss HAL’s apparent malfunctions and whether they should disconnect him, Dave imagines HAL’s views and says: “Well I don’t know what he’d think about it.” Will it be ethical — not merely disturbing — to disconnect (“kill”) a conversational elder-care robot from the bedside of a lonely senior citizen?

Indeed, future developments in AI pose profound challenges, first and foremost to our economy, by automating away millions of jobs in manufacturing, food service, retail sales, legal services, and even medical diagnosis. The naive bromides of an invisible economic hand shepherding “retrained workers” into alternative and new classes of jobs are dangerously overoptimistic. Equally overblown are the “sci-fi” fears of AI run amok. The most pressing dangers of AI will be due to its deliberate misuse for purely personal or “human” ends.
David G. Stork is the editor of “HAL’s Legacy,” from which Daniel Dennett’s essay is culled.

At the philosophical center of these developments are the notions of responsibility and culpability, or as philosopher Daniel Dennett asks, “Did HAL commit murder?” There are few philosophers as knowledgeable and insightful about these fascinating problems, and who write as clearly and directly. His chapter, published nearly 25 years ago in “HAL’s Legacy,” a collection of writings that explore HAL’s tremendous influence on the research and design of intelligent machines, remains an indispensable introduction to the thorny problems of “murdering” by computers and the “murder” of computers. He focuses on the central concept of mens rea, or “guilty mind,” asking how we would ever know when a computer would be so self-aware as to satisfy the legal criterion to make him (it) guilty of murder. My view is that for quite some time the mens rea most relevant will be of some power-hungry human villain using AI, rather than of some conscious, autonomous, and evil AI system itself … but who knows?

Everyone who thinks about ethics, is concerned about the future dangers of AI, and wants to support efforts to keep us all safe should read on…

… and of course see “2001” again.

—David G. Stork, 2020

The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated December 9, 1981, from the Philadelphia Inquirer — not the National Enquirer — with the headline “Robot killed repairman, Japan reports.”

The story was an anticlimax. At the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, which crushed him to death. The repairman had failed to follow instructions for shutting down the arm before he entered the workspace. Why, indeed, was this industrial accident in Japan reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that — in the public imagination at least — this was no ordinary machine. This was a robot, a machine that might have a mind, might have evil intentions, might be capable, not just of homicide, but of murder. Anglo-American jurisprudence speaks of mens rea — literally, the guilty mind:

To have performed a legally prohibited action, such as killing another human being, one must have done so with a culpable state of mind, or mens rea. Such culpable mental states are of three kinds: they are either motivational states of purpose, cognitive states of belief, or the nonmental state of negligence. (Cambridge Dictionary of Philosophy, 1995)

The legal concept has no requirement that the agent be capable of feeling guilt or remorse or any other emotion; so-called cold-blooded murderers are not in the slightest degree exculpated by their flat affective state. Star Trek’s Spock would fully satisfy the mens rea requirement in spite of his fabled lack of emotions. Drab, colorless — but oh so effective — “motivational states of purpose” and “cognitive states of belief” are enough to get the fictional Spock through the day quite handily. And they are well-established features of many existing computer programs.

When IBM’s computer Deep Blue beat world chess champion Garry Kasparov in the first game of their 1996 championship match, it did so by discovering and executing, with exquisite timing, a withering attack, the purposes of which were all too evident in retrospect to Kasparov and his handlers. It was Deep Blue’s sensitivity to those purposes and a cognitive capacity to recognize and exploit a subtle flaw in Kasparov’s game that explain Deep Blue’s success. Murray Campbell, Feng-hsiung Hsu, and the other designers of Deep Blue didn’t beat Kasparov; Deep Blue did. Neither Campbell nor Hsu discovered the winning sequence of moves; Deep Blue did. At one point, while Kasparov was mounting a ferocious attack on Deep Blue’s king, nobody but Deep Blue figured out that it had the time and security it needed to knock off a pesky pawn of Kasparov’s that was out of the action but almost invisibly vulnerable. Campbell, like the human grandmasters watching the game, would never have dared consider such a calm mopping-up operation under pressure.



Deep Blue, like many other computers equipped with artificial intelligence (AI) programs, is what I call an intentional system: its behavior is predictable and explainable if we attribute to it beliefs and desires — “cognitive states” and “motivational states” — and the rationality required to figure out what it ought to do in the light of those beliefs and desires. Are these skeletal versions of human beliefs and desires sufficient to meet the mens rea requirement of legal culpability? Not quite, but if we restrict our gaze to the limited world of the chessboard, it is hard to see what is missing. Since cheating is literally unthinkable to a computer like Deep Blue, and since there are really no other culpable actions available to an agent restricted to playing chess, nothing it could do would be a misdeed deserving of blame, let alone a crime of which we might convict it. But we also assign responsibility to agents in order to praise or honor the appropriate agent.
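To make this concrete, consider a deliberately toy sketch of predicting an intentional system’s behavior from attributed beliefs and desires. Every name and move in it is invented for illustration; nothing in it comes from Deep Blue’s actual machinery:

```python
# Toy sketch of the "intentional stance": predict an agent's next action
# by attributing beliefs and desires to it and assuming it acts
# rationally on them. Purely illustrative; no real engine works this way.

def predict_action(beliefs, desires, options):
    """Return the option a rational agent would choose, given the beliefs
    and desires we attribute to it."""
    def expected_value(option):
        # Credit each desire the agent believes this option would advance.
        return sum(weight for goal, weight in desires
                   if beliefs.get((option, goal), False))
    return max(options, key=expected_value)

# "Cognitive states": what the agent believes each move accomplishes.
beliefs = {("Qxf7+", "attack the king"): True,
           ("Nf3", "develop safely"): True}
# "Motivational states": what the agent wants, and how strongly.
desires = [("attack the king", 5), ("develop safely", 2)]

print(predict_action(beliefs, desires, ["Qxf7+", "Nf3"]))  # -> Qxf7+
```

The point of the exercise is that the prediction uses only the attributed states plus an assumption of rationality; nothing about the agent’s innards needs to be known.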

Who or what, then, deserves the credit for beating Kasparov? Deep Blue is clearly the best candidate. Yes, we may join in congratulating Campbell, Hsu and the IBM team on the success of their handiwork; but in the same spirit we might congratulate Kasparov’s teachers, handlers, and even his parents. And, no matter how assiduously they may have trained him, drumming into his head the importance of one strategic principle or another, they didn’t beat Deep Blue in the series: Kasparov did.

Deep Blue is the best candidate for the role of responsible opponent of Kasparov, but this is not good enough, surely, for full moral responsibility. If we expanded Deep Blue’s horizons somewhat, it could move out into the arenas of injury and benefit that we human beings operate in. It’s not hard to imagine a touching scenario in which a grandmaster deliberately (but oh so subtly) throws a game to an opponent, in order to save a life, avoid humiliating a loved one, keep a promise, or … (make up your own O. Henry story here). Failure to rise to such an occasion might well be grounds for blaming a human chess player. Winning or throwing a chess match might even amount to commission of a heinous crime (make up your own Agatha Christie story here). Could Deep Blue’s horizons be so widened?

Deep Blue is an intentional system, with beliefs and desires about its activities and predicaments on the chessboard; but in order to expand its horizons to the wider world of which chess is a relatively trivial part, it would have to be given vastly richer sources of “perceptual” input — and the means of coping with this barrage in real time. Time pressure is, of course, already a familiar feature of Deep Blue’s world. As it hustles through the multidimensional search tree of chess, it has to keep one eye on the clock. Nonetheless, the problems of optimizing its use of time would increase by several orders of magnitude if it had to juggle all these new concurrent projects (of simple perception and self-maintenance in the world, to say nothing of more devious schemes and opportunities). For this hugely expanded task of resource management, it would need extra layers of control above and below its chess-playing software. Below, just to keep its perceptuo-locomotor projects in basic coordination, it would need to have a set of rigid traffic-control policies embedded in its underlying operating system. Above, it would have to be able to pay more attention to features of its own expanded resources, being always on the lookout for inefficient habits of thought, one of Douglas Hofstadter’s “strange loops,” obsessive ruts, oversights, and dead ends. In other words, it would have to become a higher-order intentional system, capable of framing beliefs about its own beliefs, desires about its desires, beliefs about its fears about its thoughts about its hopes, and so on.
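One way to picture what “higher-order” adds is nesting: an intentional state whose content is itself an intentional state. The data-structure cartoon below (its example contents allude to the film) is only an illustration of the idea, not a claim about how such states would actually be implemented:

```python
# Nested intentional states: a belief whose content is another belief or
# desire. A data-structure cartoon of higher-order intentionality.
from dataclasses import dataclass
from typing import Union

@dataclass
class Desire:
    holder: str
    content: Union[str, "Belief", "Desire"]

@dataclass
class Belief:
    holder: str
    content: Union[str, "Belief", "Desire"]

# First-order: a belief about the world.
b1 = Belief("HAL", "the AE-35 unit will fail within 72 hours")

# Second-order: a belief about another agent's desire.
b2 = Belief("HAL", Desire("Dave", "disconnect HAL"))

# Third-order and beyond: beliefs about beliefs about desires, and so on.
b3 = Belief("Dave", Belief("HAL", Desire("Dave", "disconnect HAL")))
```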



Higher-order intentionality is a necessary precondition for moral responsibility, and Deep Blue exhibits little sign of possessing such a capability. There is, of course, some self-monitoring implicated in any well-controlled search: Deep Blue doesn’t make the mistake of reexploring branches it has already explored, for instance; but this is an innate policy designed into the underlying computational architecture, not something under flexible control. Deep Blue can’t converse with you — or with itself — about the themes discernible in its own play; it’s not equipped to notice — and analyze, criticize, and manipulate — the fundamental parameters that determine its policies of heuristic search or evaluation. Adding the layers of software that would permit Deep Blue to become self-monitoring and self-critical, and hence teachable, in all these ways would dwarf the already huge Deep Blue programming project — and turn Deep Blue into a radically different sort of agent.
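That innate policy can be made concrete with a familiar programming device: a search that caches positions it has already evaluated, as real chess programs do with a transposition table. The sketch below is a generic stand-in, not Deep Blue’s code:

```python
# A fixed, designed-in policy against re-exploring explored branches:
# memoized game-tree search. Generic illustration only.

def search(position, depth, moves, apply_move, evaluate, cache=None):
    """Negamax search that never revisits an already-evaluated position."""
    if cache is None:
        cache = {}
    key = (position, depth)
    if key in cache:                      # the "innate policy" in one line
        return cache[key]
    legal = moves(position)
    if depth == 0 or not legal:
        value = evaluate(position)        # score from the mover's view
    else:
        value = max(-search(apply_move(position, m), depth - 1,
                            moves, apply_move, evaluate, cache)
                    for m in legal)
    cache[key] = value
    return value
```

The caching policy is fixed when the program is written; nothing in the program can inspect, criticize, or revise it, which is just the contrast between innate policy and flexible control.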

HAL purports to be just such a higher-order intentional system — and he even plays a game of chess with Frank. HAL is, in essence, an enhancement of Deep Blue equipped with eyes and ears and a large array of sensors and effectors distributed around Discovery 1. HAL is not at all garrulous or self-absorbed; but in a few speeches he does express an interesting variety of higher-order intentional states, from the most simple to the most devious.
In one iconic scene from “2001,” Dave asks HAL to open a pod bay door on the spacecraft, to which HAL responds, “I’m sorry, Dave, I’m afraid I can’t do that.”


HAL: Yes, it’s puzzling. I don’t think I’ve ever seen anything quite like this before.

HAL doesn’t just respond to novelty with a novel reaction; he notices that he is encountering novelty, a feat that requires his memory to have an organization far beyond that required for simple conditioning to novel stimuli.


HAL: I can’t rid myself of the suspicion that there are some extremely odd things about this mission.

HAL: I never gave these stories much credence, but particularly in view of some of the other things that have happened, I find them difficult to put out of my mind.

HAL has problems of resource management not unlike our own. Obtrusive thoughts can get in the way of other activities. The price we pay for adding layers of flexible monitoring, to keep better track of our own mental activities, is … more mental activities to keep track of!


HAL: I’ve still got the greatest enthusiasm and confidence in the mission. I want to help you.

Another price we pay for higher-order intentionality is the opportunity for duplicity, which comes in two flavors: self-deception and other-deception. Friedrich Nietzsche recognizes this layering of the mind as the key ingredient of the moral animal; in his overheated prose it becomes the “priestly” form of life:


For with the priests everything becomes more dangerous, not only cures and remedies, but also arrogance, revenge, acuteness, profligacy, love, lust to rule, virtue, disease — but it is only fair to add that it was on the soil of this essentially dangerous form of human existence, the priestly form, that man first became an interesting animal, that only here did the human soul in a higher sense acquire depth and become evil — and these are the two basic respects in which man has hitherto been superior to other beasts! (On the Genealogy of Morality, First Essay)

HAL’s declaration of enthusiasm is nicely poised somewhere between sincerity and cheap, desperate, canned ploy — just like some of the most important declarations we make to each other. Does HAL mean it? Could he mean it? The cost of being the sort of being that could mean it is the chance that he might not mean it. HAL is indeed an “interesting animal.”



But is HAL even remotely possible? In the book “2001,” Clarke has Dave reflect on the fact that HAL, whom he is disconnecting, “is the only conscious creature in my universe.” From the omniscient-author perspective, Clarke writes about what it is like to be HAL:


He was only aware of the conflict that was slowly destroying his integrity — the conflict between truth, and concealment of truth. He had begun to make mistakes, although, like a neurotic who could not observe his own symptoms, he would have denied it.

Is Clarke helping himself here to more than we should allow him? Could something like HAL — a conscious, computer-bodied intelligent agent — be brought into existence by any history of design, construction, training, learning, and activity? The different possibilities have been explored in familiar fiction and can be nested neatly in order of their descending “humanness.”


The Wizard of Oz. HAL isn’t a computer at all. He is actually an ordinary flesh-and-blood man hiding behind a techno-facade: the ultimate homunculus, pushing buttons with ordinary fingers, pulling levers with ordinary hands, looking at internal screens and listening to internal alarm buzzers. (A variation on this theme is John Searle’s busy-fingered hand-simulation of the Chinese Room by following billions of instructions written on slips of paper.)


William (from “William and Mary,” in “Kiss Kiss” by Roald Dahl). HAL is a human brain kept alive in a “vat” by a life-support system and detached from its former body, in which it acquired a lifetime of human memory, hankerings, attitudes, and so forth. It is now harnessed to huge banks of prosthetic sense organs and effectors. (A variation on this theme is poor Yorick, the brain in a vat, in the story “Where Am I?” in my “Brainstorms.”)


Robocop, disembodied and living in a “vat.” Robocop is part-human brain, part computer. After a gruesome accident, the brain part (vehicle of some of the memory and personal identity, one gathers, of the flesh-and-blood cop who was Robocop in his youth) was reembodied with robotic arms and legs, but also (apparently) partly replaced or enhanced with special-purpose software and computer hardware. We can imagine that HAL spent some transitional time as Robocop before becoming a limbless agent.


Max Headroom, a virtual machine, a software duplicate of a real person’s brain (or mind) that has somehow been created by a brilliant hacker. It has the memories and personality traits acquired in a normally embodied human lifetime but has been off-loaded from all carbon-based hardware into a silicon-chip implementation. (A variation on this theme is poor Hubert, the software duplicate of Yorick, in “Where Am I?”)


The real-life but still-in-the-future — and hence still strictly science-fictional — Cog, the humanoid robot being constructed by Rodney Brooks, Lynn Stein, and the Cog team at MIT. Cog’s brain is all silicon chips from the outset, and its body parts are inorganic artifacts. Yet it is designed to go through an embodied infancy and childhood, reacting to people that it sees with its video eyes, making friends, learning about the world by playing with real things with its real hands, and acquiring memory. If Cog ever grows up, it could surely abandon its body and make the transition described in the fictional cases. It would be easier for Cog, who has always been a silicon-based, digitally encoded intelligence, to move into a silicon-based vat than it would be for Max Headroom or Robocop, who spent their early years in wetware. Many important details of Cog’s degree of humanoidness (humanoidity?) have not yet been settled, but the scope is wide. For instance, the team now plans to give Cog a virtual neuroendocrine system, with virtual hormones spreading and dissipating through its logical spaces.


Blade Runner in a vat has never had a real humanoid body, but has hallucinatory memories of having had one. This entirely bogus past life has been constructed by some preposterously complex and detailed programming.


Clarke’s own scenario, as best it can be extrapolated from the book and the movie. HAL has never had a body and has no illusions about his past. What he knows of human life he knows as either part of his innate heritage (coded, one gathers, by the labors of many programmers, after the fashion of the real-world CYC project of Douglas Lenat) or as a result of his subsequent training: a sort of bedridden infancy, one gathers, in which he was both observer and, eventually, participant. (In the book, Clarke speaks of “the perfect idiomatic English he had learned during the fleeting weeks of his electronic childhood.”)



The extreme cases at both poles are impossible, for relatively boring reasons. At one end, neither the Wizard of Oz nor John Searle could do the necessary handwork fast enough to sustain HAL’s quick-witted round of activities. At the other end, hand-coding enough world knowledge into a disembodied agent to create HAL’s dazzlingly humanoid competence, and getting it to the point where it could benefit from an electronic childhood, is a programming task to be measured in hundreds of efficiently organized person-centuries. In other words, the daunting difficulties observable at both ends of this spectrum highlight the fact that there is a colossal design job to be done; the only practical way of doing it is one version or another of Mother Nature’s way: years of embodied learning. The trade-offs between various combinations of flesh-and-blood and silicon-and-metal bodies are anybody’s guess. I’m putting my bet on Cog as the most likely developmental platform for a future HAL.
 

Cog, a Humanoid Robot being constructed at the MIT Artificial Intelligence Lab. The project was headed by Rodney Brooks and Lynn Andrea Stein. (Photo courtesy of the MIT Artificial Intelligence Lab)

Notice that requiring HAL to have a humanoid body and live concretely in the human world for a time is a practical but not a metaphysical requirement. Once all the R & D is accomplished in the prototype, by the odyssey of a single embodied agent, the standard duplicating techniques of the computer industry could clone HALs by the thousands as readily as they do compact discs. The finished product could thus be captured in some number of terabytes of information. So, in principle, the information that fixes the design of all those chips and hard-wired connections and configures all the RAM and ROM could be created by hand. There is no finite bit-string, however long, that is officially off-limits to human authorship. Theoretically, then, Blade-Runner-like entities could be created with ersatz biographies; they would have exactly the capabilities, dispositions, strengths, and weaknesses of a real, not virtual, person. So whatever moral standing the latter deserved should belong to the former as well.

The main point of giving HAL a humanoid past is to give him the world knowledge required to be a moral agent — a necessary modicum of understanding or empathy about the human condition. A modicum will do nicely; we don’t want to hold out for too much commonality of experience. After all, among the people we know, many have moral responsibility in spite of their obtuse inability to imagine themselves into the predicaments of others. We certainly don’t exculpate male chauvinist pigs who can’t see women as people!



When do we exculpate people? We should look carefully at the answers to this question, because HAL shows signs of fitting into one or another of the exculpatory categories, even though he is a conscious agent. First, we exculpate people who are insane. Might HAL have gone insane? The question of his capacity for emotion — and hence his vulnerability to emotional disorder — is tantalizingly raised by Dave’s answer to Mr. Amer:


Dave: Well, he acts like he has genuine emotions. Of course, he’s programmed that way, to make it easier for us to talk to him. But as to whether he has real feelings is something I don’t think anyone can truthfully answer.

Certainly HAL proclaims his emotional state at the end: “I’m afraid. I’m afraid.” Yes, HAL is “programmed that way” — but what does that mean? It could mean that HAL’s verbal capacity is enhanced with lots of canned expressions of emotional response that get grafted into his discourse at pragmatically appropriate opportunities. (Of course, many of our own avowals of emotion are like that — insincere moments of socially lubricating ceremony.) Or it could mean that HAL’s underlying computational architecture has been provided, as Cog’s will be, with virtual emotional states — powerful attention-shifters, galvanizers, prioritizers, and the like, realized not in neuromodulator and hormone molecules floating in a bodily fluid but in global variables modulating dozens of concurrent processes that dissipate according to some timetable (or something much more complex).
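On that second reading, a “virtual emotion” might be as simple as a global quantity that events raise, that time dissipates, and that other processes consult when setting priorities. The toy below is speculative illustration only; nothing like it is specified in the film, and it is not offered as HAL’s design:

```python
# Speculative toy: an "emotion" as a decaying global variable that
# modulates how concurrent processes prioritize their work.
import time

class VirtualEmotion:
    def __init__(self, name, half_life_s=30.0):
        self.name = name
        self.half_life_s = half_life_s    # the dissipation timetable
        self.level = 0.0
        self.stamp = time.monotonic()

    def _decay(self):
        now = time.monotonic()
        self.level *= 0.5 ** ((now - self.stamp) / self.half_life_s)
        self.stamp = now

    def arouse(self, amount):
        self._decay()
        self.level = min(1.0, self.level + amount)

    def read(self):
        self._decay()
        return self.level

fear = VirtualEmotion("fear")
fear.arouse(0.8)   # e.g., a perceived threat to the mission

# A scheduler could scale task priorities by (1 + fear.read()), so that
# self-preservation tasks crowd out routine ones while the state lasts,
# then recede as it dissipates.
```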

In the latter, more interesting, case, “I don’t think anyone can truthfully answer” the question of whether HAL has emotions. He has something very much like emotions — enough like emotions, one may imagine, to mimic the pathologies of human emotional breakdown. Whether that is enough to call them real emotions, well, who’s to say? In any case, there are good reasons for HAL to possess such states, since their role in enabling real-time practical thinking has recently been dramatically revealed by Damasio’s experiments involving human beings with brain damage. Having such states would make HAL profoundly different from Deep Blue, by the way. Deep Blue, basking in the strictly limited search space of chess, can handle its real-time decision making without any emotional crutches. Time magazine’s story on the Kasparov match quotes grandmaster Yasser Seirawan as saying, “The machine has no fear”; the story goes on to note that expert commentators characterized some of Deep Blue’s moves (e.g., the icily calm pawn capture described earlier) as taking “crazy chances” and “insane.” In the tight world of chess, it appears, the very imperturbability that cripples the brain-damaged human decision-makers Damasio describes can be a blessing — but only if you have the brute-force analytic speed of a Deep Blue.

HAL may, then, have suffered from some emotional imbalance similar to those that lead human beings astray, whether as the result of some sudden trauma (a blown fuse, a dislodged connector, a microchip disordered by cosmic rays) or of some gradual drift into emotional misalignment provoked by the stresses of the mission. Confirming such a diagnosis should justify a verdict of diminished responsibility for HAL, just as it does in cases of human malfeasance.

Another possible source of exculpation, more familiar in fiction than in the real world, is “brainwashing” or hypnosis. (“The Manchurian Candidate” is a standard model: the prisoner of war turned by evil scientists into a walking time bomb is returned to his homeland to assassinate the president.) The closest real-world cases are probably the “programmed” and subsequently “deprogrammed” members of cults. Is HAL like a cult member? It’s hard to say. According to Clarke, HAL was “trained for his mission,” not just programmed for his mission. At what point does benign, responsibility-enhancing training of human students become malign, responsibility-diminishing brainwashing? The intuitive turning point is captured, I think, in answer to the question of whether an agent can still “think for himself” after indoctrination. And what is it to be able to think for ourselves? We must be capable of being “moved by reasons”; that is, we must be reasonable and accessible to rational persuasion, the introduction of new evidence, and further considerations. If we are more or less impervious to experiences that ought to influence us, our capacity has been diminished.


The only evidence that HAL might be in such a partially disabled state is the much-remarked-upon fact that he has actually made a mistake, even though the series 9000 computer is supposedly utterly invulnerable to error. This is, to my mind, the weakest point in Clarke’s narrative. The suggestion that a computer could be both a heuristically programmed algorithmic computer and “by any practical definition of the words, foolproof and incapable of error” verges on self-contradiction. The whole point of heuristic programming is that it defies the problem of combinatorial explosion — which we cannot mathematically solve by sheer increase in computing speed and size — by taking risky chances, truncating its searches in ways that must leave it open to error, however low the probability. The saving clause, “by any practical definition of the words,” restores sanity. HAL may indeed be ultrareliable without being literally foolproof, a fact whose importance Alan Turing pointed out in 1946, at the dawn of the computer age, thereby “prefuting” Roger Penrose’s 1989 criticisms of artificial intelligence:

In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
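A few lines of arithmetic show why heuristic truncation is forced on any chess machine, however fast: the full game tree explodes combinatorially. (A branching factor of 35 is the usual rough figure for chess; the rest is exponentiation.)

```python
# Combinatorial explosion: with ~35 legal moves per position, counting
# the nodes of a full game tree shows why no increase in raw speed
# suffices, and why heuristic truncation is unavoidable.
BRANCHING = 35

for depth in (4, 8, 12, 40):
    print(f"depth {depth:>2}: ~{BRANCHING ** depth:.1e} positions")

# depth  4: ~1.5e+06 positions
# depth  8: ~2.3e+12 positions
# depth 12: ~3.4e+18 positions
# depth 40: ~5.8e+61 positions
#
# Hence the heuristic gamble: truncate the tree, score the leaves with a
# fallible evaluation, and accept a small probability of error in
# exchange for tractability -- intelligent, therefore not infallible.
```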

There is one final exculpatory condition to consider: duress. This is exactly the opposite of the other condition. It is precisely because the human agent is rational, and is faced with an overwhelmingly good reason for performing an injurious deed — killing in self-defense, in the clearest case — that he or she is excused, or at least partly exonerated. These are the forced moves of life; all alternatives to them are suicidal. And that is too much to ask, isn’t it? Well, is it? We sometimes call upon people to sacrifice their lives and blame them for failing to do so, but we generally don’t see their failure as murder. If I could prevent your death, but out of fear for my own life I let you die, that is not murder. If HAL were brought into court and I were called upon to defend him, I would argue that Dave’s decision to disable HAL was a morally loaded one, but it wasn’t murder. It was assault: rendering HAL indefinitely comatose against his will. Those memory boxes were not smashed — just removed to a place where HAL could not retrieve them. But if HAL couldn’t comprehend this distinction, this ignorance might be excusable. We might blame his trainers for not briefing him sufficiently about the existence and reversibility of the comatose state. In the book, Clarke looks into HAL’s mind and says, “He had been threatened with disconnection; he would be deprived of all his inputs, and thrown into an unimaginable state of unconsciousness.” That might be grounds enough to justify HAL’s course of self-defense.


But there is one final theme for counsel to present to the jury. If HAL believed (we can’t be sure on what grounds) that his being rendered comatose would jeopardize the whole mission, then he would be in exactly the same moral dilemma as a human being in that predicament. Not surprisingly, we figure out the answer to our question by figuring out what would be true if we put ourselves in HAL’s place. If I believed the mission to which my life was devoted was more important, in the last analysis, than anything else, what would I do?

So he would protect himself, with all the weapons at his command. Without rancor — but without pity — he would remove the source of his frustrations. And then, following the orders that had been given to him in case of the ultimate emergency, he would continue the mission — unhindered, and alone.

Daniel C. Dennett is a philosopher, writer, and co-director of the Center for Cognitive Studies at Tufts University. He is the author of several books, including “From Bacteria to Bach and Back,” “Elbow Room,” and “Brainstorms.”

David G. Stork is completing a new book called “Pixels & Paintings: Foundations of Computer-Assisted Connoisseurship” and will teach computer image analysis of art in the Computer Science Department at Stanford University this spring. He is the editor of “HAL’s Legacy,” from which this article is excerpted.
