Wednesday, January 15, 2020

2001: A SPACE ODYSSEY
Did HAL Commit Murder?
The HAL 9000 computer and the ethics of murder by and of machines.

 
"He had begun to make mistakes, al­though, like a neurotic who could not observe his own symptoms, he would have denied it." Image: HAL 9000, via Wikimedia Commons
By: Daniel C. Dennett / Introduction by David G. Stork

Last month at the San Francisco Museum of Modern Art I saw “2001: A Space Odyssey” on the big screen for the 47th time. The fact that this masterpiece remains on nearly every relevant list of “top ten films” and is shown and discussed over a half-century after its 1968 release is a testament to the cultural achievement of its director Stanley Kubrick, writer Arthur C. Clarke, and their team of expert filmmakers.

As with each viewing, I discovered or appreciated new details. But three iconic scenes — HAL’s silent murder of astronaut Frank Poole in the vacuum of outer space, HAL’s silent medical murder of the three hibernating crewmen, and the poignant sorrowful “death” of HAL — prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years experimental autonomous cars have led to the deaths of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many leading AI researchers have sounded the alarm: Unchecked, they say, AI may progress beyond our control and pose significant dangers to society.

And what about the converse: humans “killing” future computers by disconnection? When astronauts Frank and Dave retreat to a pod to discuss HAL’s apparent malfunctions and whether they should disconnect him, Dave imagines HAL’s views and says: “Well, I don’t know what he’d think about it.” Will it be ethical — not merely disturbing — to disconnect (“kill”) a conversational elder-care robot from the bedside of a lonely senior citizen?

Indeed, future developments in AI pose profound challenges, first and foremost to our economy, by automating away millions of jobs in manufacturing, food service, retail sales, legal services, and even medical diagnosis. The naive bromides of an invisible economic hand shepherding “retrained workers” into alternative and new classes of jobs are dangerously overoptimistic. Then, too, there are the “sci-fi” dangers of AI run amok. The most pressing dangers of AI will be due to its deliberate misuse for purely personal or “human” ends.
David G. Stork is the editor of “HAL’s Legacy,” from which Daniel Dennett’s essay is culled.

At the philosophical center of these developments are the notions of responsibility and culpability, or as philosopher Daniel Dennett asks, “Did HAL commit murder?” There are few philosophers as knowledgeable and insightful about these fascinating problems, and who write as clearly and directly. His chapter, published nearly 25 years ago in “HAL’s Legacy,” a collection of writings that explore HAL’s tremendous influence on the research and design of intelligent machines, remains an indispensable introduction to the thorny problems of “murdering” by computers and the “murder” of computers. He focuses on the central concept of mens rea, or “guilty mind,” asking how we would ever know when a computer would be so self-aware as to satisfy the legal criterion to make him (it) guilty of murder. My view is that for quite some time the mens rea most relevant will be of some power-hungry human villain using AI, rather than of some conscious, autonomous, and evil AI system itself … but who knows?

Everyone who thinks about ethics, is concerned about the future dangers of AI, and wants to support efforts to keep us all safe should read on…

… and of course see “2001” again.

—David G. Stork, 2020

The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated December 9, 1981, from the Philadelphia Inquirer — not the National Enquirer — with the headline “Robot killed repairman, Japan reports.”

The story was an anticlimax. At the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, which crushed him to death. The repairman had failed to follow instructions for shutting down the arm before he entered the workspace. Why, indeed, was this industrial accident in Japan reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that — in the public imagination at least — this was no ordinary machine. This was a robot, a machine that might have a mind, might have evil intentions, might be capable, not just of homicide, but of murder. Anglo-American jurisprudence speaks of mens rea — literally, the guilty mind:

To have performed a legally prohibited action, such as killing another human being, one must have done so with a culpable state of mind, or mens rea. Such culpable mental states are of three kinds: they are either motivational states of purpose, cognitive states of belief, or the nonmental state of negligence. (Cambridge Dictionary of Philosophy, 1995)

The legal concept has no requirement that the agent be capable of feeling guilt or remorse or any other emotion; so-called cold-blooded murderers are not in the slightest degree exculpated by their flat affective state. Star Trek’s Spock would fully satisfy the mens rea requirement in spite of his fabled lack of emotions. Drab, colorless — but oh so effective — “motivational states of purpose” and “cognitive states of belief” are enough to get the fictional Spock through the day quite handily. And they are well-established features of many existing computer programs.

When IBM’s computer Deep Blue beat world chess champion Garry Kasparov in the first game of their 1996 championship match, it did so by discovering and executing, with exquisite timing, a withering attack, the purposes of which were all too evident in retrospect to Kasparov and his handlers. It was Deep Blue’s sensitivity to those purposes and a cognitive capacity to recognize and exploit a subtle flaw in Kasparov’s game that explain Deep Blue’s success. Murray Campbell, Feng-hsiung Hsu, and the other designers of Deep Blue didn’t beat Kasparov; Deep Blue did. Neither Campbell nor Hsu discovered the winning sequence of moves; Deep Blue did. At one point, while Kasparov was mounting a ferocious attack on Deep Blue’s king, nobody but Deep Blue figured out that it had the time and security it needed to knock off a pesky pawn of Kasparov’s that was out of the action but almost invisibly vulnerable. Campbell, like the human grandmasters watching the game, would never have dared consider such a calm mopping-up operation under pressure.


Since cheating is literally unthinkable to a computer like Deep Blue, and since there are really no other culpable actions available to an agent restricted to playing chess, nothing it could do would be a misdeed deserving of blame, let alone a crime of which we might convict it.

Deep Blue, like many other computers equipped with artificial intelligence (AI) programs, is what I call an intentional system: its behavior is predictable and explainable if we attribute to it beliefs and desires — “cognitive states” and “motivational states” — and the rationality required to figure out what it ought to do in the light of those beliefs and desires. Are these skeletal versions of human beliefs and desires sufficient to meet the mens rea requirement of legal culpability? Not quite, but if we restrict our gaze to the limited world of the chessboard, it is hard to see what is missing. Since cheating is literally unthinkable to a computer like Deep Blue, and since there are really no other culpable actions available to an agent restricted to playing chess, nothing it could do would be a misdeed deserving of blame, let alone a crime of which we might convict it. But we also assign responsibility to agents in order to praise or honor the appropriate agent.

Who or what, then, deserves the credit for beating Kasparov? Deep Blue is clearly the best candidate. Yes, we may join in congratulating Campbell, Hsu, and the IBM team on the success of their handiwork; but in the same spirit we might congratulate Kasparov’s teachers, handlers, and even his parents. And, no matter how assiduously they may have trained him, drumming into his head the importance of one strategic principle or another, they didn’t beat Deep Blue in the series: Kasparov did.

Deep Blue is the best candidate for the role of responsible opponent of Kasparov, but this is not good enough, surely, for full moral responsibility. If we expanded Deep Blue’s horizons somewhat, it could move out into the arenas of injury and benefit that we human beings operate in. It’s not hard to imagine a touching scenario in which a grandmaster deliberately (but oh so subtly) throws a game to an opponent, in order to save a life, avoid humiliating a loved one, keep a promise, or … (make up your own O. Henry story here). Failure to rise to such an occasion might well be grounds for blaming a human chess player. Winning or throwing a chess match might even amount to commission of a heinous crime (make up your own Agatha Christie story here). Could Deep Blue’s horizons be so widened?

Deep Blue is an intentional system, with beliefs and desires about its activities and predicaments on the chessboard; but in order to expand its horizons to the wider world of which chess is a relatively trivial part, it would have to be given vastly richer sources of “perceptual” input — and the means of coping with this barrage in real time. Time pressure is, of course, already a familiar feature of Deep Blue’s world. As it hustles through the multidimensional search tree of chess, it has to keep one eye on the clock. Nonetheless, the problems of optimizing its use of time would increase by several orders of magnitude if it had to juggle all these new concurrent projects (of simple perception and self-maintenance in the world, to say nothing of more devious schemes and opportunities). For this hugely expanded task of resource management, it would need extra layers of control above and below its chess-playing software. Below, just to keep its perceptuo-locomotor projects in basic coordination, it would need to have a set of rigid traffic-control policies embedded in its underlying operating system. Above, it would have to be able to pay more attention to features of its own expanded resources, being always on the lookout for inefficient habits of thought, one of Douglas Hofstadter’s “strange loops,” obsessive ruts, oversights, and dead ends. In other words, it would have to become a higher-order intentional system, capable of framing beliefs about its own beliefs, desires about its desires, beliefs about its fears about its thoughts about its hopes, and so on.
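One way to picture such higher-order states is as propositional attitudes whose contents can themselves be attitudes, nested to any depth. Here is a minimal sketch of that data structure (an invented illustration with hypothetical names, not anything from the essay):

```python
from dataclasses import dataclass
from typing import Union

# An invented illustration: a propositional attitude whose content may
# itself be another attitude, nested to any depth.

@dataclass(frozen=True)
class Attitude:
    kind: str                          # "believes", "desires", "fears", ...
    agent: str
    content: Union[str, "Attitude"]    # a plain proposition, or another attitude

def order(a: Attitude) -> int:
    # A first-order attitude is about a plain proposition; each layer of
    # nesting raises the order by one.
    return 1 if isinstance(a.content, str) else 1 + order(a.content)

hope = Attitude("hopes", "HAL", "the mission succeeds")
fear = Attitude("fears", "HAL", hope)          # a fear about a hope
belief = Attitude("believes", "HAL", fear)     # a belief about that fear

print(order(belief))   # 3: a third-order intentional state
```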


Higher-order intentionality is a necessary precondition for moral responsibility, and Deep Blue exhibits little sign of possessing such a capability.

Higher-order intentionality is a necessary precondition for moral responsibility, and Deep Blue exhibits little sign of possessing such a capability. There is, of course, some self-monitoring implicated in any well-controlled search: Deep Blue doesn’t make the mistake of reexploring branches it has already explored, for instance; but this is an innate policy designed into the underlying computational architecture, not something under flexible control. Deep Blue can’t converse with you — or with itself — about the themes discernible in its own play; it’s not equipped to notice — and analyze, criticize, and manipulate — the fundamental parameters that determine its policies of heuristic search or evaluation. Adding the layers of software that would permit Deep Blue to become self-monitoring and self-critical, and hence teachable, in all these ways would dwarf the already huge Deep Blue programming project — and turn Deep Blue into a radically different sort of agent.

HAL purports to be just such a higher-order intentional system — and he even plays a game of chess with Frank. HAL is, in essence, an enhancement of Deep Blue equipped with eyes and ears and a large array of sensors and effectors distributed around Discovery 1. HAL is not at all garrulous or self-absorbed; but in a few speeches he does express an interesting variety of higher-order intentional states, from the most simple to the most devious.
In one iconic scene from “2001,” Dave asks HAL to open a pod bay door on the spacecraft, to which HAL responds, “I’m sorry, Dave, I’m afraid I can’t do that.”


HAL: Yes, it’s puzzling. I don’t think I’ve ever seen anything quite like this before.

HAL doesn’t just respond to novelty with a novel reaction; he notices that he is encountering novelty, a feat that requires his memory to have an organization far beyond that required for simple conditioning to novel stimuli.


HAL: I can’t rid myself of the suspicion that there are some extremely odd things about this mission.

HAL: I never gave these stories much credence, but particularly in view of some of the other things that have happened, I find them difficult to put out of my mind.

HAL has problems of resource management not unlike our own. Obtrusive thoughts can get in the way of other activities. The price we pay for adding layers of flexible monitoring, to keep better track of our own mental activities, is … more mental activities to keep track of!


HAL: I’ve still got the greatest enthusiasm and confidence in the mission. I want to help you.

Another price we pay for higher-order intentionality is the opportunity for duplicity, which comes in two flavors: self-deception and other-deception. Friedrich Nietzsche recognizes this layering of the mind as the key ingredient of the moral animal; in his overheated prose it becomes the “priestly” form of life:


For with the priests everything becomes more dangerous, not only cures and remedies, but also arrogance, revenge, acuteness, profligacy, love, lust to rule, virtue, disease — but it is only fair to add that it was on the soil of this essentially dangerous form of human existence, the priestly form, that man first became an interesting animal, that only here did the human soul in a higher sense acquire depth and become evil — and these are the two basic respects in which man has hitherto been superior to other beasts! (On the Genealogy of Morality, First Essay)

HAL’s declaration of enthusiasm is nicely poised somewhere between sincerity and cheap, desperate, canned ploy — just like some of the most important declarations we make to each other. Does HAL mean it? Could he mean it? The cost of being the sort of being that could mean it is the chance that he might not mean it. HAL is indeed an “interesting animal.”


HAL’s declaration of enthusiasm is nicely poised somewhere between sincerity and cheap, desperate, canned ploy — just like some of the most important declarations we make to each other.

But is HAL even remotely possible? In the book “2001,” Clarke has Dave reflect on the fact that HAL, whom he is disconnecting, “is the only conscious creature in my universe.” From the omniscient-author perspective, Clarke writes about what it is like to be HAL:


He was only aware of the conflict that was slowly destroying his integrity — the conflict between truth, and concealment of truth. He had begun to make mistakes, although, like a neurotic who could not observe his own symptoms, he would have denied it.

Is Clarke helping himself here to more than we should allow him? Could something like HAL — a conscious, computer-bodied intelligent agent — be brought into existence by any history of design, construction, training, learning, and activity? The different possibilities have been explored in familiar fiction and can be nested neatly in order of their descending “humanness.”


The Wizard of Oz. HAL isn’t a computer at all. He is actually an ordinary flesh-and-blood man hiding behind a techno-facade: the ultimate homunculus, pushing buttons with ordinary fingers, pulling levers with ordinary hands, looking at internal screens and listening to internal alarm buzzers. (A variation on this theme is John Searle’s busy-fingered hand-simulation of the Chinese Room by following billions of instructions written on slips of paper.)


William (from “William and Mary,” in “Kiss Kiss” by Roald Dahl). HAL is a human brain kept alive in a “vat” by a life-support system and detached from its former body, in which it acquired a lifetime of human memory, hankerings, attitudes, and so forth. It is now harnessed to huge banks of prosthetic sense organs and effectors. (A variation on this theme is poor Yorick, the brain in a vat, in the story “Where Am I?” in my “Brainstorms.”)

Robocop, disembodied and living in a “vat.” Robocop is part human brain, part computer. After a gruesome accident, the brain part (vehicle of some of the memory and personal identity, one gathers, of the flesh-and-blood cop who was Robocop’s youth) was reembodied with robotic arms and legs, but also (apparently) partly replaced or enhanced with special-purpose software and computer hardware. We can imagine that HAL spent some transitional time as Robocop before becoming a limbless agent.


Max Headroom, a virtual machine, a software duplicate of a real person’s brain (or mind) that has somehow been created by a brilliant hacker. It has the memories and personality traits acquired in a normally embodied human lifetime but has been off-loaded from all carbon-based hardware into a silicon-chip implementation. (A variation on this theme is poor Hubert, the software duplicate of Yorick, in “Where Am I?”)


The real-life but still-in-the-future — and hence still strictly science-fictional — Cog, the humanoid robot being constructed by Rodney Brooks, Lynn Stein, and the Cog team at MIT. Cog’s brain is all silicon chips from the outset, and its body parts are inorganic artifacts. Yet it is designed to go through an embodied infancy and childhood, reacting to people that it sees with its video eyes, making friends, learning about the world by playing with real things with its real hands, and acquiring memory. If Cog ever grows up, it could surely abandon its body and make the transition described in the fictional cases. It would be easier for Cog, who has always been a silicon-based, digitally encoded intelligence, to move into a silicon-based vat than it would be for Max Headroom or Robocop, who spent their early years in wetware. Many important details of Cog’s degree of humanoidness (humanoidity?) have not yet been settled, but the scope is wide. For instance, the team now plans to give Cog a virtual neuroendocrine system, with virtual hormones spreading and dissipating through its logical spaces.


Blade Runner in a vat has never had a real humanoid body, but has hallucinatory memories of having had one. This entirely bogus past life has been constructed by some preposterously complex and detailed programming.


Clarke’s own scenario, as best it can be extrapolated from the book and the movie. HAL has never had a body and has no illusions about his past. What he knows of human life he knows as either part of his innate heritage (coded, one gathers, by the labors of many programmers, after the fashion of the real-world CYC project of Douglas Lenat) or a result of his subsequent training — a sort of bedridden infancy, one gathers, in which he was both observer and, eventually, participant. (In the book, Clarke speaks of “the perfect idiomatic English he had learned during the fleeting weeks of his electronic childhood.”)


Hand-coding enough world knowledge into a disembodied agent to create HAL’s dazzlingly humanoid competence and getting it to the point where it could benefit from an electronic childhood is a programming task to be measured in hundreds of efficiently organized person-centuries.

The extreme cases at both poles are impossible, for relatively boring reasons. At one end, neither the Wizard of Oz nor John Searle could do the necessary handwork fast enough to sustain HAL’s quick-witted round of activities. At the other end, hand-coding enough world knowledge into a disembodied agent to create HAL’s dazzlingly humanoid competence and getting it to the point where it could benefit from an electronic childhood is a programming task to be measured in hundreds of efficiently organized person-centuries. In other words, the daunting difficulties observable at both ends of this spectrum highlight the fact that there is a colossal design job to be done; the only practical way of doing it is one version or another of Mother Nature’s way: years of embodied learning. The trade-offs between various combinations of flesh-and-blood and silicon-and-metal bodies are anybody’s guess. I’m putting my bet on Cog as the most likely developmental platform for a future HAL.
 

Cog, a humanoid robot being constructed at the MIT Artificial Intelligence Lab. The project was headed by Rodney Brooks and Lynn Andrea Stein. (Photo courtesy of the MIT Artificial Intelligence Lab)

Notice that requiring HAL to have a humanoid body and live concretely in the human world for a time is a practical but not a metaphysical requirement. Once all the R & D is accomplished in the prototype, by the odyssey of a single embodied agent, the standard duplicating techniques of the computer industry could clone HALs by the thousands as readily as they do compact discs. The finished product could thus be captured in some number of terabytes of information. So, in principle, the information that fixes the design of all those chips and hard-wired connections and configures all the RAM and ROM could be created by hand. There is no finite bit-string, however long, that is officially off-limits to human authorship. Theoretically, then, Blade-Runner-like entities could be created with ersatz biographies; they would have exactly the capabilities, dispositions, strengths, and weaknesses of a real, not virtual, person. So whatever moral standing the latter deserved should belong to the former as well.

The main point of giving HAL a humanoid past is to give him the world knowledge required to be a moral agent — a necessary modicum of understanding or empathy about the human condition. A modicum will do nicely; we don’t want to hold out for too much commonality of experience. After all, among the people we know, many have moral responsibility in spite of their obtuse inability to imagine themselves into the predicaments of others. We certainly don’t exculpate male chauvinist pigs who can’t see women as people!


The main point of giving HAL a humanoid past is to give him the world knowledge required to be a moral agent — a necessary modicum of understanding or empathy about the human condition.

When do we exculpate people? We should look carefully at the answers to this question, because HAL shows signs of fitting into one or another of the exculpatory categories, even though he is a conscious agent. First, we exculpate people who are insane. Might HAL have gone insane? The question of his capacity for emotion — and hence his vulnerability to emotional disorder — is tantalizingly raised by Dave’s answer to Mr. Amer:


Dave: Well, he acts like he has genuine emotions. Of course, he’s programmed that way, to make it easier for us to talk to him. But as to whether he has real feelings is something I don’t think anyone can truthfully answer.

Certainly HAL proclaims his emotional state at the end: “I’m afraid. I’m afraid.” Yes, HAL is “programmed that way” — but what does that mean? It could mean that HAL’s verbal capacity is enhanced with lots of canned expressions of emotional response that get grafted into his discourse at pragmatically appropriate opportunities. (Of course, many of our own avowals of emotion are like that — insincere moments of socially lubricating ceremony.) Or it could mean that HAL’s underlying computational architecture has been provided, as Cog’s will be, with virtual emotional states — powerful attention-shifters, galvanizers, prioritizers, and the like — realized not in neuromodulator and hormone molecules floating in a bodily fluid but in global variables modulating dozens of concurrent processes that dissipate according to some timetable (or something much more complex).
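The second reading can be made concrete. The following sketch (a deliberately crude invention, not Cog’s or HAL’s actual design) treats an emotion-like state as a global variable that events raise, that dissipates on a timetable, and that meanwhile reorders a queue of concurrent tasks:

```python
# A crude, invented sketch of a "virtual emotional state": a global modulator
# that events raise, that fades on a fixed timetable, and that shifts the
# priorities of concurrent tasks while it lasts.

class VirtualAffect:
    def __init__(self, name, decay_rate=0.2):
        self.name = name
        self.level = 0.0              # current intensity, 0.0 to 1.0
        self.decay_rate = decay_rate

    def stimulate(self, amount):
        # An alarming (or encouraging) event raises the level.
        self.level = min(1.0, self.level + amount)

    def dissipate(self, steps):
        # The state fades on a timetable rather than switching off.
        self.level *= (1.0 - self.decay_rate) ** steps

def prioritize(tasks, affect):
    # The affect acts as an attention-shifter: the higher its level,
    # the more self-protective tasks outrank long-range planning.
    def weight(task):
        return task["urgency"] + (affect.level if task["self_protective"] else 0.0)
    return sorted(tasks, key=weight, reverse=True)

fear = VirtualAffect("fear")
tasks = [
    {"name": "plan mission",       "urgency": 0.5, "self_protective": False},
    {"name": "check life support", "urgency": 0.4, "self_protective": True},
]

fear.stimulate(0.8)                    # a threatening perceptual event
print([t["name"] for t in prioritize(tasks, fear)])   # life support first
fear.dissipate(steps=10)               # later, the state has largely faded
print([t["name"] for t in prioritize(tasks, fear)])   # planning first again
```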

In the latter, more interesting, case, “I don’t think anyone can truthfully answer” the question of whether HAL has emotions. He has something very much like emotions — enough like emotions, one may imagine, to mimic the pathologies of human emotional breakdown. Whether that is enough to call them real emotions, well, who’s to say? In any case, there are good reasons for HAL to possess such states, since their role in enabling real-time practical thinking has recently been dramatically revealed by Antonio Damasio’s experiments involving human beings with brain damage. Having such states would make HAL profoundly different from Deep Blue, by the way. Deep Blue, basking in the strictly limited search space of chess, can handle its real-time decision making without any emotional crutches. Time magazine’s story on the Kasparov match quotes grandmaster Yasser Seirawan as saying, “The machine has no fear”; the story goes on to note that expert commentators characterized some of Deep Blue’s moves (e.g., the icily calm pawn capture described earlier) as taking “crazy chances” and “insane.” In the tight world of chess, it appears, the very imperturbability that cripples the brain-damaged human decision-makers Damasio describes can be a blessing — but only if you have the brute-force analytic speed of a Deep Blue.

HAL may, then, have suffered from some emotional imbalance similar to those that lead human beings astray. Whether it was the result of some sudden trauma — a blown fuse, a dislodged connector, a microchip disordered by cosmic rays — or of some gradual drift into emotional misalignment provoked by the stresses of the mission, confirming such a diagnosis should justify a verdict of diminished responsibility for HAL, just as it does in cases of human malfeasance.

Another possible source of exculpation, more familiar in fiction than in the real world, is “brainwashing” or hypnosis. (“The Manchurian Candidate” is a standard model: the prisoner of war turned by evil scientists into a walking time bomb is returned to his homeland to assassinate the president.) The closest real-world cases are probably the “programmed” and subsequently “deprogrammed” members of cults. Is HAL like a cult member? It’s hard to say. According to Clarke, HAL was “trained for his mission,” not just programmed for his mission. At what point does benign, responsibility-enhancing training of human students become malign, responsibility-diminishing brainwashing? The intuitive turning point is captured, I think, in answer to the question of whether an agent can still “think for himself” after indoctrination. And what is it to be able to think for ourselves? We must be capable of being “moved by reasons”; that is, we must be reasonable and accessible to rational persuasion, the introduction of new evidence, and further considerations. If we are more or less impervious to experiences that ought to influence us, our capacity has been diminished.

At what point does benign, responsibility-enhancing training of human students become malign, responsibility-diminishing brainwashing?

The only evidence that HAL might be in such a partially disabled state is the much-remarked-upon fact that he has actually made a mistake, even though the series 9000 computer is supposedly utterly invulnerable to error. This is, to my mind, the weakest point in Clarke’s narrative. The suggestion that a computer could be both a heuristically programmed algorithmic computer and “by any practical definition of the words, foolproof and incapable of error” verges on self-contradiction. The whole point of heuristic programming is that it defies the problem of combinatorial explosion — which we cannot mathematically solve by sheer increase in computing speed and size — by taking risky chances, truncating its searches in ways that must leave it open to error, however low the probability. The saving clause, “by any practical definition of the words,” restores sanity. HAL may indeed be ultrareliable without being literally foolproof, a fact whose importance Alan Turing pointed out in 1946, at the dawn of the computer age, thereby “prefuting” Roger Penrose’s 1989 criticisms of artificial intelligence:

In other words then, if a machine is expected to be infallible, it cannot also be intelligent. There are several theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility.
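The point is easy to exhibit in miniature. In the toy search below (an invented illustration, not Deep Blue’s code), a depth-limited minimax substitutes a cheap static evaluation for the unsearched future; the truncated search prefers a trap that exhaustive search exposes:

```python
# An invented toy, not Deep Blue's code: a depth-limited minimax in which a
# cheap static evaluation stands in for the unsearched future. Truncation is
# what defeats combinatorial explosion, and it is also what admits error.

def minimax(node, depth, maximizing, evaluate):
    if not isinstance(node, list):     # terminal position: exact value
        return node
    if depth == 0:                     # cutoff: a heuristic guess instead
        return evaluate(node)
    values = [minimax(c, depth - 1, not maximizing, evaluate) for c in node]
    return max(values) if maximizing else min(values)

def heuristic(node):
    # A crude static evaluator: average the immediately visible leaf values.
    leaves = [x for x in node if not isinstance(x, list)]
    return sum(leaves) / len(leaves) if leaves else 0.0

game = [[10, 8, -9],   # move A: looks rich on average, hides a refutation
        [1, 2]]        # move B: modest but safe

# Truncated search rates the trap highest; exhaustive search exposes it.
print([minimax(m, 0, False, heuristic) for m in game])   # [3.0, 1.5] -> pick A
print([minimax(m, 1, False, heuristic) for m in game])   # [-9, 1]    -> pick B
```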

There is one final exculpatory condition to consider: duress. This is exactly the opposite of the other condition. It is precisely because the human agent is rational, and is faced with an overwhelmingly good reason for performing an injurious deed — killing in self-defense, in the clearest case — that he or she is excused, or at least partly exonerated. These are the forced moves of life; all alternatives to them are suicidal. And that is too much to ask, isn’t it? Well, is it? We sometimes call upon people to sacrifice their lives and blame them for failing to do so, but we generally don’t see their failure as murder. If I could prevent your death, but out of fear for my own life I let you die, that is not murder. If HAL were brought into court and I were called upon to defend him, I would argue that Dave’s decision to disable HAL was a morally loaded one, but it wasn’t murder. It was assault: rendering HAL indefinitely comatose against his will. Those memory boxes were not smashed — just removed to a place where HAL could not retrieve them. But if HAL couldn’t comprehend this distinction, this ignorance might be excusable. We might blame his trainers for not briefing him sufficiently about the existence and reversibility of the comatose state. In the book, Clarke looks into HAL’s mind and says, “He had been threatened with disconnection; he would be deprived of all his inputs, and thrown into an unimaginable state of unconsciousness.” That might be grounds enough to justify HAL’s course of self-defense.

If I could prevent your death, but out of fear for my own life I let you die, that is not murder.

But there is one final theme for counsel to present to the jury. If HAL believed (we can’t be sure on what grounds) that his being rendered comatose would jeopardize the whole mission, then he would be in exactly the same moral dilemma as a human being in that predicament. Not surprisingly, we figure out the answer to our question by figuring out what would be true if we put ourselves in HAL’s place. If I believed the mission to which my life was devoted was more important, in the last analysis, than anything else, what would I do?

So he would protect himself, with all the weapons at his command. Without rancor — but without pity — he would remove the source of his frustrations. And then, following the orders that had been given to him in case of the ultimate emergency, he would continue the mission — unhindered, and alone.

Daniel C. Dennett is a philosopher, writer, and co-director of the Center for Cognitive Studies at Tufts University. He is the author of several books, including “From Bacteria to Bach and Back,” “Elbow Room,” and “Brainstorms.”

David G. Stork is completing a new book called “Pixels & Paintings: Foundations of Computer-Assisted Connoisseurship” and will teach computer image analysis of art in the Computer Science Department at Stanford University this spring. He is the editor of “HAL’s Legacy,” from which this article is excerpted.




UFOs Over The Wanaque Reservoir: The Roswell of the Ramapos
Image: newspaper coverage of the Wanaque UFO.
January 11, 1966 started like any other mid-winter day in the small suburban town of Wanaque, NJ. The air was clear and cold, kids were enjoying the holiday vacation from school, and residents of the Passaic County borough went about their usual daily routines. Little did they know that before the day was over something would happen, something fantastic and unexplainable, that would change the lives of many of the townsfolk forever.
It all started in the early evening of that Tuesday. It was about 6:30pm, and the winter sun was already long gone over the western horizon, past the great Wanaque reservoir, and behind the darkened Ramapo mountain range. Wanaque Patrolman Joseph Cisco was in his cruiser when a call from the Pompton Lakes dispatcher came over his police radio. It was a report of a "glowing light, possibly a fire." Then, as if right out of a sci-fi movie, Cisco heard the words: "People in Oakland, Ringwood, Paterson, Totowa, and Butler claim there's a flying saucer over the Wanaque."
“I pulled into the sandpit, an open area to get my bearings,” Cisco recalls. “There was a light that looked bigger than any of the stars, about the size of a softball or volleyball. It was a pulsating, white, stationary light changing to red. It stayed in the air; there was no noise. I was trying to figure out what it was.”
Wanaque Mayor Harry T. Wolfe, Councilmen Warren Hagstrom and Arthur Barton, and the Mayor’s 14-year-old son Billy were on their way to oversee the burning of the borough’s Christmas trees, when they heard the reports that something “very white, very bright, and much bigger than a star” was hovering over the Wanaque Reservoir. They decided to pull into a sandpit near the Raymond Dam at the headworks to meet Officer Cisco and get a better look at the ‘thing.’ The Mayor’s son Billy spotted the object at once, flying low and gliding “oddly” over the vast frozen lake “like a huge star.” “But it didn’t flicker,” Billy told reporters the next day. “It was just a continuous light that changed from white to red to green and back to white.”
Image: the Wanaque Dam, solarized.
“The phenomenon was terribly strange.” Mayor Wolfe would later recall. He described the shape of the unidentified object as oval, and estimated it to be between two and nine feet in diameter.
The next thing that officer Cisco remembers is his patrol car’s radio “going bananas,” as calls from all over a 20-mile radius flooded into the police headquarters. Cisco radioed Officer George Dykman, who was on patrol nearby. Just as Dykman received Cisco’s message, two teenagers came running up to his patrol car frantically pointing at the sky and shouting “Look, look!”
At that moment Wanaque Civil Defense Director Bentley Spencer drove up with CD member Richard Vrooman. “The Police radios are all jammed up!” Spencer said excitedly. Dykman and Spencer gaped at the sky along with Michael Sloat, 16, and Peter Melegrae, 15. “What the heck is it?” Dykman wondered out loud. “Never seen anything like it in my life.”
Back at the sandpit Joseph Cisco’s radio crackled as another unbelievable message came across the airwaves: “Something’s burning a hole in the ice! Something with a bright light on it, going up and down!” Then another transmission fought its way through the din: “Oh boy! Something just landed in front of the dam!”
Spencer and reservoir employee Fred Steines raced to the top of the 1,500-foot-long Raymond Dam where they described seeing "a bolt of light shoot down, as if attracted to the water…like a beam emitted from a porthole."
Patrolman Cisco, Mayor Wolfe and Town Councilmen Hagstrom and Barton climbed to the top of the dam to get a better look.
“There was something up there that was awful bright.” Hagstrom recalls. “We don’t know what it was. We thought it was a helicopter, but we didn’t hear a motor. It looked like a helicopter with big landing lights on. We got goose bumps all over when we saw where the hole was.”
According to John Shuttle, another Councilman who witnessed the UFO, there was no doubt about it: "It was there," he said. "I saw it, a brilliant white object, two to three feet across, and its color – no, not color, shade – it kept changing."
Curious residents who had been listening to their police scanners began to congregate around the entrance to the reservoir hoping to catch a glimpse of the mysterious flying object. Traffic slowed to a crawl and then stopped altogether as motorists watched agape from their vehicles' windows. Reservoir Police Lt. George Destito was forced to close the main gate of the reservoir to keep out swarms of onlookers who converged from the north and south on Ringwood Avenue. "People were coming out of the woodwork," Cisco recalls. He and the other town officials stood on top of the dam in the freezing January night air for half an hour watching the strange light. Then, without warning, it sped off to the southeast. It hovered briefly over Lakeland Regional High School in the Midvale section of town, then reappeared over the Houdaille sandpit in Haskell, where volunteer firemen were burning Christmas trees. From there the UFO continued southeast in the direction of Pines Lake in Wayne.
Before the sun came up the next day, Joseph Cisco would see the bright light once more. At about 4am on the morning of January 12, he saw the object moving from north to south along the horizon over the town of Wyckoff. He and Wanaque Police Sgt. David Sisco would take turns looking at it through a pair of binoculars. The next day Cisco's wife told him that she too had witnessed what she described as a "silver, cigar-shaped object moving south from their home, about 1,000 feet from the reservoir."

Image: an old photograph of the Wanaque Dam.

January 12, 1966

One day after the initial sightings of the UFO, Patrolman Jack Wardlaw reported seeing a “bright white disk” floating in the vicinity of his home in the Stonetown section of Wanaque, just west of the reservoir. “It seemed like only a block away, above Lilly Mountain, maybe 1,000 feet up,” Wardlaw said. “Don’t ask me what it was. But I do know it wasn’t any helicopter, plane, or comet. It shot laterally right and left. It stopped. It moved up straight. And then it moved down and disappeared in the direction of Ringwood to the north.” Wardlaw described the object as “definitely disc-shaped and at certain angles, egg-shaped.”
Sgt. David Sisco said that he was on patrol at about 6:30 that evening when the UFO noiselessly hovered into view. "It glided, then streaked faster than a jet," he told reporters, "and when it rose, it went straight up." Reservoir guard and former Wanaque policeman Charles Theodora and Sisco went to the top of the dam to take a look at the bright light. "We looked across the water and saw a cylinder shaped object," Theodora remembers. "It was moving back and forth like a rocking chair motion. We were astonished." A few minutes later the object shot straight up into the night sky, until it was indistinguishable from the other stars. Theodora said that he didn't hear a sound while the light show was going on. "I didn't believe in UFO's, I thought they were a lot of bull. And then I saw it. It was a breathtaking sight; something I'll never forget." After the January 1966 sightings, radar was installed atop the reservoir dam.

October 10, 1966

Whatever it was that visited the skies over the Wanaque reservoir in January reappeared for its most fantastic showing to date in October of that same year. The first reported sighting of it came shortly after 9pm on the evening of Monday the tenth, when Robert J. Gordon, of Pompton Lakes, and his wife Betty saw what they described as a single saucer-shaped object about the size of an automobile glowing with a white brilliance. "At first I thought it was a star," Betty Gordon recalled, "but it seemed to be moving. It had a definite pattern. It would move to the left of the tower, and then move back directly over the tower. I'm quite sure it was not a star or planet." Bob Gordon, an officer on the Pompton Lakes police force, called police headquarters and requested that a patrolman be dispatched to their home. Officer Lynn Wetback responded, but was told that the "saucer" was already gone. The Gordons, and their neighbor Lorraine Varga, who had also witnessed the UFO, told Wetback that the object was headed in the direction of Wanaque Reservoir. The officer radioed Wanaque police and notified Sgt. Ben Thompson, a six-year veteran of night duty with the Wanaque Reservoir police department, who was driving his patrol car south along the reservoir at the time.
Thompson looked out of his car and to his astonishment saw the UFO heading right toward him. He pulled his cruiser over at Cooper's Swamp, near the 'Dead Man's Curve' stretch of Westbrook Road. "I saw the object coming at me," he said. "There was an extremely bright light. It was a bright white light, bright like when a light bulb is about to blow. It was very low. It appeared to be about 75 feet over the mountain. That would be Windbeam Mountain. It was traveling very quickly and in a definite pattern; first right, then up and down, then repeating the pattern. Distances are deceiving, but it might have covered an area of a half a mile. It went straight over my head, stopped in mid-air and backed right up. It then started zig-zagging from left to right. It was doing tricks. Making acute angular turns instead of gradual curved ones. It looked as big as a parachute. I got out of my car and continued to watch it for almost five minutes. It was about 200 to 250 yards away. It was the shape of a basketball with the center scooped out and a football thrust through it. Sometimes the football appeared to be perpendicular to the basketball and sometimes standing up on end. There were two different gadgets. It didn't make much noise, but as it was moving, it raised the water beneath it. I watched it maneuver, stirring up brush and water in the reservoir, it was about 150 feet up…I had difficulty seeing because the light was so bright it blinded me."
At this point other motorists along Westbrook Road also began to notice the strange light hovering in the sky and slowed their cars to get a better look at it. Fearing a collision, Thompson went back to his patrol car to turn on the red dome light as a warning. “The instant it started to flash,” he remembers, “the object sped away over the reservoir and, without passing over the horizon, disappeared. After three or four minutes it went out, as if a light bulb had been turned out. It seemed as if it had gone right into the mountain. I was dumfounded. It was more than a little frightening.”
Back at the Wanaque Police station telephones were deluged with calls from nervous residents who called in sightings and asked for answers. "The switchboards were completely jammed," recalled an officer at the Wanaque Reservoir station. "So was Pompton Lakes. There must have been 150 calls." Some witnesses may have their doubts about just what they saw that night, but Ben Thompson is convinced he saw a UFO.

Denial and Cover-Up

Of course no report of a UFO sighting would be complete without the element of an official cover-up, either actual or perceived, by the U.S. government, and this case is no different. Shortly after midnight on the first night of sightings over Wanaque, word came from Stewart Air Force Base in Newburgh, NY, that an Air Force helicopter with a powerful beacon had been on a mission over the area at about the same time the UFO was spotted. At 6:15am the following morning, however, an official spokesman for Stewart AFB, Major Donald Sherman, denied that any such aircraft had been on any such mission that night, and said that the helicopter 'explanation' had been without foundation. The next day the Pentagon said that the mystery object was indeed a helicopter with a powerful beacon.
McGuire Air Force Base in Wrightstown said that the object was a weather balloon, which had been launched from Kennedy International Airport. Shortly afterward the base called local police to tell them that their balloon explanation was just a lot of hot air.
Officials at Stewart Air Force Base and at McGuire denied any interest in the UFO. However, Wanaque Police reported seeing a pair of jets fly over the reservoir shortly after the UFO was first reported, and Patrolman Joe Cisco said that he distinctly recalled seeing helicopters in the Wanaque skies that night.

Improbable Explanations

Thirteen years after the 1966 UFO sightings at the Wanaque Reservoir, the non-profit organization Vestigia, which was based in Byram, prepared a detailed study of the strange lights that were witnessed. Vestigia, an organization that seeks to provide plausible scientific explanations for unexplained phenomena, came to the conclusion that the glowing lights that were seen over the Wanaque by hundreds of people were the result of seismic pressure from the nearby Ramapo fault. According to Vestigia founder Robert Jones, the fault in the Earth's crust creates an electrical energy field within the quartz-bearing rocks underground. At times of extreme pressure this highly charged field will supposedly escape into the atmosphere. Jones asserts that under just the right climatic conditions air particles that are exposed to this energy field will ionize and the result is a glowing sphere of light. (It's worth noting here that this is exactly the same rationale that was offered by Vestigia to explain the Hookerman Lights, after their extensive research on the Chester/Flanders railroad tracks.)
Vestigia's theories, however, did little to dissuade eyewitnesses from their belief that what they had seen was indeed a UFO. Wanaque officers Jack Wardlaw and Chuck Theodora rejected the Army's initial explanations of the mysterious lights as merely swamp gas, or a helicopter, and did likewise with Vestigia's contention that the glowing orbs were caused by a seismic anomaly.
“I’ve ridden these streets at midnight for years,” Wardlaw said, “and I know a strange light when I see one. The Army tried to tell me it was marsh gas – that’s ridiculous! Then they said it was a helicopter. Well, if you can’t discern a helicopter or hear one you have to be pretty bad off.”
One week after Stewart AFB sent down its inexplicable explanation for the Wanaque sightings, the Pentagon offered its own scenario. What hundreds of people had witnessed in the skies over the reservoir that January, and described as a brilliant white light which floated, hovered, shot up, down and side to side, was in actuality, according to the great military minds of Washington, nothing more than the planets Venus and Jupiter in a rare celestial alignment.
Quotations in the preceding article were taken from reports of the Wanaque UFO sightings published in the Newark News, the Herald-News, the NY Times, the Star-Ledger, and the Record. Some quotes have been edited for the sake of continuity.

Vouching For Joe Cisco

I lived in Wanaque, right next to the reservoir, in the 1960s during the UFO sightings period. No one ever did find out what created the lights. I personally knew Joe Cisco, the police chief at that time; he lived a block away from me. He was always an honest, truthful, and straightforward guy who told it like it is. I never got to see the lights, but many of my friends did. There WAS something out there. –DLC

A Skeptic Sees The Light in 1974

I grew up in North Jersey and am very familiar with the Wanaque area. I graduated from a local high school in 1969, spent 4 years in the US Navy, and then returned to live with my parents until my marriage in 1976. We remained in New Jersey until 1980 when a job transfer took us to the Midwest.
I served on an aircraft carrier and heavy cruiser during my time in the navy and consider myself to be familiar with most types of aircraft. I have seen various helicopters and high performance aircraft during day and night hours. I’ve kept up my interest in military and civil aircraft over the years and while I would not pass myself off as an expert, I do feel that I’ve seen more things in the sky than most folks.
I consider myself to be a skeptic on the question of UFO's. I do not subscribe to any particular theory, but I do believe that many incidents are deserving of further study. In the case of the Wanaque sightings, which apparently have been going on for a long time, there may be some phenomenon in the area which is of interest – although I'm not sure that it involves space aliens.
I personally saw one set of lights that you might find interesting. This incident occurred in 1974 during the winter months – I'd guess at January or February. It was about 10pm on a clear night. I was on a weekend pass from the military and was returning to my parents' house from visiting my sister who lived directly west of the reservoir. My usual route to return to my parents' house was to follow Westbrook Road east across the reservoir and then turn south on Ringwood Avenue. I was just east of Townsend Road when I observed a set of lights in the sky. The lights were very bright and appeared to be one large light in the center with a smaller light on either side.
The lights did not appear to be a “point source” – they definitely had a circular shape and a “hard” edge. The relationship of the lights to each other remained constant throughout the incident, which might indicate that there was a solid object behind the illumination. The color was an intense blue-white. Probably the best way I can describe it would be similar to looking directly into the tail cone of a high performance jet like an F-4 Phantom with the afterburners lit off.
I observed the "object" (for want of a better term) through the windshield of my car, above the hills which border the west side of the reservoir. The evening was very quiet and I could not hear any engine or helicopter blade noise, which would have been significant at what appeared to be the low altitude of the object. –Dave
The preceding article is an excerpt from Weird NJ magazine, “Your Travel Guide to New Jersey’s Local Legends and Best Kept Secrets,” which is available on newsstands throughout the state and on the web at www.WeirdNJ.com.  All contents ©Weird NJ and may not be reproduced by any means without permission.



On Language and Humanity: In Conversation With Noam Chomsky

The father of modern linguistics is still opening up new kinds of questions and topics for inquiry. 
"For the first time I think that the Holy Grail is at least in view in some core areas, 
maybe even within reach." Image: Wikimedia Commons

By: Amy Brand

I have been fascinated with how the mind structures information for as long as I can remember. My all-time favorite activity in middle school was diagramming sentences with their parts of speech. Perhaps it’s not surprising, then, that I ended up at MIT earning my doctorate on formal models of language and cognition. It was there, in the mid-1980s, that I had the tremendous good fortune of taking several classes on syntax with Noam Chomsky.

Although I ultimately opted off the professorial career track, I’ve been at MIT for most of my career and have stayed true in many ways to that original focus on how language conveys information. Running an academic publishing house is, after all, also about the path from language to information, text to knowledge. It has also given me the opportunity to serve as Chomsky’s editor and publisher. Chomsky and the core values he embodies of deep inquiry, consciousness, and integrity continue to loom large for me and so many others here at MIT, and are well reflected in the interview that follows.

Amy Brand: You have tended to separate your work on language from your political persona and writings. But is there a tension between arguing for the uniqueness of Homo sapiens when it comes to language, on the one hand, and decrying the human role in climate change and environmental degradation, on the other? That is, might our distance from other species be tied up in how we’ve engaged (or failed to engage) with the natural environment?

Noam Chomsky: The technical work itself is in principle quite separate from personal engagements in other areas. There are no logical connections, though there are some more subtle and historical ones that I’ve occasionally discussed (as have others) and that might be of some significance.

Homo sapiens is radically different from other species in numerous ways, too obvious to review. Possession of language is one crucial element, with many consequences. With some justice, it has often in the past been considered to be the core defining feature of modern humans, the source of human creativity, cultural enrichment, and complex social structure.

As for the “tension” you refer to, I don’t quite see it. It is of course conceivable that our distance from other species is related to our criminal race to destroy the environment, but I don’t think that conclusion is sustained by the historical record. For almost all of human history, humans have lived pretty much in harmony with the natural environment, and indigenous groups still do, when they can (they are, in fact, in the forefront of efforts to preserve the environment, worldwide). Human actions have had environmental effects; thus large mammals tended to disappear as human activity extended. But it wasn’t until the agricultural revolution and more dramatically the industrial revolution that the impact became of major significance. And the largest and most destructive effects are in very recent years, and mounting all too fast. The sources of the destruction — which is verging on catastrophe — appear to be institutional, not somehow rooted in our nature.


“The sources of the destruction — which is verging on catastrophe — appear to be institutional, not somehow rooted in our nature.”

A.B.: In a foreword to a book on birdsong and language, you wrote, with computational linguist Robert Berwick, that “the bridge between birdsong research and speech and language dovetails extremely well with recent developments in certain strands of current linguistic thinking.” Could you talk about that? What kind of insight might birdsong offer into our own language?

N.C.: Here we have to be rather cautious, distinguishing between language per se and speech, which is an amalgam, involving both language and a specific sensorimotor system used for externalization of language. The two systems are unrelated in evolutionary history; the sensorimotor systems were in place long before language (and Homo sapiens) appeared, and have scarcely been influenced by language, if at all. Speech is also only one form of externalization, even if the most common. It could be sign, which develops among the deaf very much the way speech does, or even touch.
Robert Berwick and Noam Chomsky’s 2015 book “Why Only Us” draws on developments in linguistic theory to offer an evolutionary account of language and humans’ remarkable, species-specific ability to acquire it.

Berwick and I have argued (I think plausibly, but not uncontroversially) that language, an internal system of the mind, is independent of externalization and basically provides expressions of linguistically formulated thought. As such, it is a system of pure structure, lacking linear order and other arrangements that are not really part of language as such but are imposed by requirements of the articulatory system (sign, which uses visual space, exploits some other options). Internal language is based on recursive operations that yield what we called the Basic Property of language: generation of an infinite array of hierarchically structured expressions that are interpreted as thoughts. The externalization system for language has no recursive operations — it is, basically, a deterministic mapping that introduces linear order and other arrangements that are required by the output system.
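In the minimalist formalism this recursive operation is Merge, which simply forms the unordered set of two syntactic objects; applying it repeatedly yields hierarchy with no linear order. A minimal sketch in code (an illustrative gloss, not a full grammar):

```python
# An illustrative gloss, not a full grammar: Merge forms the unordered set of
# two syntactic objects, so repeated application builds hierarchical structure
# with no linear order imposed.

def merge(x, y):
    return frozenset([x, y])

the_ball = merge("the", "ball")     # {the, ball}
vp = merge("kicked", the_ball)      # {kicked, {the, ball}}
clause = merge("John", vp)          # {John, {kicked, {the, ball}}}

# The structure is hierarchical but unordered; linearization ("John kicked
# the ball") is the externalization system's job, not Merge's.
print(clause)
```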

Birdsong is very different: it is an output system that is based crucially on linear order with very little structure. There are some suggestive analogies at the output levels, but my own judgment at least, shared I think by my colleagues who do extensive work on birdsong, is that while the phenomena are of considerable interest in themselves, they tell us little about human language.


“While the phenomena [of birdsong] are of considerable interest in themselves, they tell us little about human language.”

A.B.: You developed the theory of transformational grammar — the idea that there is a deep, rule-based structure underpinning human language — while a graduate student in the 1950s, and published your first book on it, “Syntactic Structures,” in 1957. How does the field of theoretical linguistics today compare with the future you might have imagined 60 years ago?

N.C.: Unrecognizable. At the time, linguistics altogether was a rather small field, and interest in the work you describe was pretty much confined to RLE [the Research Laboratory of Electronics]. Journals were few, and were scarcely open to work of this kind. My first book — “The Logical Structure of Linguistic Theory” (LSLT) — was submitted to MIT Press in 1955 at Roman Jakobson’s suggestion. It was rejected by reviewers with the rather sensible comment that it didn’t seem to fit in any known field (a 1956 version, with some parts omitted, was published in 1975, when the field was well-established).

Actually there was a tradition, a rather rich one in fact, which this work in some ways revived and extended. But it was completely unknown at the time (and still mostly is).

In the late ’50s Morris Halle and I requested and quickly received authorization to establish a linguistics department, which we considered a pretty wild idea. Linguistics departments were rare. Why should there be one at MIT, of all places? And for a kind of linguistics almost no one had ever heard of? Why would any student apply to such a department? We decided to try anyway, and amazingly, it worked. The first class turned out to be a remarkable group of students, all of whom went on to distinguished careers with original and exciting work — and so it has continued, to the present. Curiously — or maybe not — the same pattern was followed pretty much in other countries, with the “generative enterprise” taking root outside the major universities.

Our first Ph.D. student in fact preceded the establishment of the department: Robert Lees (a colleague at RLE), who wrote a highly influential study on Turkish nominalization. Since there was as yet no department, our friend Peter Elias, chair of the Department of Electrical Engineering at MIT, had the Ph.D. submitted there — rather typical and highly productive MIT informality. It must have surprised proud parents reading the titles of dissertations at graduation.


“New domains of inquiry have opened up that scarcely existed in the ’50s. Students are exploring questions that could not have been formulated a few years ago.”

By now the situation is dramatically different. There are flourishing departments everywhere, with major contributions from Europe, Japan, and many other countries. There are many journals — including MIT Press’s Linguistic Inquiry, which just celebrated its 50th anniversary. Studies of generative grammar have been carried out for a very wide range of typologically varied languages, at a level of depth and scope never previously imaginable. New domains of inquiry have opened up that scarcely existed in the ’50s. Students are exploring questions that could not have been formulated a few years ago. And theoretical work has reached completely new levels of depth and empirical validation, with many promising new avenues being explored.

Morris and I, in later years, often reflected on how little could have been foreseen, even imagined, when we began working together in the early ’50s. What has taken place since seemed almost magical.

A.B.: It was in that 1957 book that you used your well-known sentence: “Colorless green ideas sleep furiously” — a demonstration of how a sentence can be grammatically correct but semantically not make sense, thereby pointing to structure and syntax as something primordial and independent from meaning. A poet may object to the idea that such a sentence is meaningless (and could perhaps describe it as demonstrating what linguist and literary theorist Roman Jakobson called the “poetic function” of language), and a number of people have set about “injecting” the sentence, so to speak, with meaning. Why do you think that is?

N.C.: It’s because of a failure to comprehend the point of this and other examples like it. The point was to refute commonly held beliefs about grammatical status: that it was determined by statistical approximation to a corpus of material, by formal frames, by meaningfulness in some structurally independent sense, etc. The sentence you cite, call it (1), is plainly grammatical but violates all of the standard criteria. That’s why it was invented as an example. (1) differs from the structurally similar sentence (2) “revolutionary new ideas appear infrequently” (which, unlike (1), has an immediate literal meaning) and from (3) “furiously sleep ideas green colorless,” the original read backwards, which can hardly even be pronounced with normal prosody. The special status of (1) arises, of course, from the fact that although it violates all of the then-standard criteria for grammaticality, it has the same grammatical form as (2), which has an instantly interpreted literal meaning and is in no respect deviant. For that reason, it’s not hard to construct non-literal interpretations for (1) (this is possible for (3) too, but much more difficult, since (3) lacks the structural similarity to fully grammatical expressions like (2)).
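
The point about statistical approximation is easy to reproduce with a toy model. In this editorial sketch (mine, not anything from “Syntactic Structures”), an unsmoothed bigram model trained on a small corpus assigns the grammatical (1) and the ungrammatical (3) exactly the same vanishing probability, while assigning (2) a respectable score; corpus frequency alone cannot draw the grammaticality distinction.

```python
# Editorial sketch: a relative-frequency bigram model cannot separate
# (1) from its reversal (3); both contain only unseen words and bigrams.
from collections import Counter

corpus = "revolutionary new ideas appear infrequently . new ideas appear often .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_score(sentence):
    # Product of conditional relative frequencies; zero if any bigram
    # (or its first word) is unseen in the training corpus.
    words = sentence.split()
    score = 1.0
    for w1, w2 in zip(words, words[1:]):
        score *= (bigrams[(w1, w2)] / unigrams[w1]) if unigrams[w1] else 0.0
    return score

print(bigram_score("revolutionary new ideas appear infrequently"))  # 0.5
print(bigram_score("colorless green ideas sleep furiously"))        # 0.0
print(bigram_score("furiously sleep ideas green colorless"))        # 0.0
```

Smoothing changes the numbers but not the moral: whatever score such a model assigns tracks the corpus, not the structural fact that (1) is grammatical and (3) is not.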

All of this is discussed in “Syntactic Structures,” and much more fully in LSLT (which included some work of mine jointly with Pete Elias developing an account of categorization with an information-theoretic flavor that worked quite well with small samples and the hand-calculation that was the only option in the early ’50s, when we were doing this work as grad students at Harvard).

Failing to grasp the point, quite a few people have offered metaphoric (“poetic”) interpretations of (1), exactly along the lines suggested by the discussion in “Syntactic Structures.” Less so for (3), though even that is possible, with effort. It’s typically possible to concoct some kind of interpretation for just about any word sequence. The relevant question, for the study of language, is how the rules yield literal interpretations (as for (2)) — and secondarily, how other cognitive processes, relying in part on language structure (as in the case of (1), (3)), can provide a wealth of other interpretations.
Remarks on Noam: A compendium of tributes to Noam Chomsky for his 90th birthday.

A.B.: In our interview with Steven Pinker in May, we asked for his thoughts about the impact that the recent explosion of interest in AI and machine learning might have on the field of cognitive science. Pinker said he felt there was “theoretical barrenness” in these realms that was going to produce dead ends unless they were more closely integrated with the study of cognition. The field of cognitive science that you helped originate was a clear break from behaviorism — the emphasis on the impact of environmental factors on behavior over innate or inherited factors — and the work of B. F. Skinner. Do you see the growth of machine learning as something akin to a return to behaviorism? Do you feel the direction in which the field of computing is developing is cause for concern, or might it breathe new life into the study of cognition?

N.C.: Sometimes it is explicitly claimed, even triumphantly. Terrence Sejnowski’s recent “The Deep Learning Revolution,” for example, proclaims that Skinner was right! That is a rather serious misunderstanding of Skinner, and of the achievements of the “Revolution,” I think.

There are some obvious questions to raise about “machine learning” projects. Take a typical example, the Google Parser. The first question to ask is: what is it for? If the goal is to create a useful device — a narrow form of engineering — there’s nothing more to say.

Suppose the goal is science, that is, to learn something about the world, in this case, about cognition — specifically about how humans process sentences. Then other questions arise. The most uninteresting question, and the only one raised it seems, is how well the program does, say, in parsing the Wall Street Journal corpus.

Let’s say it has 95 percent success, as proclaimed in Google PR, which declares that the parsing problem is basically solved and scientists can move on to something else. What exactly does that mean? Recall that we’re now considering this to be part of science. Each sentence of the corpus can be regarded as the answer to a question posed by experiment: Are you a grammatical sentence of English with such-and-such structure? The answer is: Yes (usually). We then pose the question that would be raised in any area of science. What interest is there in a theory, or method, that gets the answer right in 95 percent of randomly chosen experiments, performed with no purpose? Answer: Virtually no interest at all. What is of interest are the answers to theory-driven critical experiments, designed to answer some significant question.

So if this is “science,” it is of some unknown kind.

The next question is whether the methods used are similar to those used by humans. The answer is: Radically not. Again, some unknown kind of science.

There is also another question, apparently never raised. How well does the Parser work on impossible languages, those that violate universal principles of language? Note that success in parsing such systems counts as failure, if the goals are science. Though it hasn’t been tried to my knowledge, the answer is almost certainly that success would be high, in some cases even higher (many fewer training trials, for example) than for human languages, particularly for systems designed to use elementary properties of computation that are barred in principle for human languages (using linear order, for example). A good deal is by now known about impossible languages, including some psycholinguistic and neurolinguistic evidence about how humans handle such systems — if at all, as puzzles, not languages.
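
What such a rule might look like is easy to illustrate. The sketch below is an editorial construction in the spirit of the “impossible language” experiments alluded to here (the specific rule is invented for this illustration, not drawn from that literature): negation marked by counting linear positions, a structure-blind operation of just the kind barred in human languages yet trivial for a generic sequence learner.

```python
# Editorial sketch of an "impossible language" rule: it counts linear
# positions and ignores hierarchical structure entirely.
def impossible_negate(sentence, marker="NEG"):
    # Insert the negation marker after the third word, whatever the
    # structure of the sentence happens to be.
    words = sentence.split()
    return " ".join(words[:3] + [marker] + words[3:])

print(impossible_negate("the child read the book"))
# -> the child read NEG the book
```

A generic statistical learner picks up such a rule from a handful of examples, and on the argument above that very ease counts against it as a model of the human language faculty, since humans handle such systems, if at all, as puzzles rather than as languages.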


“In just about every relevant respect it is hard to see how [machine learning] makes any kind of contribution to science, specifically to cognitive science.”

In short, in just about every relevant respect it is hard to see how this work makes any kind of contribution to science, specifically to cognitive science, whatever value it may have for constructing useful devices or for exploring the properties of the computational processes being employed.

It might be argued that the last question is misformulated because there are no impossible languages: any arbitrarily chosen collection of word sequences is as much of a language as any other. Even apart from ample evidence to the contrary, the claim should be rejected on elementary logical grounds: if it were true, then trivially no language could ever be learned. Nevertheless, some such belief was widely held in the heyday of behaviorism and structuralism, sometimes quite explicitly, in what was called “the Boasian thesis” that languages can differ from one another in arbitrary ways, and each must be studied without preconceptions (similar claims were expressed by biologists with regard to the variety of organisms). Similar ideas are at least implicit in some of the machine learning literature. It is, however, clear that the claims cannot be seriously entertained, and they are now known to be incorrect (with regard to organisms as well).

A further word may be useful on the notion of critical experiment — that is, theory-driven experiment designed to answer some question of linguistic interest. With regard to these, the highly touted mechanical parsing systems happen to perform quite badly, as has been shown effectively by computational cognitive scientist Sandiway Fong. And that is what matters to science, not merely matching the results of arbitrary experiments performed with no purpose (such as simulation, or parsing some corpus). The most interesting experiments have to do with “exotic” linguistic constructions that are rare in normal speech but that people instantly understand along with all of the curious conditions they satisfy. Quite a few have been discovered over the years. Their properties are particularly illuminating because they bring to light the unlearned principles that make language acquisition possible — and though I won’t pursue the matter here, investigation of infant language acquisition and careful statistical studies of the linguistic material available to the child (particularly by Charles Yang) reveal that the notion “exotic” extends very broadly for the infant’s experience.

A.B.: You remain as active as ever, on more than one front — collaborating with colleagues from other fields such as computer science and neuroscience on a series of papers in recent years, for example. What are you currently working on?

N.C.: It’s worth going back briefly to the beginning. I began experimenting with generative grammars as a private hobby in the late ’40s. My interest was partly in trying to account for the data in an explicit rule-based generative grammar, but even more so in exploring the topic of simplicity of grammar, “shortest program,” a non-trivial problem, only partially solvable by hand computation because of the intricacy of deeply ordered rule systems. When I came to Harvard shortly after, I met Morris Halle and Eric Lenneberg. We quickly became close friends, in part because of shared skepticism about prevailing behavioral science doctrines — virtual dogmas at the time. Those shared interests soon led to what later came to be called the “biolinguistics program,” the study of generative grammar as a biological trait of the organism (Eric went on to found the contemporary field of biology of language through his now classic work).

Within the biolinguistic framework, it is at once clear that the “holy grail” would be explanations of fundamental properties of language on the basis of principled generative grammars that meet the twin conditions of learnability and evolvability. That is the criterion for genuine explanation. But that goal was far out of reach.

The immediate task was to try to make sense of the huge amount of new data, and puzzling problems, that rapidly accumulated as soon as the first efforts were made to construct generative grammars. To do so seemed to require quite complex mechanisms. I won’t review the history since, but its basic thrust has been the effort to show that simpler and more principled assumptions can yield the same or better empirical results over a broad range.

By the early ’90s, it seemed to some of us that it was now becoming possible to bite the bullet: to adopt the simplest computational mechanisms that could at least yield the Basic Property and to try to show that fundamental properties of language can be explained in those terms — in terms of what has been called “the strong minimalist thesis (SMT).” By now there has, I think, been considerable progress in this endeavor, with the first genuine explanations of significant universal properties of language that plausibly satisfy the twin conditions.

The task ahead has several parts. A primary task is to determine to what extent SMT can encompass fundamental principles of language that have come to light in research of the past years, and to deal with the critical experiments, those that are particularly revealing with regard to the principles that enter into the functioning of the language faculty and that account for the acquisition of language.

A second task is to distinguish between principles that are specific to language — specific to the innate structure of the language faculty — and other principles that are more general. Particularly illuminating in this regard are principles of computational efficiency — not surprising for a computational system like language. Of particular interest are computational principles specific to systems with limited short-term resource capacity, a category that has recently been shown to have critical empirical consequences. Yet another task is to sharpen these principles so as to include those that play a role in genuine explanation while excluding others that look superficially similar but can be shown to be illegitimate both empirically and conceptually.

Quite interesting work is proceeding in all of these areas, and the time seems ripe for a comprehensive review of developments which, I think, provide a rather new and exciting stage in an ancient field of inquiry.


“For the first time I think that the Holy Grail is at least in view in some core areas, maybe even within reach.”

In brief, for the first time I think that the Holy Grail is at least in view in some core areas, maybe even within reach. That’s the main topic of work I’ve been engaged in recently and hope to be able to put together soon.

A.B.: You recently celebrated — along with a large body of friends and colleagues here on the MIT campus — your 90th birthday. Such milestones are of course cause for reflection, even as one looks ahead. Looking over your work to date, what would you say has been your most significant theoretical contribution to the field of linguistics?

N.C.: Opening up new kinds of questions and topics for inquiry.

A.B.: A very broad question, but perhaps one that speaks to the times we’re living in right now: What do you regard these days as cause for optimism?

N.C.: Several points. First, the times we’re living in are extremely dangerous, in some ways more so than ever before in human history — which will essentially come to an end in any recognizable form if we do not deal effectively with the increasing threats of nuclear war and of environmental catastrophe. That requires reversing the course of the U.S. in dismantling arms control agreements and proceeding — along with Russia — to develop ever more lethal and destabilizing weapons systems; and in not only refusing to join the world in trying to do something about the severe environmental crisis but even aggressively seeking to escalate the threat, a form of criminality with literally no historical antecedent.

Not easy, but it can be done.

There have been other severe crises in human history, even if not on this scale. I’m old enough to remember the days when it seemed that the spread of fascism was inexorable — and I’m not referring to what is referred to as fascism today but something incomparably more awful. But it was overcome.

There are very impressive forms of activism and engagement taking place, mainly among younger people. That’s very heartening.

In the final analysis, we always have two choices: We can choose to descend into pessimism and apathy, assuming that nothing can be done, and helping to ensure that the worst will happen. Or we can grasp the opportunities that exist — and they do — and pursue them to the extent that we can, thus helping to contribute to a better world.

Not a very hard choice.

Amy Brand is Director of the MIT Press.

Correction: An earlier version of the article stated that “The Logical Structure of Linguistic Theory” was submitted to MIT Press in 1955 and later published in 1985. In fact, it was published, by Springer, in 1975.

CRYPTOZOOLOGY

The Sandy Hook Sea Serpent

The North Shrewsbury (Navesink) River is one of the most scenic estuaries on the East Coast of America. Known for luxury yachts, stately homes, and iceboating, it is hardly the place you would expect to find the legend of a sea serpent. But in the late nineteenth century it was the location of one of many well-documented and unexplained sightings of mysterious sea creatures that plagued the waters of the North Atlantic.
The creature in question was seen by several people, all of whom were familiar with local sea life. While returning from a daylong outing, Marcus P. Sherman, Lloyd Eglinton, Stephen Allen, and William Tinton, all of Red Bank, encountered the monster. The Red Bank Register reported the witnesses to be sober and respectable local merchants.
At around 10:00 P.M. the yacht Tillie S., owned by Sherman, was making its way back to Red Bank after a picnic at Highlands Beach. The men had enjoyed a pleasant Sunday evening escaping the warm early summer weather. The moon was shining brightly, providing high visibility as the yacht cut through the water. A stiff summer breeze was blowing as they rounded the Highlands and headed toward Red Bank. At the tiller of the Tillie S., Marcus Sherman steered through the familiar waters. At the bow, Lloyd Eglinton kept watch for debris in the water ahead.
Artist’s rendition of the mysterious sea creature as it appeared in the December 1887 edition of Scientific American.
Suddenly Eglinton yelled that there was something in the water dead ahead. Sherman steered “hard to port” to avoid a collision. As they looked to see what the obstacle was, they were shocked: there ahead of them was the Sandy Hook Sea Serpent that had been sighted many times over the preceding two years. So credible were those earlier sightings that Scientific American had run an article offering the opinion that the monster was in fact a giant squid. The article, complete with drawings, appeared in the December 27, 1887, edition of the prestigious scientific periodical.
The earlier sighting at Sandy Hook had been made by several credible witnesses, most notably members of the Sandy Hook Life Saving Service, whose crew had sighted a large monster in the cold waters just off Sandy Hook in November 1879. The sighting was so credible that scientists were dispatched to take statements, and it is from these descriptive statements that the Sandy Hook Sea Monster was determined to be, in fact, a giant squid. For the next several years there were reports of all types of sea serpent sightings up and down the Atlantic Coast.
Illustration by Ryan Doan.
What the Red Bank men saw was surely no giant squid. It was described as about 50 feet long and serpentine in shape, swimming with snakelike undulations slowly and steadily through the water. As it drew halfway past the bow, its head rose from the water and gave forth a mighty roar. The head was described as small, somewhat resembling a bulldog’s in shape, with two short rounded horns just above the eyes. The eyes were said to be the size of silver dollars. Bristles adorned the monster’s upper lip, much like those found on a cat. The beast’s nostrils were quite large and flattened, and the serpent-like body tapered to a sword-like pointed tail. The frightened men stared in disbelief as it slowly and leisurely swam toward the shore of Hartshorne’s Cove. As the monster disappeared into the night, the men made their way back to Red Bank with a monster of a story to tell.
The men of the Tillie S. were not the only ones to see the creature. Other boaters on the water saw the serpent and gave nearly identical descriptions; in all, more than a dozen boaters had seen the strange creature on its nocturnal swim. Over the following months and years there would be other sightings of the monster in the Navesink, and in time it came to be known as the Shrewsbury Sea Serpent. No scientific explanation was ever given for these sightings, as had been done for the so-called Sandy Hook Sea Serpent. The description is not totally without merit, however: size aside, it is very similar to that of the oarfish. In any case, the mystery remains as to the true identity and fate of the sea serpent.

The preceding article, by Robert Heyer, is featured in issue #43 of Weird NJ magazine.