


The Tyranny of the Minority, from Iowa Caucus to Electoral College

Corey Robin
Jessica Rinaldi/The Boston Globe via Getty Images
A book stall sign at a store in Des Moines, Iowa, the day of 
the Iowa caucuses, February 3, 2020

It has been more than two weeks since the Iowa caucuses, and we still don’t know who won. That should give us pause. We don’t know in part because of a combination of technological failing and human error. But we’re also in the dark for a political reason. That should give us further pause.

No one disputes that Bernie Sanders won the most votes in Iowa. Yet Pete Buttigieg has the most delegates. While experts continue to parse the flaws in the reporting process, the stark and simple fact that more voters supported Sanders than any other candidate somehow remains irrelevant, obscure.

America’s democratic reflexes have grown sluggish. Not only has the loser of the popular vote won two out of the last five presidential elections, but come November, he may win a third. Like the children of alcoholics, we’ve learned to live with the situation, adjusting ourselves to the tyranny of its effects. We don’t talk anymore about who will win the popular vote in the coming election. We calculate which candidate will win enough votes in the right states to secure a majority in the Electoral College. Perhaps that’s why the scandal coming out of Iowa is the app that failed and the funky math of the precinct counters—and not the democratic embarrassment that the winner of the most votes doesn’t automatically win the most delegates.

In the original edition of his definitive history The Right to Vote, which came out two months before the 2000 election, Harvard scholar Alexander Keyssar never mentioned the Electoral College. Trying, in an afterword he wrote later, to account for his omission, Keyssar explained that before the election, he didn’t think the Electoral College had much to do with voting rights. He thought of it simply as a “device for aggregating” the popular vote.

At the time, this was an understandable belief. (Keyssar has since devoted himself to writing what is likely to be the definitive history of the Electoral College. It is scheduled to be published in June.) Not since 1888 had a candidate won the presidential election while losing the popular vote. So insignificant had the Electoral College become by 2000 that CBS News had to remind people on the eve of the election that the Electoral College wasn’t an “institution of higher learning” but the means by which the president is chosen. A relic of the nineteenth century, the divergence between the electoral vote and the popular vote was unlikely in the twentieth and twenty-first.

Like the Electoral College, the Iowa caucuses don’t aggregate people’s votes; they weigh them, assigning different values to the votes depending on where they are cast. Much has been made of the alchemy whereby support for a candidate is converted into “state delegate equivalents,” or SDEs. Less attention has been paid to the fact that the number of SDEs each caucus is assigned and distributes among the candidates depends not on the size of the local population or turnout at the caucus but on a formula biased toward rural parts of the state. In 2016, forty-five citizens in remote, sparsely populated Fremont County could effectively select one SDE. In more populous Jefferson County, it required two hundred and thirteen citizens to select one SDE. That’s how it’s possible to win the popular vote and lose Iowa—even if the app works fine and the caucus chairs are good at math.
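To make the arithmetic explicit (a rough, illustrative calculation using only the two figures quoted above), the relative weight of a caucus-goer’s vote in those two counties in 2016 works out to

\[
\frac{213\ \text{citizens per SDE in Jefferson}}{45\ \text{citizens per SDE in Fremont}} \approx 4.7,
\]

meaning a vote cast in Fremont County carried nearly five times the delegate-selecting weight of one cast in Jefferson County.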

“The basic principle of representative government,” the US Supreme Court declared in 1964, is that “the weight of a citizen’s vote cannot be made to depend on where he lives.” In Iowa, as in other parts of the American constitutional order, that principle is not in effect.


Even with their acceptance of slavery and a highly restricted franchise, many of the Framers were uneasy about the notion that some people’s votes might count more than others. When one group of delegates proposed that each state, regardless of the size of its population, should have an equal vote in Congress, James Madison denounced the plan as “confessedly unjust,” comparing it to the scheme of “vicious representation in Great Britain.” State-based apportionment, claimed Pennsylvania’s James Wilson, would only reproduce the inequality of Britain’s rotten boroughs, where a nearly depopulated Old Sarum—described at the time as sixty acres without a home—had two representatives in Parliament, while London, with 750,000 to one million residents, had four.

Madison and Wilson lost that debate; the United States Senate is the result. Within a year of the ratification of the Constitution, the 50,000 free residents of Delaware, the least populous state in the nation, had the same number of senators as the 455,000 free residents of Virginia, the most populous state. That makes for a ratio of power of nine to one. Today, according to a recent report by the Roosevelt Institute, that ratio has expanded to sixty-seven to one. Wyoming’s 583,000 residents enjoy as much power in the Senate as the nearly 40 million residents of California. (In the Electoral College, the power ratio is four to one.) 
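The same kind of back-of-the-envelope check, using only the populations cited in this paragraph (and rounding California to 39 million, an assumption consistent with “nearly 40 million”), reproduces the ratios:

\[
\frac{455{,}000}{50{,}000} \approx 9
\qquad\text{and}\qquad
\frac{39{,}000{,}000}{583{,}000} \approx 67,
\]

each figure measuring how many residents of the larger state share the same two Senate seats that the smaller state’s residents command.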
Bill Clark/CQ Roll Call via Getty Images
Placards at the Women’s March, the day after the Inauguration of Donald Trump as president, Washington, D.C., January 21, 2017

Eighteen percent of the American population—on average, whiter and older than the rest of the population—can elect a majority of the Senate. If those senators are not united in their opposition to a piece of legislation, the filibuster enables an even smaller group of them, representing 10 percent of the population, to block it. Should legislation supported by a vast majority of the American people somehow make it past these hurdles, the Supreme Court, selected by a president representing a minority of the population and approved by senators representing an even smaller minority, can overturn it.

The problem of minority rule, in other words, isn’t Trumpian or temporary; it’s bipartisan and enduring. It cannot be overcome by getting rid of the filibuster or racist gerrymanders—neither of which has any basis in the Constitution—though both of these reforms would help. It’s not an isolated embarrassment of “our democracy,” restricted to newly problematic outliers like the Electoral College and the Iowa caucuses. Minority rule is a keystone of the constitutional order—and arguably, given the constitutional provision that “no state, without its consent, shall be deprived of its equal suffrage in the Senate,” not eliminable, at least not without a huge social upheaval.

In the second half of the twentieth century, the United States had two extremely close presidential elections: 1960 and 1968. Given how effectively the Soviet Union propagandized about American apartheid, prodding US officials to push for desegregation in order to avoid bad publicity in the decolonizing world, it’s not inconceivable that the cold war also may have helped prevent those elections from producing awkward splits between the electoral vote and the popular vote. If American elites feared that the tyranny of a white majority couldn’t withstand scrutiny in the Global South, what kind of legitimation crisis might the tyranny of a white minority have provoked? As the United States today approaches a multiracial majority, the tyranny of a white minority is precisely what we are hurtling toward, without the countervailing pressure of an ideologically challenging superpower to shame us into stopping it.

It was also during the cold war that the United States grew increasingly uncomfortable with the rotten boroughs of the American South. “To say that a vote is worth more in one district than in another,” the Supreme Court declared in Wesberry v. Sanders (1964), would “run counter to our fundamental ideas of democratic government.” Though malapportionment in the South produced power ratios—three to one in Georgia, forty-one to one in Alabama—that seem quaint in light of today’s Senate, they were sufficiently large to provoke Chief Justice Earl Warren, in Reynolds v. Sims (1964), to spell out, in simple numerical terms, the challenge they posed to basic principles of democratic morality.

“It would appear extraordinary,” wrote the chief justice, if the votes of citizens in one part of a state were “given two times, or five times, or ten times the weight of votes of citizens” in another part of the state. That would mean that the first group of citizens “could vote two, five, or ten times for their legislative representatives” while the second group “could vote only once.” Under such a scheme of representation, the right to vote would not be “the same right” for all citizens. “To sanction minority control” of a legislative body “would appear to deny majority rights in a way that far surpasses any possible denial of minority rights.”

Warren clearly was worried about the implications of his argument for federal institutions like the Electoral College and the Senate. That is why he devoted several pages of his opinion to a preemptive rebuttal of “the so-called federal analogy.” Much of Warren’s rebuttal depended on the invocation of history. The apportionment scheme of the Senate and the Electoral College was a necessary compromise “arising from unique historical circumstances” of thirteen sovereign states contracting to form a national government. That claim from history sits uneasily with Warren’s claim, later in the opinion, that “history alone” is not a “permissible factor” in justifying departures from one-person, one-vote, that “citizens, not history or economic interests, cast votes.”

More important, however, is the history Warren invoked. Like the court’s other liberal justices, Warren cast the constitutional settlement over representation as a compromise between large and small states. As the Yale legal scholar Akhil Reed Amar has noted, this is “the stodgy version” of constitutional history that many of us grew up with. The less stodgy and increasingly accepted version holds that the battles over representation at the Convention had more to do with slavery than the size of states. As Madison pointed out, “the real difference of interests” at the Convention “lay, not between large and small but between Northern and Southern States. The institution of slavery and its consequences formed the line of discrimination.”

It’s clear why such historical truths could not be acknowledged during the cold war. Given the centrality of the Senate to Jim Crow and the near impossibility of eliminating the Senate, as well as the improbability of reforming the Electoral College, it made sense to describe these institutions as musty and ancient compromises between large and small states. In the 1960s, ironically, anachronism was less threatening than relevance. Now that the cold war is over, however, we can admit these truths. Now that the cold war is over, that may not matter.

We are at a strange moment in American history. On the one hand, the country has never been more interested in, and desperate to know, what the majority wants. As the rise of data geeks like the two Nates—Silver of fivethirtyeight.com and Cohn of The New York Times—and outlets like Vox show, our appetite for polling is ravenous; our capacity to digest the results, prodigious. On the other hand, we have an electoral system that makes it ever more difficult to determine the will of the majority, and a political system that makes that will ever more difficult to enact. Something’s gotta give.

Or not. In her 1997 collection of critical essays The End of the Novel of Love, Vivian Gornick remarks on that “climactic moment” in a John Cheever story “when the husband realizes his wife holds him in contempt, or the wife knows the husband is committing adultery.” With mounting dread, the reader wonders how either character can go on after this moment of truth. What makes the story truly “large, awesome, terrible,” however, is when the reader realizes that the characters do “go on like this.” That moment of truth leaves the reader “staring into space, the void opening at her feet.”

February 21, 2020, 8:38 am
Foolish Questions
Art Spiegelman
March 12, 2020 issue

SCREWBALL!: The Cartoonists Who Made the Funnies Funny
by Paul C. Tumey
Library of American Comics, 303 pp., $59.99


The Art of Rube Goldberg
an exhibition at the Museum of Pop Culture, Seattle, February 11–April 23, 2017; the Grand Rapids Art Museum, May 21–August 27, 2017; Citadelle Art Foundation and Museum, Canadian, Texas, September 15–November 26, 2017; the Contemporary Jewish Museum, San Francisco, March 15–July 8, 2018; the Portland Public Library, Portland, Maine, August 3–September 22, 2018; the National Museum of American Jewish History, Philadelphia, October 12, 2018–January 21, 2019; the Evansville Museum of Arts, History and Science, April 28–July 21, 2019; and the Queens Museum, October 6, 2019–February 9, 2020

King Features Syndicate/The Library of American Comics
Panels from Count Screwloose of Tooloose by Milt Gross, April 5, 1931

Two ladies on an outing to the Queens Museum one weekend last fall wander into “The Art of Rube Goldberg” exhibition. They enter casually and chuckle at a monitor playing a few moments from Charlie Chaplin’s Modern Times. A factory worker is immobilized in a complicated lunch-feeding contraption inspired by Rube Goldberg, a pal of Chaplin’s. It shovels some soup into his mouth, then short-circuits as it rams a whirring cob of corn up against his teeth, force-feeds him a couple of loose bolts, shoves a slice of cream pie into his dazed mug and then smears it with an automated napkin. Next, there’s a clip from a 1930 comedy, Soup to Nuts, written by Goldberg. (It includes a memorable antiburglar contraption but today is better known for featuring Larry, Moe, and Shemp before they became the Three Stooges.) The women glance at some of the original art on the walls as they drift out and one says, “Gosh, I never knew he was a cartoonist, too!”

Being a cartoonist too was the price of immortality for a cartoonist so famous that he became an adjective in Merriam-Webster’s dictionary as early as 1931: “accomplishing by complex means what seemingly could be done simply.” The adjective still has currency, as in a recent Foreign Policy opinion piece that describes the electoral college as “that cockamamie Rube Goldberg mechanism that never quite worked as intended.” (It shows up often in discussions of government policy and single-payer-health-care math.)

Rube Goldberg was the Christopher Columbus of the screwball contraption, finding a way to get from point A to point B by traveling through all the other letters of the alphabet. And, as with Columbus, a number of other intrepid explorers had gotten there first. At least two years before Goldberg, the renowned British illustrator and cartoonist Heath Robinson began publishing deadpan-droll tableaux that featured useless inventions, as did Denmark’s hidden treasure, cartoonist and humorist Storm Petersen. Both “Heath Robinson” and “Storm P.” were adjectivized in their own nations’ lexicons. None of this has anything to do with plagiarism; it’s a marker of the disorienting Machine Age these artists were born into, and of cartooning’s singular role as a Zeitgeist barometer.

Goldberg was born on July 4, 1883, to a Prussian-Jewish immigrant father who became a fixture in San Francisco Republican politics. Fearful that his son would become an artist, Max Goldberg insisted that Rube study to be an engineer at UC Berkeley. He graduated in 1904 to a job mapping out sewer mains for the city of San Francisco but bailed just four weeks later to become a sports cartoonist for the San Francisco Chronicle. Goldberg’s engineering background allowed his ingratiatingly lumpy cartoons to retain the diagrammatic clarity both comics and patent drawings demand. So “Father Was Right”!—to quote one of the many pre-Internet memes Goldberg generated in the more than sixty series he drew in his lifetime. Others include “No matter how thin you slice it, it’s still baloney,” “Mike and Ike they look alike!” (identical twins, one Irish and one Jewish), and his first big hit, “Foolish Questions,” from 1908 (as in Foolish Questions—No. 40,976: “Son, are you smoking that pipe again?” “No, Dad,” says the son sucking a pipe larger than his head, “this is a portable kitchenette and I’m frying a smelt for dinner”).

Goldberg is said to have produced about 50,000 drawings in his lifetime, and his inventions made up only a small part of his vast and mixed-up mix of features. He was himself a master of reinvention: in his early days a vaudeville performer, then an animator, song lyricist, radio personality, short story and essay writer for popular magazines, toastmaster, and star of his own TV show. His last long goodbye as a cartoonist was drawing political cartoons from 1939 to 1964, before he “retired” and became a sculptor until his death at eighty-seven in 1970. His editorial cartoons were drawn in the style of Herblock, but with regrettable anti–New Deal and occasional pro-McCarthy stances (perhaps shaped by Goldberg’s class interests—his cartoons had made him wealthy, he was married to the White Rose Tea heiress, and apparently he had inherited his father’s Republicanism; as I mentioned, “Father Was Right”!). 

Abrams ComicArts/© 2020 Heirs of Rube Goldberg
‘Peace Today’; cartoon by Rube Goldberg from The New York Sun, July 22, 1947

The traveling retrospective at the Queens Museum left out Goldberg’s more embarrassing political cartoons, and didn’t show even a tear-sheet of his powerful 1948 Pulitzer Prize–winning emblem of cold war anxiety, “Peace Today.” It depicts a suburban American family lounging on the lawn next to their two-story home, sitting atop a giant A-bomb that teeters over an abyss labeled “World Destruction.”

The exhibition supplemented the dozens of comic art originals on the walls with full broadsheet-size Sunday comic pages, vitrines over-stuffed with book covers, licensed games, postcards, buttons, and other ephemera, all to show the artist as an observer of social foibles with an acute sense of the absurd. The visitor was encouraged to linger over Goldberg’s deft yet humble grotesqueries and also to savor the rhythms of his copious prose. Back in the golden age of newspaper comics, there used to be space and time for written language.

The Art of Rube Goldberg, the definitive coffee table book from 2013 that served as the catalyst for this exhibit, provides over seven hundred well-selected images and several valuable historical and biographical essays. In the spirit of excess that the artist was known for, it even comes with a paper-engineered moving contraption operated by a pull-tab on its cover that will make the book enticing to any child near that coffee table. Whatever childhood pleasures Goldberg’s work may offer, as Adam Gopnik points out in his introduction,

there seems, to adult eyes, to be in [Goldberg’s] work some fatal, almost unconscious, commentary on the madness of science and the insanity of modern invention…. He doubtless would have laughed, or shaken his head in disbelief, if asked how his work related to Duchamp’s machine aesthetic, or to Dada—and yet every mark an artist makes takes place in a moment of time, and within a common frame of meaning.1

Duchamp and Man Ray embraced Goldberg as a fellow Dada traveler by putting one of his cartoons in their 1921 issue of New York Dada, but the feeling wasn’t exactly mutual. Like many American cartoonists of his day, Goldberg was dismissive of nonobjective art. As Peter Marzio, his biographer, wrote in 1973, “Rube believed that fine art was good only if it won public acceptance. Sales were Rube’s test of beauty.”2 Still, the cartoonist’s inventions showed up in MoMA’s landmark 1936 exhibit, “Fantastic Art, Dada and Surrealism,” and were also part of its 1968 show “The Machine As Seen at the End of the Mechanical Age.”
Abrams ComicArts/© 2020 Heirs of Rube Goldberg
Drawing by Rube Goldberg from Cosmopolitan, 1928

In 1970, two weeks before his death, a full retrospective of his work, entitled “Do It the Hard Way: Rube Goldberg and Modern Times,” was unveiled at the Smithsonian Institution. The show, according to Marzio, who cocurated the exhibition, was something of a blockbuster, with over two thousand guests attending the opening. Goldberg’s cartoons were also something of a “block buster” in the sense of opening up the redlined ghetto of “low” art, welcoming it into the hallowed precincts of High Culture. They were among the earliest examples of comic art ever to be displayed on art museum walls.

Over on the comics side of the collapsing high-low divide, Goldberg’s influence can be found in the work of generations of influential cartoonists, including Dr. Seuss, Harvey Kurtzman, and Robert Crumb—all of whom have now been exhibited in museums. In fact, with categories of every kind crumbling around us daily, seeing comic art on walls has become delightfully commonplace, though the celebration of Goldberg’s pioneering art in Queens was met with smaller crowds than he deserves. (The museum’s organizers tried to entice their audience, even having a well-intended if less-than-successful “Machine for Introducing an Exhibition” built to stand in front of the wall leading into the first gallery. The press of a button sets off a chain reaction involving an electric fan, a windmill, a die-cut Rube Goldberg drawing of a boot, a watering can, and three separate computer screens, each with simple animations of animals, in a process that eventually unfurls a welcome banner. Mixing analog and digital technologies could have provoked thoughts about what the creator of useless complexity might have made of life in our age of sleek electronics, but the whole device—barely a gizmo, let alone a contraption—looked minimalist and wan rather than deliriously tangible and maximalist, like the artist it was meant to introduce. The Saturday I visited, I pressed the green start button, and nothing happened. Then I noticed a sign on a stand nearby: “This work is temporarily out of order. We apologize for the inconvenience.”)
Abrams ComicArts/© 2020 Heirs of Rube Goldberg
A 1995 US postage stamp adapted from artwork by Rube Goldberg in Collier’s, September 26, 1931

*

Foolish Question #25,743,000: “So are you somehow trying to say that Rube Goldberg was a serious Fine Artist???”

“No, you Boob! I’m pointing out to the uninitiated that Rube Goldberg was a fine Screwball Artist!”

*

Now that comics have put on long pants and started to strut around with the grownups by calling themselves graphic novels, it’s important to remember that comics have their roots in subversive joy and nonsense. For the first time in the history of the form, comics are beginning to have a history. Attractively designed collections of Little Nemo, Krazy Kat, Thimble Theater, Barnaby, Pogo, Peanuts, and so many more—all with intelligent historical appreciations—are finding their way into libraries.

Paul Tumey, the comics historian who co-edited The Art of Rube Goldberg book seven years ago, has recently put together a fascinating and eccentric addition to the expanding shelves of comics history.3 The future of comics is in the past, and Tumey does a heroic job of casting a fresh light on the hidden corners of that past in Screwball!: The Cartoonists Who Made the Funnies Funny. It’s a lavish picture book with over six hundred comics, drawings, and photos, many of which haven’t been seen since their twenty-four-hour life-spans in newspapers around a century ago. The book is a collection of well-researched short biographies of fifteen artists from the first half of the twentieth century, accompanied by generous helpings of their idiosyncratic cartoons. Goldberg—whose name schoolchildren learn when their STEM studies bump into chain reactions—is the perfect front man to beckon you toward the other less celebrated newspaper cartoonists who worked in the screwball vein that Tumey explores.

Screwball is an elusive attitude in the language of laughs and, like pornography, it’s hard to define but easy to recognize. Tumey prowls for common denominators and trails of influence that connect these odd ducks and their droppings. But the closer one looks, the less they seem to have in common. Virtually all the earliest newspaper comics were designed to be funny, but not all the funnies were screwball. The book is a survey, not in the sense of a Comics 101 history course serving up a knowledgeable overview, but more like a deep exploratory mining dig that samples underground specimens to assay what’s of value. The project is hardly arbitrary, but it doesn’t seem exactly definitive, either. It’s actually sort of, well, screwy—and it may just be that screwball is its own shortest definition.

One foot of the slippery screwball stretches back through vaudeville to commedia dell’arte with its stock situations and characters; the other foot strides forward toward Dada, surrealism, and the theater of the absurd—while the third foot of this ungainly creature remains firmly balanced on a banana peel. Screwball comics tend toward the manic, excessive, over-the-top, obsessive, irrational, anarchic, and grotesque; they can veer toward parody or satire, but at their core they are an assault on reason and its puny limitations. They wage a gleeful war on civilization and its discontents—armed mostly with water-pistols, stink bombs, and laughing gas.

The cinematic analogs of screwball comics would include the Marx Brothers’ Duck Soup, Olsen and Johnson’s Hellzapoppin, as well as the early animated shorts of the Fleischer brothers, Tex Avery, et al. Screwball comics have little to do with the more attended-to genre of romantic “screwball comedy”—movies like Howard Hawks’s Bringing Up Baby or Frank Capra’s It Happened One Night—except in their velocity. Screwball strips, designed for family newspapers, had even fewer hints of sex than those screwball romantic comedies, though their punchlines did elicit sublimated climaxes with the so-called straight man flying out of the last box, feet in the air. 
King Features Syndicate
Thimble Theater by E.C. Segar, January 4, 1934

In Smokey Stover, Bill Holman’s essence-of-screwball fireman strip, rapid-fire puns rage through all the panels like kindling for a four-alarm newsprint conflagration. “Plop-take” feet sail out of their shoes, revealing toes that poke through sock holes; bowties pop off shirt collars while mustaches, hairpieces, eyeglasses, false teeth, and even ears explode clear off of heads. Another symptom of this approach—rechanneled id erupting in the release of a belly laugh—is what Tumey dubs “the screwball spin,” a blurry mandala of repeating heads and limbs that form a proto-Futurist pinwheel of frenzied slapstick action. It was, for example, how Elzie Segar drew Popeye pummeling an adversary in the ring—like a rapidly rotating phénakistiscope.
Tribune Content Agency
Panels from Smokey Stover by Bill Holman, November 15, 1936

I can only tour you through a few of the giant screwballs spinning around in this treasure chest of salvaged newsprint, and will start as the book does, with Frederick Burr Opper. A founding father of the funnies, he’s credited with making speech balloons a regular part of the comics’ formal vocabulary. He was already a seasoned and highly regarded artist of forty-one by the time he was recruited by William Randolph Hearst’s Journal in 1899 as a big gun in the epic newspaper war between Hearst and Joseph Pulitzer.

Pulitzer had developed a high-speed four-color newspaper press, hoping to bring the great art of the ages to his masses. When he found that the imprecise, out-of-register printing made the Old Masters look like blurry Impressionists, he settled on a comics supplement with black outlines containing flat colors. The funnies became a major weapon in the battle for circulation—and Hearst soon set up his own color supplement, announcing it as “eight pages of iridescent polychromous effulgence that makes the rainbow look like a lead pipe!”

Opper, a formidable draftsman, had become a star in Puck, the color-lithographed, Progressive Era satirical weekly. Instead of bringing gravitas to Hearst’s paper, Opper remade himself as a king of comedy, working in a playful, casual mode. His first and longest-lived hit, Happy Hooligan, featured a hapless hobo with a tin can for a hat; his well-meaning but dimwitted attempts to be helpful brought swirls of multipanel havoc that often ended with a screwball spin of cops brandishing nightsticks and dragging our hero off to the slammer. He was a precursor of Chaplin’s tramp, Goldberg’s long-lived Boob McNutt, and, a half century later, another beautiful loser named Charlie Brown. Happy Hooligan is a genial version of the xenophobic caricatures of simian-featured Irish immigrants that Thomas Nast had angrily drawn for Harper’s Weekly and that Opper, following in Nast’s footsteps, had produced for Puck.
King Features Syndicate/Library of American Comics
A panel from Happy Hooligan by Frederick Burr Opper, May 23, 1909

Ah, stereotypes! Cartoons are a visual language of simplification and exaggeration whose vocabulary is entirely premised on them. It’s as if the N-word were the only word in the dictionary to describe people of color, and even the poetry that comics can offer had to be written in this debased language. We humans are hard-wired toward stereotyping, and, alas, comics echo the way we think. It’s part of the medium’s danger and its power. Tumey’s collection of historical material comes with a trigger warning:


These comic strips were created in an earlier time and may include racial and other stereotypes; we reproduce them in historical context with the understanding that they reflect a thankfully bygone era.

He scrupulously tries to depict the work of the era accurately without grinding our eyeballs into an overdose of toxic images. Still, it’s hard to guide an uninitiated reader to distinguish between intentional insults and images that—considering the form and our nation’s history—are only ambiently offensive, reflecting the time in which they were made.

Which brings us to Eugene “Zim” Zimmerman, who was one of America’s most famous cartoonists at the turn of the twentieth century. A consummate graphic artist, Zim had an unfortunate predilection for the ethnic and racial themes that were especially popular at the time, and—though this material may represent only, say, 30 percent or so of his prolific output—he was brutally skillful at it. Zim once joked that he and his fellow cartoonists at Puck treated the various races and creeds that made up America with gloves, the kind boxers wear. It may explain why—despite the large influence he had on other cartoonists of his time—Zim has been more or less canceled from comics histories. Still, Rube Goldberg deeply admired Zim’s art and eulogized him as “the dean of grotesque pictorial humor.”
Judge Company of New York
‘Getting There’; cartoon by Eugene Zimmerman from Judge, September 20, 1902

George Herriman, the creator of Krazy Kat, sits far from Zim on the screwball spectrum, on as high a throne as a comics canon can offer. Krazy Kat’s relentless vaudeville variations of a kat getting whacked by a mouse hurling a brick might make the work the ultimate expression of screwball, but its ethereal and gentle subtlety beckons the strip into a transcendent world of its own. Knowing that Krazy Kat is now widely available again, Tumey has chosen instead to offer a slice of Herriman’s far more obscure Stumble Inn. Krazy was getting an unenthusiastic response from most readers and most newspaper editors—Hearst often had to insist that his newspapers run it—so the cartoonist doubled his workload by simultaneously providing his syndicate with a more conventionally funny comic strip. Happily, he didn’t do conventional very well; Stumble Inn is a strip about a fleabag hotel that seems to anticipate John Cleese’s Fawlty Towers. It looks a bit like Mutt and Jeff if that strip had been drawn with the precision of a Renaissance master—some of the most breathtakingly beautiful cartooning I’ve ever seen.

Herriman was born in New Orleans in 1880. A Creole of color, he and his family left the city when he was ten years old and, as Michael Tisserand documents in his meticulous and revelatory biography, relocated to Los Angeles, where they passed for white for the rest of their lives.4 Reading Krazy Kat through that lens adds new layers of complexity to a strip about a black cat and the white mouse (pink on Sundays) who loathes him.5

The most poignant panel in Tumey’s book is in a wonderfully convoluted Stumble Inn sequence in which Mr. Stumble, Owl Eye (the hotel’s house detective), and a deadbeat boarder they’re now trying to lure back to their inn are each disguised in hats, long coats, and false beards. Soda Popp, the sweet young bellboy, with black face and large red lips (Herriman always drew his black humans according to the then standard cartoon physiognomic code), looks at the camouflaged Owl Eye and says, “’at ole ‘Owl Eye’ he’s so disguised I bet he dont know who he is his’sef.”
King Features Syndicate/Library of American Comics
Stumble Inn by George Herriman, April 5, 1924

While Stumble Inn sits in a quiet and conventional suburb of Coconino County, in a dingy small town elsewhere on the comics pages we find Our Boarding House, established in 1921 by Gene Ahern. The homely daily panel orbits around the landlady’s lazy gasbag of a husband, Major Hoople, a hybrid of Falstaff, Munchausen, and W.C. Fields. He chases one hopeless get-rich-quick scheme after another and regales the other lodgers with tall tales about his big-game hunting or his heroism as a prisoner in the Boer War sneaking messages out hidden in alphabet soup.

Syndicates back in the day often required their artists to provide “toppers” for their Sunday pages—small “throwaway” strips that could independently sit atop the main feature so papers could brag about having, say, thirty-two strips in their supplements rather than sixteen, or, even better, they could replace the feature with an ad. Comics always existed in the interstice between art and commerce, and Ahern turned the “minor” toppers into something simultaneously ridiculous and sublime. The Nut Bros., Ches and Wal, sat above Our Boarding House, breaking the fourth wall by offering their pun-laden old chestnuts with a self-aware wink and a surrealistic edge—goofy sight gags and costume changes from one panel to the next.

In 1936 Ahern moved to a larger syndicate at twice the pay but had to leave Major Hoople and the strip’s title behind. A near clone, Judge Puffle, now lived under a new logo, Room and Board. The Squirrel Cage, his topper for Room and Board, developed into one of the underappreciated hidden glories in the history of comics. It started as a direct continuation of The Nut Bros. but transformed into a strip that didn’t just have a surreal edge—it was surreal to its core. 
The Squirrel Cage by Gene Ahern, January 5, 1947

A bearded Little Hitchhiker—the direct model for R. Crumb’s Mr. Natural a generation later—started to pop up in the strip’s shifting landscapes with thumb extended, inscrutably asking: “Nov shmoz ka pop?” The vaguely Slavic-sounding gibberish was pretty much the only thing he ever said, and it became an unanswerable catchphrase, the kind that screwball strips were often able to wormhole into readers’ brains through satisfying repetition.
King Features Syndicate/Library of American Comics
A panel from The Squirrel Cage by Gene Ahern, July 1, 1945

Ahern’s shifting backgrounds and props are less graceful than those in Krazy Kat. The characters, no matter how odd, seem to walk through their uncanny environment and impossible situations with the same resignation as if they were waiting for Godot. It all makes the irreal seem…real. The unearthly world of the top strip exists in a dialectical relationship to the drab boarding house in the strip below that contains the outsized fantasies of Judge Puffle.

Of all the wise guys6 gathered in Screwball!, Milt Gross is perhaps the essence of the idiom—cartooning distilled into precious drops of Banana Oil. (“Banana Oil,” for the uninitiated, was one of those aforementioned wormhole phrases, Gross’s equivalent of Rube Goldberg’s “Baloney!”) Gross was born to Russian-Jewish immigrants in 1895 and raised in the Bronx. In the early 1920s he created Banana Oil, among other strips, as well as an illustrated syndicated weekly newspaper column called Gross Exaggerations that crosscut conversations heard through the dumbwaiter of a small tenement building. Talk about finding one’s voice! It was written in Gross’s fractured Yiddishized English (Is diss a lengwitch? Dunt esk!) and gathered into a best-selling book called Nize Baby in 1926 before becoming a comic strip. His malapropisms and phonetic spelling ache to be read out loud for comprehension. His 1927 skirmish in the war against Christmas was a retelling of the Clement Clarke Moore poem, “De Night in De Front from Chreesmas,” which starts:


’Twas de night befurr Chreesmas und hall troo de house
Not a critchure was slipping—not ivvin de souze,
Wot he leeved in de basement high-het like a Tsenator,
Tree gasses whooeezit—dot’s right—it’s de jenitor!

Gross was doubly gifted: an irresistibly risible writer and visually a comics genius. His cartoons are pure doodle: effortless and effervescent. The art looks like he was giggling uncontrollably while the cartoons just shpritzed out of his pen—and his laugh is infectious, bouncing off the page so you laugh too. Of his many creations, Count Screwloose of Tooloose, a Sunday page that launched in 1929, may be his screwiest. It reveals the thematic heart of all the screwball works in this book and beyond: The Count, a half-pint, sausage-nosed, cross-eyed resident of Nuttycrest Sanitarium, has an even smaller companion, a yellow dog named Iggy who wears a Napoleon hat. In each episode the Count devises a nutty new way to escape the institution and reenter the world outside its walls (see illustration above). When he sees just how out of their minds the people out there are, he flees back to Nuttycrest, where his pup rapturously welcomes him home, as the Count exclaims, “Iggy, keep an eye on me.” Gross’s theme is an inverted way of expressing what Salvador Dalí famously said a few years later: “There is only one difference between a madman and me. I am not mad.” Count Screwloose deploys Gross’s spontaneous and flexible pen line to search for the difference between the delusional and the rational.
King Features Syndicate/Library of American Comics
Count Screwloose of Tooloose by Milt Gross, January 25, 1931

The clearest expression of that search can be found in the subversive work of Harvey Kurtzman, the cartoonist who founded Mad. He is not included in Screwball!, since Tumey felt he had to limit himself to newspaper cartoons from the late nineteenth century through World War II to keep his project manageable—and Kurtzman’s Mad, originally a comic book launched in 1952, falls outside those parameters. But in a short afterword, Tumey writes, “Much of the material in Mad belongs to the lineage traced in this book. In fact, this book could be seen as the road to Mad.”

Indeed, the early Mad is the apotheosis of the aesthetic presented in Screwball! Kurtzman’s precisely timed comics look like a slower, more methodical and cerebral take on Gross’s mishegoss. The core tropes of the Smokey Stover take-no-prisoners chaos—its wacky signage that fills up all extra white space along with backgrounds that burst with sight gags—deeply informed the Mad that Kurtzman wrote and edited. Those “Easter eggs” in the backgrounds (what he and his lifelong collaborator, Bill Elder, called “chicken fat,” and my generation of underground cartoonists called “eyeball kicks”) are clear symptoms of a cartoonist irrepressibly interested in amusing himself as well as the reader.
William M. Gaines Agent, Inc.
‘Mickey Rodent!’; cartoon by Harvey Kurtzman and Bill Elder from MAD no. 19, 1955

If the road to Mad was a loopy rollercoaster, the road from it has been riddled with potholes and has finally run into a wall. Mad was a revolutionary comic book. (Kurtzman transformed it into a magazine in 1955 and left in 1956 after an altercation with the publisher.) Its pointed parodies and satires, its anarchic questioning of authority, and its class-clown silliness shaped the generation that grew up to protest the Vietnam War. Kurtzman was concerned not only with being funny but with interrogating and deconstructing his subjects with a self-reflexive irony: he needed to locate something he could say that was true. (In Mad’s parody of Mickey Mouse, Kurtzman and Elder find something sinister in Disney’s Magic Kingdom—“Mickey Rodent” has stubble on his face and rat-traps on his nose and finger. In the splash panel, the Disney police are seen dragging off “Horace Horseneck” for not wearing the mandatory white gloves.) Reflecting on his work in 1977, Kurtzman said, “Truth is beautiful. What is false offends.”7 Even after Kurtzman left the magazine, Mad retained just enough of its Promethean spark to wise up the generation or two after who found it. It has influenced American comedy—from Saturday Night Live to The Simpsons and Colbert’s Late Show—where the spark continues to glow.

But, alas, revolutions grow old and die. I was once told that Rudolph Giuliani grew up with a complete set of Mad. It may have been “fake news,” but the information crushed me: the vaccine that inoculated us against the suffocating 1950s was not a panacea.

This past October, the geriatric remains of Mad were put into cryonic deep freeze, to exist mostly as bimonthly specialty-shop reprints with a planned annual of new material to keep it in half-life in case any swell merchandising opportunities come along. The death knell was sounded last May, when Trump, hoping to tar the Democratic candidate Pete Buttigieg with one of his sophomoric and indelible zingers, announced that “Alfred E. Neuman cannot become president of the United States.”8 Asked about it, the thirty-seven-year-old mayor responded (either cannily or candidly, or both), “I’ll be honest, I had to Google that. I guess it’s just a generational thing. I didn’t get the reference.”

Yet the legacy of Mad is still with us. Trump is often referred to in the press as a “screwball,” but “screwball”—an ironic term of endearment, a synonym for “lovable eccentric”—just won’t do for a pathological, lying narcissist with dangerous sociopathic tendencies.

The existential threat facing screwball humor today comes from a “screwball” president who has weaponized postmodernism. Mad taught me to be skeptical of all mass media and to question reality (including my beloved Mad), but the lesson requires a belief that there might actually be something like consensual reality. Nonsense assumes there’s such a thing as sense and puts it in relief by denying reality’s power even if just for a moment.

*

Foolish Question #25,743,001: “So, is screwball humor dead?”

“Sorry, I can’t hear you, I have a banana in my ear.”

*

In early December, a banana duct-taped to a wall—Maurizio Cattelan’s Comedian—sold for $120,000 at Art Basel in Miami. It made headlines all around the world for a minute, as either an immortal work of twenty-first-century art or an event destined to be more ephemeral than any of the pages in Screwball! A few days later a New York City–based performance artist pulled the banana off the wall and ate it, declaring the installation “very delicious.”

This caper brings to mind Goldberg’s “warning” at the front of Chasing the Blues, his first anthology of cartoons, in 1912:


I must burden you with a terrible confession. This is not a work of art!…


My artistic deficiencies remove me far from the sphere of Rembrandt and Michael Angelo. My ever-present realization of the material virtues of kidney stew and gorgonzola cheese has permanently destroyed whatever of the ethereal that may have been born within me…. A touch of art may nourish the soul, but a good laugh always aids the digestion.

*


Foolish Question #25,743,001.75: “So, can screwball comics ever be art?”


“No, you sap. Humor is the last thing one can take seriously—it’s priceless.”


1. Jennifer George, The Art of Rube Goldberg: (A) Inventive (B) Cartoon (C) Genius (Harry N. Abrams), p. 16.

2. Peter C. Marzio, Rube Goldberg: His Life and Work (Harper and Row), p. 305.

3. Full disclosure: the tribe of obsessive comics scholars interested in this sort of thing is a small one. Tumey and I became friends through a screwball blog that he started in 2012 to contemplate the subject. See screwballcomics.blogspot.com.

4. Michael Tisserand, KRAZY: George Herriman, a Life in Black and White (Harper, 2015).

5. Chris Ware wrote about Herriman and race for the NYR Daily: “To Walk in Beauty,” January 29, 2017.

6. And in my best post–David Foster Wallace footnote mode, might I add what may not need saying at all: every one of these wise guys was a guy. While there have been many female screwballs in the history of the performing arts—Fanny Brice, Beatrice Lillie, Carole Lombard, and Gracie Allen come to mind, as do the two broads in Broad City—there don’t appear to have been any female screwballs at all in the overwhelmingly male domain of early-twentieth-century newspaper comics. I refer interested readers to historian and comics artist Trina Robbins’s several books devoted to casting light on women cartoonists and their accomplishments.

7. Bill Schelly, Harvey Kurtzman: The Man Who Created Mad and Revolutionized Humor in America (Fantagraphics, 2015), p. ix.

8. For those kiddies too young to know, the venerable “What—Me Worry?” gap-toothed grinning idiot served as Mad’s mascot from 1956 until its demise.

Was Psychedelic Guru Terence McKenna Goofing About 2012 Prophecy?
By John Horgan on June 6, 2012 
Rational Scientific American readers surely scoff at claims—based on ancient Mayan calendars and other esoterica—that life as we know it will end this December, especially now that NASA experts have "crushed" the prophecy. But many folks out there are reportedly worried. Perhaps I can allay their anxieties by relating my encounter with a prominent popularizer of the 2012-doomsday meme, psychedelic guru Terence McKenna.

In his books and lectures, McKenna extolled psychedelic drugs as a spiritual path superior to that of any mainstream religion. His book Food of the Gods (Bantam, 1992) was a rigorous argument—complete with footnotes and bibliography—that mind-expanding plants and fungi catalyzed the transformation of our brutish ancestors into cultured modern humans. The visions inspired in our ancestors by these substances—and particularly by plants containing psilocybin, dimethyltryptamine (DMT) and other psychedelics—were the seeds from which language sprang, followed by the arts, religion, philosophy, science and all of human culture, McKenna asserted. By outlawing psychedelics, he said, we have cut ourselves off from the wellspring of our humanity.

Food of the Gods showed that McKenna could play the serious scholar when he chose. But he was truer to himself in True Hallucinations (HarperSanFrancisco, 1993), a memoir packed with psychedelic tall tales and wild riffs on the nature of reality. McKenna was less a scientist or even philosopher than a performance artist or jester, and I mean that as a compliment. What follows is an edited excerpt from Rational Mysticism (Houghton Mifflin, 2003), which describes my 1999 meeting with McKenna and my attempt to find out what he really thought would happen in December 2012.

The Man in the Purple Sparkly Suit

Raised in a Colorado ranching town, Terence McKenna discovered psychedelics in 1965 when he enrolled at the University of California, Berkeley, where he studied ecology and shamanism (ah, the Sixties). He became an itinerant scholar-adventurer, traveling to the Far East and South America in search of exotic mind-altering philosophies and substances, including two of his favorites, magic mushrooms and ayahuasca, a DMT-laced tea brewed from plants native to Amazonia.

In the early 1970s, McKenna and his younger brother Dennis (who became a slightly more sober authority on psychedelics) trekked into the jungles of Colombia, where they ingested enormous quantities of ayahuasca, marijuana and mushrooms. McKenna's visions persuaded him that tryptamines—a class of chemicals that includes DMT and psilocybin—were messages from an alien civilization, or "overmind."

McKenna's trips also inspired his "timewave" theory, which holds that existence and even time itself emerge from the interaction between two opposing forces, one conservative and the other creative. McKenna devised a mathematical model that charted the ebbs and surges of creative, "novel" events—including wars, revolutions, famines, plagues and scientific and technological advances—throughout human history. When McKenna extrapolated the model into the future, it predicted a huge spike in novelty in December 2012.

McKenna first suggested that something dramatic might happen in 2012 in his 1975 book The Invisible Landscape, co-written with Dennis, and he elaborated on his prediction—and the timewave theory—in True Hallucinations. In the latter book, McKenna's arch, ultra-hip tone gave way now and then to moments of genuine poignancy. He confessed that, as he was finishing his book, his sixteen-year marriage to his wife Kat, with whom he had two children, was dissolving. All his psychedelic insights, he said, had "done nothing to mitigate or ward off the ordinary vicissitudes of life. Like the Soul in Yeats's poem I am still an eternal thing fastened to the body of a dying animal."

I found this blend of earnestness and irony intriguing. McKenna himself seemed unsure whether his psychedelic visions were genuine revelations or just delusions, whether he was serious when he talked about the timewave and the impending apocalypse or just goofing. I hoped to get a better sense of McKenna’s true beliefs in May 1999, when he left his home in Hawaii and traveled to New York City to give a talk.

As I took my tape recorder and pad out of my backpack on the night of McKenna's performance at an auditorium in mid-town Manhattan, a man in front of me, whom I'll call Glen, turned around to chat. He was bearded, about my age, a bit wild-eyed. Glen had never seen "Terence" before, but he owned all his books and two of his taped lectures. He was blown away by Terence's prediction that in December 2012 the shit was going to hit the fan. Glen hoped that tonight Terence would get a little more specific about what was going to happen.

Glen had dropped a lot of acid when he was young. In fact, he liked drugs so much that he became a pharmacologist. Working in a pharmacy got boring after a while, so he became a psychotherapist specializing in "energy healing." Glen hadn’t done acid in a long time. He really wanted to do some again, but he didn’t know where to get it. Glen looked at me expectantly. When I shrugged, he glumly turned his back on me.


Applause erupted as McKenna strode onto the stage and sat in an armchair. Tall, bearded, owlish, with curly brown hair going grey, he looked like a gaunt, Gaelic Allen Ginsberg. His voice had a wheedling, nasal edge. His speech was much like his prose, a bouillabaisse of scientific and high-tech visions, primordial lore, environmental fear-mongering, anarchic political commentary and psychedelic metaphysics.

He kept wandering down apparent digressions that somehow circled back to his main theme: The purpose of existence is novelty-generation, and our culture is generating novelty at an accelerating rate. The internet, nanotechnology, pharmacology, superstring theory, genetic engineering, artificial intelligence are shattering our old paradigms and bearing us toward some fantastic, psychedelic future. And according to his calculations, on December 21, 2012, something big would happen, something apocalyptic, that would bring about "the end of life as we know it."



One possible catalyst, McKenna said, was artificial intelligence, which would soon produce machines smarter than any human. (Yes, McKenna was an early proponent of the Singularity.) "If the hyper-intelligent AI is not filled with bodhisattva compassion," McKenna warned, "then our ass is probably grass." McKenna presented all of this with a mischievous grin, daring us to take him seriously.

I was still unsure what to make of McKenna when I met him the next day for lunch atop the Millennium Hotel, a gleaming ebony monolith in downtown Manhattan just across from the World Trade Center (which was still standing). We sat in a booth beside a window, McKenna with his back to the glass, beyond which skyscrapers loomed. McKenna looked even more owlish up close than he had on stage. He wore a black t-shirt emblazoned with a bronze dancing figure. The gap between his front teeth enhanced his air of raffishness.

When I said that his talk the previous evening had been quite entertaining, McKenna rolled my descriptor slowly around in his mouth—"En-ter-tain-ing"—as if he wasn’t sure he liked its taste. I added that I thought he had exaggerated the extent to which old scientific paradigms were crumbling and yielding to a radical new "psychedelic" vision of reality. I admitted that I had once been a staff writer for Scientific American, and I still shared that magazine's rather skeptical, conservative perspective.

Scientific American, McKenna assured me, was one of his favorite magazines. It was "incredibly psychedelic," and a major source of his inspiration. Anyone who reads Scientific American, McKenna continued, can see that science is in the throes of "an enormous crisis, or maybe not crisis but turmoil, based on the breakdown of paradigms." Just look at superstrings, parallel universes, hyper-dimensions, time travel and other bizarre notions emerging from physics, and technological advances like artificial intelligence. "Nobody knows what mind is," McKenna said, when it is "operating at multi-gigahertz speeds in virtual realities unconstrained by gravity and economy of any sort."


I told McKenna that many of the supposed revolutionary advances he had read about in Scientific American and elsewhere were grossly over-sold. Artificial intelligence, far from being on the verge of creating "hyper-intelligent" machines, is a joke, a failure, with a string of failed prophecies behind it. As for superstrings, they are so small that it would take a particle accelerator the size of the Milky Way to detect them; that is why many physicists believe that superstring theory and its variants will turn out to be a theoretical dead end.

McKenna shrugged. Whether or not superstring theory pans out, no one can deny that physics is "a field in rich ferment, in need of broad new paradigmatic visions. I mean, sort of where physics was circa 1898, when no one could make sense of Planck's black-body studies and Einstein was chasing girls around bars in Basel, and it just hadn't quite frozen out to be what it was." What was most impressive about McKenna’s riffs was their apparent effortlessness.

When I asked McKenna if he seriously believed that psilocybin mushrooms represent messages from an alien intelligence, he told me that his proposal was not as whimsical as it sounded. Mushroom spores, he said, can survive the cold of outer space; in fact, mushroom cultivators here on earth store the spores in liquid nitrogen. "So if somebody were designing a bio-informational package, a spore is how you would go. Millions of them pushed around by light pressure and gravitational dynamics would percolate throughout the galaxy."

Psilocybin's unusual chemical structure suggested an unusual origin, McKenna added. It is "the only four-phosphorelated indol in all of nature," which indicated "that maybe it came from outside the terrestrial ecosystem." The personality of the mushroom, as revealed by the experiences it triggers in humans, also had an alien, science-fiction quality. "It presents itself as this particular slice of alien, aesthetic motif from Hollywood—the shiny metallic surfaces, the mercuroid forms, the piercing, instantaneous biointelligence." McKenna was enjoying himself; he uttered "aesthetic motif" and "mercuroid forms" with a kind of tactile relish.

I said my impression was that he was often... kidding. McKenna guffawed. "I'm Irish! What's your excuse!" He added soberly, "I'm cynical, and a fair cynic must also be cynical about himself." He called himself a "visionary fool," who "propounds this thing which is a trillion to one shot" (the timewave theory) and then "gets to live out the inevitably humorous implications of that."

He recognized that some people think he is "softheaded" because of his rants about "self-transforming machine elves from hyperspace and all that." Actually, he had a "keen nose for other people's bullshit." He despised New Agers who gave credibility to alien-abduction rumors or claimed to be channeling the spirits of long-dead sages. "These things are like intellectual viruses loose in the theater of discourse," he said scornfully. "And you can't really argue with these people, because they don't understand the rules of argument." McKenna added that "the howling tide of unreason beats against pure fact with incredible fury."

When I told him that his writing sometimes reminded me of one of my favorite authors, the Argentinian fabulist Jorge Luis Borges, McKenna was delighted. He was a Borges fan, too. McKenna shared Borges's intuition that "scattered through the ordinary world there are books and artifacts and perhaps people who are like doorways into impossible realms, of impossible and contradictory truth. DMT being the chief example of this." DMT transports us not to heaven or hell but to a "parallel dimension that is somehow intimately tangled up with our own language processes and how we make reality." Modern science, McKenna remarked, "operates on the assumption that there are no miracles at the macro-physical level. Well, I would put it to you, DMT is a miracle at the macro-physical level. And the smarter you are, the more impressive it is."

I said that his riffs on extraterrestrial psilocybin spores and the coming apocalypse struck me as intellectual performance art, not to be taken too seriously, but I suspected that many of his fans took his ideas literally. I told him about my conversation the previous evening with Glen, the man who sat in front of me at McKenna's lecture. McKenna nodded ruefully. "My function is largely pedagogical," he said, "trying to teach people, first of all, that the world is a weird, weird place. And then, so what do you do about it? Do you become a Scientologist? Do you return to your Irish Catholic roots? What is the response," he said, "to the discovery that the world really is totally weird?"

When I told McKenna that I wasn’t sure exactly how his timewave theory worked, he launched into a vigorous explication of it. The essence of the theory is that existence emerges from the clash of two forces: not good and evil but habit and novelty. Habit is entropic, repetitious, conservative; novelty is creative, disjunctive, progressive. "In all processes at any scale, you can see these two forces grinding against each other. You can also see that novelty is winning."

As novelty increases, so does complexity. From the big bang on, McKenna elaborated, "the universe has been complexifying, and each level of complexity achieved becomes the platform for a further ascent into complexity. So fusion in early stars creates heavy elements and carbon. That becomes the basis for molecular chemistry. That becomes the basis for photobionic life. That becomes the basis for eukaryotic stuff. That becomes the basis for multicellular organisms, that for higher animals, that for culture, that for machine symbiosis, and on and on."

Modern science often depicts humanity as an accident, a bit player in the universe, but the timewave theory puts us at center stage in the cosmic drama, according to McKenna. If he had to define God, he would define it as this novelty-generating process. This definition could serve as the basis for a new moral order. "Anything which destroyed novelty would be bad, and anything which helped build it up and advance it would be good."

What about Nazi Germany? I asked. Wasn’t that novel? Or the hydrogen bomb? Or AIDS? McKenna acknowledged that novelty may be accompanied by increased suffering and death, but in general progress of some kind emerges out of these catastrophes. In the case of Nazi Germany, “the twentieth century had to deal with the issue of fascism. It couldn't close its eyes and waltz past that. And it did! So in that sense Nazi Germany, with its science-fiction production values and its silly rhetoric, served a useful purpose.” McKenna, deep down, was apparently an optimist.

As early as the 1970s, McKenna sought to make his drug-inspired insight precise and quantitative. He discovered that fractals, mathematical objects whose patterns repeat themselves at different scales, provide an excellent model of the entropy-novelty dialectic. "The fall of the dynasty, the collapse of the love affair, the ruin of the corporation, and the death of the amoeba are all somehow dramas with the same energy points and flows imbedded in them."

So what did McKenna really think would happen on December 21, 2012? "If you really understand what I'm saying," he replied, "you would understand it can't be said. It's a prediction of an unpredictable event." The event will be "some enormously reality-rearranging thing." Scientists will invent a truly intelligent computer, or a time-travel machine. Perhaps we will be visited by an alien spaceship, or an asteroid. "I don't know if it's built into the laws of spacetime, or it's generated out of human inventiveness, or whether it's a mile and a half wide and arrives unexpectedly in the center of North America."

But did he really think the apocalypse would arrive on December 21, 2012? "Well..." McKenna hesitated. "No." He had merely created one mathematical model of the flow and ebb of novelty in history. "It's a weak case, because history is not a mathematically defined entity," he said. His model was "just a kind of fantasizing within a certain kind of vocabulary." McKenna still believed in the legitimacy of his project, even if his particular model turned out to be a failure. "I'm trying to redeem history, make it make sense, show that it obeys laws," he said.

But he couldn’t stop there. His eyes glittering, he divulged a "huge, quote unquote, coincidence" involving his prophecy. After he made his prediction that the apocalypse would occur on December 21, 2012, he learned that thousands of years ago Mayan astronomers had predicted the world would end on the very same day. "And now there has been new scholarship that they were tracking the galactic center and its precessional path through the ecliptic plane. What does all this mean?" McKenna leaned toward me, his eyes slitted and his teeth bared. "It means we are trapped in software written by the ghost of Jorge Luis Borges!" He threw his head back and cackled. "Tell that to the National Academy of Sciences!"

Two weeks after I met him in New York, just after he returned to his home in Hawaii, McKenna collapsed in the throes of a seizure. Tests revealed an enormous, malignant tumor deep inside his brain. McKenna's choices were grim. The physician recommended gamma-ray surgery, in which converging beams of radiation bombard the tumor. This treatment might give McKenna another six months to a year, but it could also cause dementia and other side effects. Untreated, McKenna would probably die within a month. McKenna chose the radiation surgery. He made it past the great millennial cusp, but he went downhill rapidly after that. He died on April 3, 2000, less than eleven months after I met him. He was 53.

During my lunch with McKenna at the Millennium Hotel, I had asked him if all his psychedelic excursions had mitigated his fear of death. His reply revealed how hard-headed he was, beneath all the phantasmagoric blarney. "I wouldn't say I have no fear of death," he said. "I am interested. Ultimately, my assumption is that, if I have the opportunity, I would embrace it if I saw it coming. But I'm scientific in my approach to my own knowledge of death. In other words, DMT may show you what the dying brain is like… But dying is not death. Near-death experiences are not death experiences."

Just before his death, another psychedelic advocate told Wired that McKenna’s outrageousness, like that of Timothy Leary, may have harmed the psychedelic cause: "Some people would certainly argue that it doesn’t help to have the most famous second-generation psychedelicist be another man in a purple sparkly suit." McKenna’s attempts to serve as a serious advocate for psychedelics were no doubt undermined by his irony and wit, his penchant for Borgesian fantasy, but those were precisely the qualities that I found so appealing in him. To McKenna, wonder was the essence of gnosis. As he told me during our interview, all his confabulations were intended to make us see that the world is "a weird, weird place."

To shake us out of our perceptual torpor, McKenna played the holy fool, the crazy wisdom sage. He pushed our faces in the most exotic, lurid inventions of modern science and technology, including superstring theory, time travel, virtual reality and artificial intelligence. He even stooped to speculating about extraterrestrials and to forecasting the end of life as we know it. What elevated him above most other prophets was that he delivered his prophecies with a wink, an implicit acknowledgement that ultimately reality is stranger than we can say or even imagine.

Image courtesy Wikimedia Commons.

Addendum: Fans of McKenna will want to check out this wonderful series of videos, the Terence McKenna Omnibus, including one in which McKenna says: "Shamanism is just show business and philosophy is just a branch of that vaudevillian impulse." http://www.cognitionfactor.net/cinema/screen-7.aspx

John Horgan directs the Center for Science Writings at the Stevens Institute of Technology. His books include The End of Science, The End of War and Mind-Body Problems, available for free at mindbodyproblems.com.

Friday, February 28, 2020


Brook Mitchell/Stringer/Getty Images
Wildfires near the suburbs of Canberra, Australia, January 2020

This year began with huge bushfires in southeastern Australia that drove one community after another into temporary exile, killed an estimated billion animals, and turned Canberra’s air into the dirtiest on the planet. The temperatures across the continent broke records—one day, the average high was above 107 degrees Fahrenheit, and the humidity so low that forests simply exploded into flames. The photos of the disaster were like something out of Hieronymus Bosch, with crowds gathered on beaches under blood-red skies, wading into the water as their only refuge from the flames licking nearby. But such scenes are only a chaotic reminder of what is now happening every hour of every day. This year wouldn’t have begun in such a conflagration if 2019 hadn’t been an extremely hot year on our planet—the second-hottest on record, and the hottest without a big El Niño event to help boost temperatures. And we can expect those numbers to be eclipsed as the decade goes on. Indeed, in mid-February the temperature at the Argentine research station on the Antarctic Peninsula hit 65 degrees Fahrenheit, crushing the old record for the entire continent.

It is far too late to stop global warming, but these next ten years seem as if they may be our last chance to limit the chaos. If there’s good news, it’s that 2019 was also a hot year politically, with the largest mass demonstrations about climate change taking place around the world.

We learned a great deal about the current state of the climate system in December, thanks to the annual confluence of the two most important events in the climate calendar: the UN Conference of the Parties to the Framework Convention on Climate Change, which met for the twenty-fifth time, this year in Madrid (it ended in a dispiriting semi-collapse), and the American Geophysical Union conference, which convened in San Francisco to listen to the newest data from researchers around the world. That latest news should help ground us as we enter this next, critical phase of the crisis.

The first piece of information emerged from a backward look at the accuracy of the models that scientists have been using to predict the warming of the earth. I wrote the Review’s first article about climate change in 1988, some months after NASA scientist James Hansen testified before Congress that what we then called the “greenhouse effect” was both real and underway. Even then, the basic mechanics of the problem were indisputable: burn coal and oil and gas and you emit carbon dioxide, whose molecular structure traps heat in the atmosphere.

Human activity was also spewing other gases with the same effect (methane, most importantly); it seemed clear the temperature would go up. But how much and how fast this would occur was a bewildering problem, involving calculations of myriad interactions across land and sea; we came to fear climate change in the 1980s largely because we finally had the computing power to model it. Critics—many of them mobilized by the fossil fuel industry—attacked those models as crude approximations of nature, and insisted they’d missed some negative feedback loop (the effect of clouds was a common candidate) that would surely moderate the warming.

These climate models got their first real chance to shine in 1991, when Mount Pinatubo erupted in the Philippines, injecting known amounts of various chemicals into the atmosphere, and the models passed with flying colors, accurately predicting the short-term cooling those chemicals produced. But the critique never completely died away, and remains a staple of the shrinking band of climate deniers. In December Zeke Hausfather, a UC Berkeley climate researcher, published a paper showing that the models that guided the early years of the climate debate were surprisingly accurate. “The warming we have experienced is pretty much exactly what climate models predicted it would be as much as 30 years ago,” he said. “This really gives us more confidence that today’s models are getting things largely right as well.”1

We now know that government and university labs were not the only ones predicting the climatic future: over the last five years, great investigative reporting by, among others, the Pulitzer-winning website InsideClimate News unearthed the large-scale investigations carried out in the 1980s by oil companies. Exxon, for instance, got the problem right: one of the graphs their researchers produced predicted with uncanny accuracy what the temperature and carbon dioxide concentration would be in 2019. That this knowledge did not stop the industry from its all-out decades-long war to prevent change is a fact to which we will return.

The rise in temperature should convince any fair-minded critic of the peril we face, and it is worth noting that in December one longtime skeptic, the libertarian writer Ronald Bailey, published a sort of mea culpa in Reason magazine. In 1992, at the first Earth Summit in Rio, he’d mourned that the United States government was “officially buying into the notion that ‘global warming’ is a serious environmental problem,” even as “more and more scientific evidence accumulates showing that the threat of global warming is overblown.” Over the years, Bailey had promoted many possible challenges to scientific orthodoxy—for example, the claim of MIT scientist Richard Lindzen that, as mentioned, clouds would prevent any dangerous rise in temperature—but, to his credit, in his new article he writes:

I have unhappily concluded, based on the balance of the evidence, that climate change is proceeding faster and is worse than I had earlier judged it to be…. Most of the evidence points toward a significantly warmer world by the end of the century.2

If scientists correctly judged the magnitude of the warming—about one degree Celsius, globally averaged, thus far—they were less perceptive about the magnitude of the impact. Given that this infusion of greenhouse gases into the atmosphere is a large-scale experiment never carried out before during human history, or indeed primate evolution, it’s not really fair to complain, but many scientists, conservative by nature, did underestimate the rate and severity of the consequences that would come with the early stages of warming. As a result, the motto for those studying the real-world effects of the heating is probably “Faster Than Expected.”

The warmth we’ve added to the atmosphere—the heat equivalent, each day, of 400,000 Hiroshima-sized bombs—is already producing truly dire effects, decades or even centuries ahead of schedule. We’ve lost more than half the summer sea ice in the Arctic; coral reefs have begun to collapse, convincing researchers that we’re likely to lose virtually all of them by mid-century; sea-level rise is accelerating; and the planet’s hydrologic cycle—the way water moves around the planet—has been seriously disrupted. Warmer air increases evaporation, thus drought in arid areas and as a side effect the fires raging in places like California and Australia. The air also holds more water vapor, which tends to drop back to earth in wet places, increasing the risk of flooding: America has recently experienced the rainiest twelve months in its recorded history.

In late November a European-led team analyzed what they described as nine major tipping points—involving the Greenland and Antarctic ice sheets, the boreal forests and permafrost layer of the north, and the Amazon rainforest and corals of the tropical latitudes. What they found was that the risk of “abrupt and irreversible changes” was much higher than previous researchers had believed, and that exceeding critical points in one system increases the risk of speeding past others—for instance, melting of Arctic sea ice increases the chance of seriously slowing the ocean currents that transport heat north from the equator, which in turn disrupt monsoons. “What we’re talking about is a point of no return,” Will Steffen, one of the researchers, told reporters. Earth won’t be the same old world “with just a bit more heat or a bit more rainfall. It’s a cascading process that gets out of control.”

That all of this has happened with one degree of warming makes clear that the targets set in the Paris climate accords—to try to hold temperature increases to 1.5 degrees Celsius, and no more than 2 degrees—are not “safe” in any usual sense of the word. Already, according to an Oxfam report released in December,3 people are three times more likely to be displaced from their homes by cyclones, floods, or fires than by wars. Most of those people, of course, did nothing to cause the crisis from which they suffer; the same is true for those feeling the health effects of climate change, which a December report from the World Health Organization said was “potentially the greatest health threat of the 21st century.”4

What’s worse, we’re nowhere close to meeting even those modest goals we set in Paris. Indeed, the most depressing news from December is that the world’s emissions of greenhouse gases rose yet again. Coal use has declined dramatically, especially in the developed world—the US has closed hundreds of coal-burning plants since 2010 and halved the amount of power generated by coal. But it’s mostly been replaced by natural gas, which produces not only carbon dioxide but also methane, so our emissions are barely budging; in Asia, continued fast-paced economic growth is outstripping even the accelerating deployment of renewable energy.

The United Nations Environment Programme released its latest annual report on the so-called emissions gap in December, and it was remarkably dire. To meet the Paris goal of limiting temperature increases to 1.5 degrees Celsius, the world would need to cut its emissions by 7.6 percent annually for the next decade. Stop and read that number again—it’s almost incomprehensibly large. No individual country, not to mention the planet, has ever cut emissions at that rate for a single year, much less a continuous decade. And yet that’s the inexorable mathematics of climate change. Had we started cutting when scientists set off the alarm, in the mid-1990s, the necessary cuts would have been a percent or two each year. A modest tax on carbon might well have sufficed to achieve that kind of reduction. But—thanks in no small part to the obstruction of the fossil fuel industry, which, as we have seen above, knew exactly what havoc it was courting—we didn’t start correcting the course of the supertanker that is our global economy. Instead, we went dead ahead: humans have released more carbon dioxide since Hansen’s congressional testimony than in all of history before.
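
The scale of that 7.6 percent figure is easier to grasp once you compound it out. Here is a minimal arithmetic sketch; the 2 percent rate and the mid-1990s start date are illustrative assumptions drawn from the "percent or two each year" point in the paragraph above, not numbers taken from the UNEP report itself.

```python
# Back-of-the-envelope sketch of the compounding arithmetic behind the
# 7.6 percent figure quoted above. The 2 percent rate and the 1995 start
# year are illustrative assumptions, not figures from the UNEP report.

def remaining_fraction(annual_cut: float, years: int) -> float:
    """Fraction of today's emissions left after cutting annual_cut each year."""
    return (1 - annual_cut) ** years

# Cutting 7.6 percent every year from 2020 through 2030:
late_start = remaining_fraction(0.076, 10)
print(f"Start in 2020: roughly {1 - late_start:.0%} total cut by 2030")   # ~55%

# The gentler path the article describes: cuts of about 2 percent a year
# begun in the mid-1990s and continued to 2030:
early_start = remaining_fraction(0.02, 35)
print(f"Start in 1995: roughly {1 - early_start:.0%} total cut by 2030")  # ~51%
```

Either path ends in roughly the same place; the difference is the brutal steepness of the late start.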

That we have any chance at all of achieving any of these targets rests on the progress made by engineers in recent years—they’ve cut the price of renewable energy so decisively that the basic course is pretty clear. Essentially, we need to electrify everything we do, and produce that electricity from the sun and wind, which are now the cheapest ways to produce power around the world.5 Happily, storage batteries for the power thus generated are also dropping quickly in cost, and electric cars grow both more useful and more popular by the month—Tesla is the brand name we know, but the Chinese are already rolling out electric cars in large numbers, and, better yet, electric buses, which could lead to dramatically cleaner and quieter cities. In his State of the City address in early February, New York mayor Bill de Blasio announced that every vehicle in the city fleet would be electrified in the years ahead. Despite such dramatic announcements, we’re adopting none of these technologies fast enough. In seventy-five years the world will probably run on sun and wind because they are so cheap, but if we wait for economics alone to do the job, it will be a broken world.

Radically speeding up that transition is the goal of the various Green New Deal policies that have emerged over the last year, beginning in the US, where the youthful Sunrise Movement recruited Representative Alexandria Ocasio-Cortez as an early supporter and used a sit-in at House Speaker Nancy Pelosi’s office to draw attention to the legislation. Negotiations have been underway ever since about the exact shape of such a program, but its outlines are clear: extensive support for renewables, with an aim of making America’s electricity supply carbon-neutral by 2030, and a program to make homes and buildings far more efficient, coupled with large-scale social plans like universal health care and free college tuition. At first glance, combining all these goals may seem to make the task harder, but advocates like Naomi Klein have argued persuasively that the opposite is true.

The wide scope of the proposed Green New Deal may make it sound utopian—but it may be better to think of it as anti-dystopian, an alternative to the libertarian hyper-individualism that has left us with economically insecure communities whose divisions will be easy for the powerful to exploit on a degrading planet, where the UN expects as many as a billion climate refugees by 2050. A million Syrian refugees to Europe (driven in part by the deep drought that helped spark the civil war) and a million Central American refugees to our southern border (driven in part by relentless drought in Honduras and Guatemala) have unhinged the politics of both continents; imagine multiplying that by five hundred.

On the campaign trail, the Democratic candidates have mostly embraced the Green New Deal. Its sweeping economic and social ambition fits easily with the other campaign promises of Senators Sanders and Warren, but most of the rest of the field has also backed its promises of dramatic reductions in carbon emissions. For instance, Joe Biden’s climate plan says that “the Green New Deal is a crucial framework for meeting the climate challenges we face. It powerfully captures two basic truths”—first, that “the United States urgently needs to embrace greater ambition…to meet the scope of this challenge,” and second, that “our environment and our economy are completely and totally connected.” Biden has waffled and wavered on the practicalities, at times endorsing a continued reliance on natural gas, but it’s pretty clear that, whoever the eventual nominee, the party will be at least somewhat more progressive on climate issues than in the past. And in one way the nominee will be more progressive even than the Green New Deal legislation. Sanders, Warren, Biden, Pete Buttigieg, Tom Steyer, Michael Bloomberg, and others have all called for an end to oil, gas, and coal production on public lands—something a new president could do by executive action. Some have gone farther, calling for an end to fracking across the nation.

These so-called Keep It in the Ground policies are less popular with labor unions that want to keep building pipelines, and therefore those writing the Green New Deal legislation have not yet included them in their bill, wary of losing congressional support. But the mathematical case for such action was greatly strengthened in November with the publication of the first production gap report, intended as a counterpart to the emissions gap research I described above. For almost thirty years, global warming efforts have focused on controlling and reducing the use of fossil fuel—which is hard, because there are billions of users. But in recent years activists and academics have looked harder at trying to regulate the production of coal, gas, and oil in the first place, reasoning that if it stayed beneath the soil, it would ipso facto not be warming the planet.

The first edition of this new report, issued by a consortium of researchers led by the Stockholm Environment Institute and the UN Environment Programme, makes for startling reading: between now and 2030 the world’s nations plan on producing 120 percent more coal, gas, and oil than would be consistent with limiting warming to 1.5 degrees Celsius and 50 percent more than would let us meet even the 2 degree goal.6 That’s more coal and oil and gas than the world’s nations have told the UN they plan to burn: “As a consequence, the production gap is wider than the emissions gap.” “Indeed,” the authors write, “though many governments plan to decrease their emissions, they are signalling the opposite when it comes to fossil fuel production, with plans and projections for expansion.” Another way to look at it, as the Financial Times calculated in February, is that to meet the 1.5 degree target, the fossil fuel industry would have to leave 84 percent of its known reserves in the ground, writing off their value.

You would think that, compared with the billions of users, it would be easier to take on the handful of petro-states and oil companies that produce fossil fuel; after all, more than half of global emissions since 1988 “can be traced to just 25 corporate and state-owned entities,” according to the Climate Accountability Institute. By definition, those are among the most powerful players in our economic and political systems, and so far they’ve been able to escape any effective regulation. At the very top of the list is the United States, which, according to a December report from the Global Gas and Oil Network, is on track to produce four-fifths of the new supply of oil and gas over the next half decade.

Partly, this is the result of President Trump’s fanatical effort to eliminate any obstacles to new oil and gas production, including recently opening the Arctic National Wildlife Refuge in Alaska—the nation’s largest wildlife preserve—to drilling. But there’s a fairly long lag time in building the necessary infrastructure—the fracking boom really had its roots in the Obama administration, as the former president boasted in a 2018 speech at Rice University in Texas. “I know we’re in oil country,” he told the cheering crowd. “You wouldn’t always know it, but [production] went up every year I was president. That whole, suddenly, America’s, like, the biggest oil producer and the biggest gas…that was me, people,” he said. “Just say thank you please.”

The one cheerful development of the past year has been the continuing rise of a global climate movement, exemplified by the young activists who brought seven million people into the streets for global climate strikes in September. (Greta Thunberg is the best known, and rightly celebrated for her poise, but fortunately there are thousands of Gretas across the planet offering provocative challenges to their local officials.) The question is where to aim all that activism. The natural impulse is to direct it at our political leaders, because in a rational world they would be the ones making decisions and shaping change. This is part of the answer—it’s crucial that this year’s election in the US has the climate crisis at its center, and thanks to the Green New Deal that’s a real possibility.

But political change is uncertain—despite the remarkable activism of Extinction Rebellion across the UK, December’s elections there seemed little affected by the issue—and even when it comes it is slow. A new president and a new Senate would still mean a Washington rusted by influence and inertia. And winning this battle one national capital at a time is a daunting challenge given the short time physics is allowing us.

A small but growing number of activists are also looking at a second set of targets—not Washington, but Wall Street. Over the past few years a mammoth divestment campaign has persuaded endowments and portfolios worth $12 trillion to sell their stocks in coal, oil, or gas companies, and now that effort is expanding to include the financial institutions (mostly banks, asset managers, and insurance companies) that provide the money that keeps those companies growing. A handful of American banks—Chase, Citi, Wells Fargo, and Bank of America—are the biggest culprits, and incredibly they have increased their lending to fossil fuel companies in the years since the Paris accords. Take Chase Bank, which is the champion in this respect: in the last three years it has provided $196 billion to the fossil fuel industry. If Exxon is a carbon heavy, in other words, Chase is too (and in many ways they’re joined at the hip; Standard Oil heir David Rockefeller led Chase to its current prominence, and former Exxon CEO Lee Raymond is its lead independent director).

This financing—which has included supporting the most extreme oil and gas projects, like the huge pipelines planned in Canada’s uniquely filthy tar sands complex—is perhaps the single least defensible part of the fossil fuel enterprise. You can almost understand the refusal of oil companies to shift their business plans: they really only know how to do one thing. But banks can lend their money in a thousand different directions; they don’t need to fund the apocalypse. Given the trouble banks have already caused, it’s no wonder that environmentalists have begun using the phrase “Make Them Pay”—or at the very least make them invest in the renewables and conservation measures desperately needed to get us on the right track. My colleague at the grassroots campaign 350.org Tamara Toles O’Laughlin has compared this kind of funding to nineteenth-century support by financial institutions of slavery—it’s not the same crime, of course, but “the same instinct to abuse and extract, deplete, discard, and disavow holds.” It’s no surprise that the same demand for reparations—compensation for all those whose lives and communities are being wrecked—is being raised.

There’s no question that taking on one of the biggest parts of the planet’s economy is a daunting task. It’s possible that the Chases of the world can go on lending money to their friends in the oil industry without suffering any consequences. On the other hand, in the same way that the electoral map favors Republicans, the money map favors those who care about the climate. Chase branches, for instance, are concentrated in those small pockets of blue around our big cities (I was arrested in a protest in one of them, in Washington, D.C., in early January). And perhaps these institutions are beginning to bend: in mid-January the world’s largest financial firm, BlackRock, announced that it was taking broad, if still tentative, steps to include climate change in its analyses of potential investments. “Awareness is rapidly changing, and I believe we are on the edge of a fundamental reshaping of finance,” its CEO, Larry Fink, wrote in a letter to CEOs of the world’s largest corporations. That’s perhaps the most encouraging news about climate change since the signing of the Paris climate accords, because if these pillars of global capital could somehow be persuaded to act, that action could conceivably be both swift and global.

Anything is worth a try at this point, because we’re very nearly out of time.


1. Zeke Hausfather et al., "Evaluating the Performance of Past Climate Model Projections," Geophysical Research Letters, December 4, 2019.

2. Ronald Bailey, "Climate Change: How Lucky Do You Feel?," Reason, January 2020.

3. Oxfam International, "Forced from Home: Climate-Fuelled Displacement," December 2, 2019.

4. World Health Organization, WHO Health and Climate Change Survey Report, 2019.

5. We’d probably be well advised to keep current nuclear power plants operating where it’s relatively safe to do so until they can be replaced with renewables instead of natural gas—though at the moment new nuclear power is ruinously expensive in most places, existing plants are an important part of the low-carbon power supply. A good summary of the problem came in 2018 from the Union of Concerned Scientists. We definitely need to avoid not only natural gas, which as I have explained previously in these pages is not the "bridge fuel" its proponents contended, but also the burning of trees to generate electricity—the latest science is showing this so-called biomass energy to be more of a problem than a solution, and that by contrast letting mature trees continue to grow allows them to soak up large amounts of carbon.

6. Stockholm Environment Institute et al., The Production Gap, 2019.