Abortion rights had a surprisingly hopeful day in the Supreme Court

Louisiana’s lawyer did such a bad job defending an anti-abortion law that she may have lost Chief Justice Roberts.


By Ian Millhiser Mar 4, 2020
Thousands of protesters gathered outside the Supreme Court on Wednesday morning as oral arguments were underway over a restrictive Louisiana abortion law.

Wednesday morning’s arguments in the biggest threat to abortion rights to reach the Supreme Court in nearly 30 years went so badly for Louisiana Solicitor General Elizabeth Murrill, who was defending Louisiana’s restrictive abortion law, that by the end even Chief Justice John Roberts appeared uncomfortable with her arguments.

Murrill spent 20 awkward minutes appearing to test whether it is possible to botch an argument badly enough to lose a case widely expected to go her way.

Given the Supreme Court’s conservative majority, Louisiana remains likely to prevail in June Medical Services LLC v. Russo. But Murrill’s performance was so weak, and the liberal justices exposed so many flaws in her argument, that it raised questions about whether Roberts might join his liberal colleagues to strike down Louisiana’s law.
Chief Justice Roberts could join his liberal colleagues after hearing arguments over Louisiana’s restrictive abortion law.

June Medical involves a Louisiana law that requires abortion doctors to have admitting privileges at a hospital that is within 30 miles of the clinic where the doctor provides abortion care. If that law sounds familiar, it should: Less than four years ago, in Whole Woman’s Health v. Hellerstedt (2016), the Supreme Court struck down a Texas law that is virtually identical to the one at issue in June Medical.

Indeed, the only real distinction between Whole Woman’s Health and June Medical is the makeup of the Supreme Court. Justice Anthony Kennedy, who retired in 2018, was an uneasy defender of the right to an abortion. Though he typically voted to uphold abortion restrictions, he refused to overrule Roe v. Wade (1973) outright. And he joined his liberal colleagues in Whole Woman’s Health.

Kennedy’s replacement, Justice Brett Kavanaugh, has historically been much more skeptical of abortion rights. And his questions at Wednesday’s oral argument left few doubts that he will vote to uphold Louisiana’s law.

Yet the case appeared to turn on Roberts, who joined the dissent in Whole Woman’s Health and who almost always votes to uphold abortion restrictions. Roberts repeatedly asked whether there is any difference between the burden imposed on people seeking abortions by the Texas law struck down in Whole Woman’s Health and the burden imposed by the nearly identical Louisiana law.

Neither Murrill nor US Principal Deputy Solicitor General Jeffrey Wall, who defended the law on behalf of the Trump administration, was able to give Roberts a straight answer.
Louisiana tried to restrict abortion through a sham health law

Abortion rights advocates refer to laws like the one at issue in June Medical as “targeted restrictions on abortion providers,” or “TRAP” laws. TRAP laws superficially appear to make abortions safer, but they do very little to advance patient health — while simultaneously making it much harder to operate an abortion clinic.

Louisiana claims that its admitting privileges law serves two interlocking purposes. It is a credentialing requirement, which supposedly helps screen out incompetent doctors who shouldn’t perform abortions. And it is supposed to ensure that abortion patients who experience complications can be admitted to a nearby hospital by the doctor who performed the abortion.


Yet, as the four liberal justices took turns pointing out, neither of these goals is meaningfully advanced by this particular law.

As Justice Elena Kagan noted, for example, hospitals often rely on criteria other than the quality of a physician when deciding whether to grant a particular doctor admitting privileges. Many hospitals grant such privileges only to doctors who admit a sufficient number of patients to that hospital each year. Others outright refuse to give privileges to abortion providers.

Additionally, the law’s requirement that abortion providers have admitting privileges near their clinic undercuts the state’s argument that the law serves to screen out bad doctors. There’s no reason to believe that hospitals near clinics do a better job of screening doctors than hospitals far from clinics.

Similarly, the fact that so many hospitals require doctors to admit a certain number of patients if they want to receive privileges is a special barrier to abortion providers, because abortions are a very safe medical procedure. As Kagan explained, the clinic that brought this lawsuit has performed around 70,000 abortions. It’s transferred only four of those patients to a hospital. These doctors would struggle to meet their quotas because their patients are so unlikely to require medical care.

And there’s another reason an abortion doctor is unlikely to need to admit one of their patients to a nearby hospital. As Justice Ruth Bader Ginsburg repeatedly pointed out, many abortion clinics perform medication abortions — meaning that the patient is given pills to take in the comfort of their own home. Even if a complication does arise, this patient is unlikely to seek care from a hospital near the clinic. They will seek care from a hospital close to their home, which is likely to be outside the 30-mile radius prescribed by the Louisiana law.

The Supreme Court held in Planned Parenthood v. Casey (1992) that “unnecessary health regulations that have the purpose or effect of presenting a substantial obstacle to a woman seeking an abortion impose an undue burden on the right,” and both Murrill and Wall struggled to explain why this particular law isn’t an “unnecessary health regulation.”

These weaknesses in Louisiana’s arguments seemed to trouble Chief Justice Roberts: Twice, Roberts inquired what the “benefits” of such a law were, and he did so in a way that directly contradicted the state’s defense of its law.

The core of the state’s argument, after all, is that its admitting privileges law benefits abortion patients by making abortions safer — and that it does so even though the Supreme Court held in Whole Woman’s Health that a very similar Texas law does not benefit such patients. But Roberts appeared to reject this argument rather explicitly.

“I understand the idea that the impact might be different in different places,” the chief justice told Murrill at one point, “but as far as the benefits of the law, that’s going to be the same in each state, isn’t it?”

Justices Alito and Kavanaugh offered competing arguments in defense of the anti-abortion law

Justice Samuel Alito, for his part, did his best to rescue Louisiana by arguing that the wrong party brought this particular lawsuit. In at least eight previous cases, the Supreme Court has allowed an abortion clinic or an abortion provider to bring a lawsuit challenging an abortion restriction. Alito argued that providers and clinics should be stripped of their ability to do so, meaning that future abortion suits would have to be brought by individual patients who are seeking an abortion.

But no other justice really picked up on this argument. Kavanaugh, meanwhile, suggested that maybe the Court’s decision in Whole Woman’s Health should be limited to just Texas. “What if all doctors in a state could easily get admitting privileges?” he asked at one point. Kavanaugh’s questions seemed to borrow from a federal appeals court opinion, which rather dubiously argued that Whole Woman’s Health should not apply in Louisiana because it is easier for Louisiana doctors to get admitting privileges than it is for Texas doctors to do so.

Roberts, for his part, initially seemed sympathetic to Kavanaugh’s argument. But his sympathy seemed to fade as the argument proceeded. Early in the argument, Roberts asked whether the question of whether a particular law violates Whole Woman’s Health is a “factual one that has to proceed state by state,” or whether all admitting privileges laws should be viewed with skepticism.

But the state was unable to demonstrate that Louisiana doctors will have an easy time getting admitting privileges. At one point, Justice Sonia Sotomayor rattled off the names of individual Louisiana doctors who had struggled to get such privileges. At another point, Justice Stephen Breyer asked Murrill to identify which of the several doctors involved in this case presented the best case that Louisiana abortion providers can, indeed, get admitting privileges.

Murrill named a doctor who, according to the state’s own expert witness at trial, was unlikely to be able to obtain admitting privileges — an error that both Breyer and Sotomayor swiftly pounced on.

Indeed, by the end of the argument, Roberts appeared to explicitly reject Kavanaugh’s attempt to save the Louisiana law — wondering why it would make sense to treat every state differently when the (virtually nonexistent) benefits of an admitting privileges law are the same in every state.
The future of Roe v. Wade remains grim

Wednesday’s oral argument was not a high point for the anti-abortion movement. Murrill appeared unprepared for predictable questions, made tone-deaf arguments, and even argued with Ginsburg about the history of the Supreme Court’s feminist jurisprudence.

When Sotomayor asked Murrill whether a particular abortion provider performs surgical abortions, for example, Murrill did not appear to know the answer, though she eventually replied that, “to the best of my knowledge,” the doctor performs surgeries. Murrill contradicted her state’s own expert witnesses and on many occasions appeared to be at odds with facts in the record.

At one point, Murrill got into an argument with Ginsburg, the most significant feminist lawyer in American history, about the facts of Craig v. Boren (1976), a seminal women’s rights decision that was heavily influenced by a brief filed by Ginsburg.

Murrill also came to the Court to defend a law that is rooted in an outdated strategy.

TRAP laws made sense in a world where Kennedy, who was unwilling to overrule Roe but willing to uphold many abortion restrictions, held the crucial swing vote on abortion. They were a test of whether Kennedy would uphold severe restrictions on abortion, so long as those restrictions were written to look like a health regulation. That strategy failed in Whole Woman’s Health.

But TRAP laws were always rooted in deception. They depend on judges who are willing to pretend that an anti-abortion law that does nothing to protect patient health is, in fact, a health regulation.

On Wednesday, Roberts appeared uncomfortable with this deception. But even if he does vote to strike down this Louisiana law, there is a much more honest way for anti-abortion advocates to approach the Supreme Court in the future: They can simply ask Roberts to overrule Roe v. Wade. When that day comes, Roberts remains likely to give these advocates what they want.
BOOKS



The Confounding Truth About Frederick Douglass
His champions now span the ideological spectrum, but left and right miss the tensions in his views.

DECEMBER 2018 ISSUE

Frederick Douglass: Prophet of Freedom 
BY DAVID W. BLIGHT, SIMON & SCHUSTER

It is difficult to imagine a more remarkable story of self-determination and advancement than the life of Frederick Douglass. Emblematic of the depths from which he rose is the pall of uncertainty that shrouded his origins. For a long time he believed that he had been born in 1817. Then, in 1877, during a visit to a former master in Maryland, Douglass was told that he had actually been born in 1818. Douglass could barely recall his mother, who had been consigned to different households from the one where her baby lived. And he never discovered the identity of his father, who was likely a white man. “Genealogical trees,” Douglass mordantly observed, “do not flourish among slaves.”

Douglass fled enslavement in 1838, and with the assistance of abolitionists, he cultivated his prodigious talents as an orator and a writer. He produced a score of extraordinary speeches. The widely anthologized “What to the Slave Is the Fourth of July?,” delivered in 1852, is the most damning critique of American hypocrisy ever uttered:
What, to the American slave, is your 4th of July? … a day that reveals to him … the gross injustice and cruelty to which he is the constant victim. To him, your celebration is a sham … your shouts of liberty and equality, hollow mockery … There is not a nation on the earth guilty of practices, more shocking and bloody, than are the people of these United States, at this very hour.

He wrote analyses of court opinions that deservedly appear in constitutional-law casebooks. He published many arresting columns in magazines and newspapers, including several that he started. He also wrote three exceptional memoirs, Narrative of the Life of Frederick Douglass, an American Slave (1845), My Bondage and My Freedom (1855), and Life and Times of Frederick Douglass (1881). The most celebrated black man of his era, Douglass became the most photographed American of any race in the 19th century. He was the first black person appointed to an office requiring senatorial confirmation; in 1877, President Rutherford B. Hayes nominated him to be the marshal of the District of Columbia.

Throughout his life, however, Douglass repeatedly fell victim to the brutalizations and insults commonly experienced by African Americans of his time. As a slave, he suffered at the hands of a vicious “nigger breaker” to whom he was rented. He fled to the “free” North, only to have his work as a maritime caulker thwarted by racist white competitors. As a traveling evangelist for abolitionism, he was repeatedly ejected from whites-only railroad cars, restaurants, and lodgings. When he died, an admiring obituary in The New York Times suggested that Douglass’s “white blood” accounted for his “superior intelligence.” After his death, his reputation declined precipitously alongside the general standing of African Americans in the age of Jim Crow.


 

Now everyone wants a piece of Frederick Douglass. When a statue memorializing him was unveiled at the United States Capitol in 2013, members of the party of Paul Ryan and Mitch McConnell sported buttons that read frederick douglass was a republican. More recently, the Republican National Committee issued a statement joining President Donald Trump “in honoring Douglass’ lifelong dedication to the principles that define [the Republican] Party and enrich our nation.” Across the ideological divide, former President Barack Obama has lauded Douglass, as has the leftist intellectual Cornel West. New books about Douglass have appeared with regularity of late, and are now joined by David W. Blight’s magnificently expansive and detailed Frederick Douglass: Prophet of Freedom.

A history professor at Yale who has long been a major contributor to scholarship on Douglass, slavery, and the Civil War, Blight portrays Douglass unequivocally as a hero while also revealing his weaknesses. Blight illuminates important facets of 19th-century political, social, and cultural life in America, including the often overlooked burdens borne by black women. At the same time, he speaks to urgent, contemporary concerns such as Black Lives Matter. Given the salience of charges of cultural misappropriation, griping about his achievement would be unsurprising: Blight is a white man who has written the leading biography of the most outstanding African American of the 19th century. His sensitive, careful, learned, creative, soulful exploration of Douglass’s grand life, however, transcends his own identity.

In the wake of Douglass’s death in 1895, it was African Americans who kept his memory alive. Booker T. Washington wrote a biography in 1906. The historian Benjamin Quarles wrote an excellent study in 1948. White historians on the left also played a key role in protecting Douglass from oblivion, none more usefully than Philip Foner, a blacklisted Marxist scholar (and uncle of the great historian Eric Foner), whose carefully edited collection of Douglass’s writings remains essential reading. But in “mainstream”—white, socially and politically conventional—circles, Douglass was widely overlooked. In 1962, the esteemed literary critic Edmund Wilson published Patriotic Gore: Studies in the Literature of the American Civil War, a sprawling (and lavishly praised) commentary on writings famous and obscure that omitted Douglass, and virtually all the other black literary figures of the period.

Keenly attuned to the politics of public memory, Blight shows that the current profusion of claims on Douglass’s legacy bears close scrutiny: Claimants have a way of overlooking features of his complex persona that would be embarrassing for them to acknowledge. Conservatives praise his individualism, which sometimes verged on social Darwinism. They also herald Douglass’s stress on black communal self-help, his antagonism toward labor unions, and his strident defense of men’s right to bear arms. They tiptoe past his revolutionary rage against the United States during his early years as an abolitionist. “I have no patriotism,” he thundered in 1847. “I cannot have any love for this country … or for its Constitution. I desire to see it overthrown as speedily as possible.” Radical as to ends, he was also radical as to means. He justified the violence deployed when a group of abolitionists tried to liberate a fugitive slave from a Boston jail and killed a deputy U.S. marshal in the process. Similarly, he assisted and praised John Brown, the insurrectionist executed for murder and treason in Virginia in 1859.

Many conservatives who claim posthumous alliance with Douglass would abandon him if they faced the prospect of being publicly associated with the central features of his ideology. After all, he championed the creation of a strong post–Civil War federal government that would extend civil and political rights to the formerly enslaved; protect those rights judicially and, if necessary, militarily; and undergird the former slaves’ new status with education, employment, land, and other resources, to be supplied by experimental government agencies. Douglass objected to what he considered an unseemly willingness to reconcile with former Confederates who failed to sincerely repudiate secession and slavery. He expressed disgust, for example, at the “bombastic laudation” accorded to Robert E. Lee upon the general’s death in 1870. Blight calls attention to a speech resonant with current controversies:
We are sometimes asked in the name of patriotism to forget the merits of [the Civil War], and to remember with equal admiration those who struck at the nation’s life, and those who struck to save it—those who fought for slavery and those who fought for liberty … May my tongue cleave to the roof of my mouth if I forget the difference between the parties to that … bloody conflict.

The progressive tradition of championing Douglass runs deeper, not surprisingly, than the conservative adoption of him. As an abolitionist, a militant antislavery Republican, and an advocate for women’s rights, he allied himself with three of the greatest dissident progressive formations in American history. Activists on the left should feel comfortable seeking to appropriate the luster of his authority for many of their projects—solicitude for refugees, the elevation of women, the advancement of unfairly marginalized racial minorities. No dictum has been more ardently repeated by progressive dissidents than his assertion that “if there is no struggle there is no progress … Power concedes nothing without a demand. It never did and it never will.”

But certain aspects of Douglass’s life would, if more widely known, cause problems for many of his contemporary admirers on the left, a point nicely made in Blight’s biography as well as in Waldo E. Martin Jr.’s The Mind of Frederick Douglass. A Republican intra-party contest in an 1888 congressional election in Virginia pitted John Mercer Langston, a progressive black jurist (who had served as the first dean of Howard University Law School), against R. W. Arnold, a white conservative sponsored by a white party boss (who was a former Confederate general). Douglass supported Arnold, and portrayed his decision as high-minded. “The question of color,” he said, “should be entirely subordinated to the greater questions of principles and party expediency.” In fact, what had mainly moved Douglass was personal animosity; he and Langston had long been bitter rivals. Langston was hardly a paragon, but neither was Douglass. Sometimes he could be a vain, selfish, opportunistic jerk, capable of subordinating political good to personal pique.

Douglass promised that he would never permit his desire for a government post to mute his anti-racism. He broke that promise. When Hayes nominated him to be D.C. marshal, the duties of the job were trimmed. Previously the marshal had introduced dignitaries on state occasions. Douglass was relieved of that responsibility. Racism was the obvious reason for the change, but Douglass disregarded the slight and raised no objection. Some observers derided him for his acquiescence. He seemed to think that the benefit to the public of seeing a black man occupy the post outweighed the benefit that might be derived from staging yet another protest. But especially as he aged, Douglass lapsed into the unattractive habit of conflating what would be good for him with what would be good for blacks, the nation, or humanity. In this instance, his detractors were correct: He had permitted himself to be gagged by the prospect of obtaining a sinecure.

Douglass was also something of an imperialist. He accepted diplomatic positions under Presidents Ulysses S. Grant, in 1871, and Benjamin Harrison, in 1889, that entailed assisting the United States in pressuring Santo Domingo (now the Dominican Republic) to allow itself to become annexed and Haiti to cede territory. Douglass acted with good intentions, aiming to stabilize and elevate these black Caribbean countries by tying them to the United States in its slavery-free, post–Civil War incarnation. He liked the idea of Santo Domingo becoming a new state, thereby adding to the political muscle in America of people of African descent, a prospect that frightened or disgusted some white supremacists. When Douglass felt that his solicitude for people of color in the Caribbean was being decisively subordinated to exploitative business and militaristic imperatives, he resigned. But here again, Douglass demonstrated (along with a sometimes condescending attitude toward his Caribbean hosts) a yearning for power, prestige, and recognition from high political authorities that confused and diluted his more characteristic ideological impulses.

Douglass is entitled to and typically receives an honored place in any pantheon dedicated to heroes of black liberation. He also poses problems, however, for devotees of certain brands of black solidarity. White abolitionists were key figures in his remarkable journey to national and international prominence. Without their assistance, he would not have become the symbol of oppressed blackness in the minds of antislavery whites, and without the prestige he received from his white following, he would not have become black America’s preeminent spokesman. That whites were so instrumental in furthering Douglass’s career bothers black nationalists who are haunted by the specter of white folks controlling or unduly influencing putative black leaders.

Douglass’s romantic life has stirred related unease, a subject Blight touches on delicately, exhibiting notable interest in and sympathy for his hero’s first wife. A freeborn black woman, Anna Murray helped her future husband escape enslavement and, after they married, raised five children with him and dutifully maintained households that offered respite between his frequent, exhausting bouts of travel. Their marriage seemed to nourish them both in certain respects, but was profoundly lacking in others. Anna never learned to read or write, which severely limited the range of experience that the two of them could share. Two years after Anna died in 1882, Douglass married Helen Pitts, a Mount Holyoke College–educated white former abolitionist 20 years his junior. They tried to keep the marriage quiet; even his children were unaware of it until the union was a done deal. But soon news of it emerged and controversy ensued. The marriage scandalized many whites, including Helen’s father, who rejected his daughter completely. But the marriage outraged many blacks as well. The journalist T. Thomas Fortune noted that “the colored ladies take [Douglass’s marriage] as a slight, if not an insult, to their race and their beauty.” Many black men were angered, too. As one put it, “We have no further use for [Douglass] as a leader. His picture hangs in our parlor, we will hang it in the stable.” For knowledgeable black nationalists, Douglass’s second marriage continues to vex his legacy. Some give him a pass for what they perceive as an instance of apostasy, while others remain unforgiving.

That Douglass is celebrated so widely is a tribute most of all to the caliber and courage of his work as an activist, a journalist, a memoirist, and an orator. It is a testament as well to those, like Blight, who have labored diligently to preserve the memory of his extraordinary accomplishments. Ironically, his popularity is also due to ignorance. Some who commend him would probably cease doing so if they knew more about him. Frederick Douglass was a whirlwind of eloquence, imagination, and desperate striving as he sought to expose injustice and remedy its harms. All who praise him should know that part of what made him so distinctive are the tensions—indeed the contradictions—that he embraced.

This article appears in the December 2018 print edition with the headline “The Confounding Truth About Frederick Douglass.”

BOOKS

Abraham Lincoln’s Radical Moderation
What the president understood that the zealous Republican reformers in Congress didn’t


ANDREW FERGUSON MARCH 2020 ISSUE

Congress at War: How Republican Reformers Fought the Civil War, Defied Lincoln, Ended Slavery, and Remade America 


BY FERGUS M. BORDEWICH, KNOPF

In the opening days of the Civil War, long before Saturday Night Live appropriated the idea, Louis Trezevant Wigfall earned the distinction in Washington, D.C., of being the Thing That Wouldn’t Leave. Elected to the United States Senate from Texas to fill a vacancy in 1859, Wigfall wasted no time in making himself obnoxious to his colleagues and the public alike. He was lavish in his disdain for the legislative body in which he had sought a seat. On the Senate floor, he said of the flag and, especially, the Union for which it stood, “It should be torn down and trampled upon.” As the southern states broke away, Wigfall gleefully announced, “The federal government is dead. The only question is whether we will give it a decent, peaceable, Protestant burial.”

By then Wigfall had been appointed to the Confederate congress, and the only question that occurred to many of his colleagues was why he was still bloviating from the floor of the U.S. Senate. Wigfall was worse than a mere gasbag. As Fergus M. Bordewich points out in his provocative new book, Congress at War, he “passed on military information to his southern friends, bought arms for the Confederacy, and swaggered around encouraging men to enlist in the secessionist forces.” At last, in March 1861, Wigfall quit the U.S. capital and showed up a few weeks later in South Carolina. Commandeering a skiff after Confederate batteries opened fire on Fort Sumter, in Charleston Harbor, he rowed out to present terms for the fort’s surrender. He had no authorization to do such a thing; he was simply following his passion to make trouble and get attention. He went down in history as a triple threat: a traitor, a blowhard, and a shameless buttinsky.


From April 1861: A Connecticut Yankee visits Charleston during the Fort Sumter standoff

Wigfall, one of the many strange and colorful characters tossed up by the politics of the Civil War, typifies the time in important respects. The years leading to the Civil War, and the war itself, were political intensifiers; radicalism was rewarded and could be made to pay. This was as true of the Republican reformers who are the heroes of Bordewich’s book as it is of secessionists like Wigfall.

Bordewich’s ungainly subtitle—How Republican Reformers Fought the Civil War, Defied Lincoln, Ended Slavery, and Remade America—telegraphs the grand claims he sets out to make for a group of congressmen who mostly styled themselves as Radical Republicans. In his account, it is they who pressed for aggressive military campaigns when the will for war flagged among Abraham Lincoln’s generals; who invented the financial mechanisms that funded the war; who pushed for punitive measures against the southern slaveholders; and who deserve credit (or blame!) for the birth of big government—achievements more commonly attributed to their far less radical president. A popular historian and journalist blessedly free of academic affiliations, Bordewich is a master of the character sketch, summarizing complicated figures in a few swift phrases. But Lincoln himself never comes alive in his pages. Indeed, he scarcely appears. He lurks just offstage, stepping forward now and then to try, briefly and usually without success, to stymie the righteous zeal that propels the Radicals. The last line of the book declares that “a whole generation of politically heroic Republicans … led Congress to victory in the Civil War.” It’s an odd formulation—you probably thought the North won the war.

From June 1865: The place of Abraham Lincoln in history

Bordewich has chosen to tell his sprawling story of legislative activism and ascendancy mainly through four members of Congress: Senators Benjamin Wade of Ohio and William Pitt Fessenden of Maine, and Representatives Thaddeus Stevens of Pennsylvania and Clement Vallandigham of Ohio. Vallandigham is the only Democrat, a leader of an anti-war faction whose preference for the Union was complicated by his pro-slavery sympathies. The rest are Republicans, and two of them, Stevens and Wade, proudly called themselves Radicals and behaved accordingly. Fessenden, at one time a conservative, grew more sympathetic to the Radicals’ aims as the war dragged on.

Congressional power fell in the lap of Republicans, thanks to the departure of Wigfall and his southern colleagues; their seizing of it seems, in retrospect, less a matter of superior gamesmanship than a law of political gravity. Calling for stronger prosecution of the war, immediate liberation of the enslaved, and confiscation of all property owned by the southern belligerents, Radicals quickly took control of the Republican caucus. Perhaps, Bordewich writes, the Radicals “have something to teach us about how our government can function at its best in challenging times, and how crisis may even make it stronger.” Lesson No. 1: Get most of your opponents to leave town before you try anything.

The Radicals were quick on their feet, exploiting national turmoil to break a legislative logjam. For decades Southern Democrats, their numbers swollen by the Constitution’s infamous three-fifths clause, had blocked a series of domestic programs proposed first by the Whigs and then by their Republican successors. Here was the chance to neutralize the Democratic aversion to centralized power and advance a collectivist vision of the commercial republic, laying the foundation, Bordewich writes, “for the strong activist central government that came fully into being in the twentieth century.”

The flurry of legislating was indeed “transformative,” as Bordewich says. He points in particular to four pieces of legislation as landmarks. The Homestead Act promised 160 acres of federal land to any citizen willing to live on it and farm it for five years. The Pacific Railway Act financed the transcontinental railroad and further opened up the western territories to white settlement. The third bill created the federal Department of Agriculture. And the Morrill Land Grant College Act would distribute federal land to states and localities for the purpose of building public institutions of higher learning dedicated to teaching agriculture and other practical arts—a miracle of democratization in the history of American education.

Yet in Bordewich’s telling, Lincoln had little to do with the ambitious measures, as if the bills were signed by autopen during coffee breaks. In fact, two of them were explicitly endorsed in the Republican platform that Lincoln ran on in 1860; he made a special plea for the Department of Agriculture in his first annual message to Congress. Bordewich also downplays the inevitable unintended consequences that accompany government expansion, even what seem to be the most benign reforms. The railway act, with its crony capitalism and funny-money bond issues, led straight to the Gilded Age and the creation of half a dozen robber-baron fortunes. Those “federal lands” that Washington gave away in the railway and homestead acts were not, except in the sneakiest sense, the federal government’s to give away; the land rush they touched off may have guaranteed the otherwise merely predictable genocide of the Native Americans already living there.

In the name of designating the Radicals as the forerunners of contemporary liberalism, Bordewich tries to draw a continuous line from the Civil War Congress to the New Deal and the Great Society. Yet the line has too many zigs and zags and ups and downs to clinch a causal connection. And in fact, many of the features of big government (19th-century style) fell away before long. Calvin Coolidge, for instance, 60 years after the Civil War and a few years before the New Deal, oversaw a federal government that was in most respects closer in size and scope to the antebellum government than to the modern state that was soon to emerge.

From July/August 2009: Christopher Hitchens on Lincoln’s emancipation

If Bordewich oversells the legacy of the Radicals in Congress, his more fundamental misapprehension lies elsewhere: His version of events shortchanges the greatness that humanists of all stripes—not only historians—have found in Lincoln. The problem is partly a failure to appreciate that the Radicals were kibitzers, as many legislators are. But misjudging Lincoln’s role as executive and his commitment to larger obligations is Bordewich’s more telling mistake. Lincoln the executive shouldered the responsibility to lead an entire government and, just as important, an unstable political coalition. From Radicals to reactionaries, Republicans were held together by a single strand: a hostility, varying in degree, to slavery. A collapse of this delicate alliance—brought on by a sudden call for immediate, nationwide abolition, for instance—would have doomed the war effort.

Lincoln was required to be more cautious than a Radical congressman had to be—more serious, in a word. Bordewich credits the Radicals with forcing Lincoln year by year to pursue the war more savagely, culminating in the elevation of General Ulysses S. Grant in 1864. But his evidence is thin that Lincoln paid anything more than lip service to the Radicals’ pleas for bloodshed. Bordewich is a particular admirer of the Joint Committee on the Conduct of the War—“this improvised vigilante committee,” Lincoln called it, “to watch my movements and keep me straight.” It was put together by Benjamin Wade and stocked with his fellow Radicals.

The committee researched and rushed into print massive reports after failed and sometimes catastrophic military engagements. The accounts totaled millions of words and accused officers and bureaucrats of horrifying lapses in military judgment and execution. Some of the accusations were implausible; others were all too real. Historically, the reports are invaluable. At the time, however, their primary effect was to second-guess generals disliked by the committee’s majority and to advance the generals with whom the majority was politically aligned. The committee’s “greatest purpose,” Lincoln told a friend, “seems to be to hamper my action and obstruct military operations.”

Shelby Foote, in his history of the Civil War, tells a story that illustrates why Lincoln and the Radicals were destined to be so often at odds. One evening Wade rushed to the White House to demand that Lincoln fire a weak-willed general who had failed to press the Union advantage. Lincoln asked Wade whom he should enlist to take the general’s place. “Anybody!” Wade cried. “Anybody will do for you,” Lincoln replied, “but I must have somebody.” Lincoln had to be serious.

As Bordewich concedes, the Radicals were as bloody-minded as the Wigfalls of the world. “Nothing but actual extermination or exile or starvation will ever induce [southern rebels] to surrender,” Stevens once said, in a speech Bordewich doesn’t quote. There can, of course, be no moral equivalence between Stevens and a slavery apologist like Wigfall. One of them was on the side of the angels, and it wasn’t Wigfall. But both were radicals.

Radicalism is more than a packet of views or policies. The contents of the packet will change with circumstances and over time. (One reason Bordewich admires the Radical Republicans is that their views on race are so close to current mainstream attitudes; today’s radicals, valorizing group identity above all else, will likely find both the views and the politicians who held them hopelessly retrograde.) Radicalism is a disposition. The same is true of its contrary, moderation. Lincoln’s moderation was so infuriating to the Radicals because it reflected a hierarchy of values different from theirs.

The ultimate concerns for Stevens and his fellows were the liberation of the enslaved, the punishment of the enslavers, and the reorganization of southern society. The ultimate concern for Lincoln was the survival of the Union, to which he had an almost mystical attachment. The old question—was the war fought to preserve the Union or to free the slaves?—underestimates how closely the two causes were entwined in his mind. Lincoln’s goal was to uphold the kind of government under which slavery could not in the end survive. This was a government, as Lincoln said, dedicated to a proposition.

From September 1999: Lincoln’s greatest speech

In a hectoring letter written at a low point in 1863, a Radical senator insisted that Lincoln “stand firm” against conservatives in his government. It was a common complaint of the Radical Republicans that Lincoln was hesitant, easily led, timid—weak. “I hope to ‘stand firm’ enough to not go backward,” Lincoln replied, “and yet not go forward fast enough to wreck the country’s cause.” Lincoln struck this balance with unmatched skill and sensitivity.

It was a feat of leadership peculiar to self-government, captured most famously by the only 19th-century American who could rival him as a prose artist and a statesman. Frederick Douglass was an enthusiastic admirer of Lincoln, once calling him, not long after the assassination, “emphatically the black man’s president: the first to show any respect for their rights as men.” Years later, Douglass’s enthusiasm had cooled—and ripened.

From December 1866: Frederick Douglass on Reconstruction

Lincoln “was preeminently the white man’s President, entirely devoted to the welfare of white men,” Douglass now said. “Viewed from the genuine abolition ground”—the ground, that is, from which Bordewich and many of today’s historians want to judge him, and the ground from which the Radicals did judge him—“Mr. Lincoln seemed tardy, cold, dull, and indifferent.” Douglass knew, though, that Lincoln never claimed to govern as an abolitionist, and Douglass knew why. “But measuring him by the sentiment of his country, a sentiment he was bound as a statesman to consult, he was swift, zealous, radical, and determined.”

The italics are mine, but the insight belongs to Douglass. Lincoln was radical without being a Radical—and never more radical than a leader can afford to be when he leads a government of, by, and for the people.


ANDREW FERGUSON is a staff writer at The Atlantic. He is the author of Fools’ Names, Fools’ Faces; Land of Lincoln; and Crazy U: One Dad’s Crash Course on Getting His Kid Into College.



The Questions Sex-Ed Students Always Ask

For 45 years, Deborah Roffman has let students’ curiosities guide her lessons on sexuality and relationships.



Editor's Note: In the next five years, most of America’s most experienced teachers will retire. The Baby Boomers are leaving behind a nation of more novice educators. In 1988, a teacher most commonly had 15 years of experience. Less than three decades later, that number had fallen to just three years leading a classroom. The Atlantic’s “On Teaching” project is crisscrossing the country to talk to veteran educators. This story is the eleventh in our series.

About 25 years ago, a public school in the Baltimore suburbs invited Deborah Roffman to teach a class on puberty to fifth graders. Roffman, who was known as the “Sex Lady” at the private Park School of Baltimore, where she had been teaching for two decades, was flattered. But she was troubled by the restrictions that the public school’s vice principal had given her: She couldn’t use the words fertilization, intercourse, or sex. And she couldn’t answer any student questions related to those subjects. That wasn’t going to work for the Sex Lady.

Eventually, Roffman reached a compromise with the public school: Students would get parental permission to attend her talk, and Roffman could answer any question they asked, even if it meant using the S-word.

Roffman’s title of human-sexuality educator has not changed since she arrived at the Park School in 1975, but the dimensions of her role there have steadily grown. So, too, has her outside work in consulting and teacher training: Over the years, she has advised at nearly 400 schools, most of them private.

Initially, Roffman taught elective classes in sexuality to the juniors and seniors at Park, but within two years, she had expanded to seventh and eighth graders. In the 1980s, she added fourth and fifth graders to her roster. She also meets annually with the parents of students as young as kindergartners, to coach them on how to talk with their children about sexuality, and she leads summer training for the Park’s elementary-school teachers on incorporating sexuality instruction into their classrooms. “There is this knowledge that we keep in a box about sexuality, waiting until kids are ‘old enough,’” Roffman told me. “My job is to change that.”



During her 45 years of teaching, Roffman has witnessed the evolution of the nation’s attitude toward sex education and, as her experience at the public school shows, how uneven that education can be.

Perhaps more than any other subject, sex education highlights the country’s fierce loyalty to local control of schools. Twenty-nine states require public schools to stress abstinence if they teach about sex, according to the latest count by the Guttmacher Institute, a think tank based in Washington, D.C., and New York that promotes reproductive rights. Some of the more outrageous abstinence lessons employ troubling metaphors, such as comparing sexually active, unmarried women to an old piece of tape: useless and unable to bond. Only 17 states require sex education to be medically accurate.

Most research has found that sex education for adolescents in the United States has declined in the past 20 years. Like art and music, the subject is typically not included on state standardized exams and, as the saying goes, “what gets tested gets taught.” In the case of sex education, waning fear about the spread of HIV and AIDS among heterosexual youths has contributed to the decline in instruction, says John Santelli, a professor at Columbia’s Mailman School of Public Health.

But some bright spots do exist, says Jennifer Driver, the vice president of policy and strategic partnerships at the Sexuality Information and Education Council of the United States. For example, in some parts of Mississippi and Texas, there has been a shift away from "abstinence only" to "abstinence plus" curricula, with the latter permitting at least some information about contraception.

Roffman remembers her own sex education while growing up in Baltimore as being limited to a short film in fifth grade about periods and puberty. She began working in sex ed in 1971—when access to birth control was rapidly expanding amid the sexual revolution—helping Planned Parenthood train health-care professionals who were setting up family-planning clinics in the region, and doing broader community outreach.

Four years later, she followed her Planned Parenthood supervisor to the progressive Park School, where students often address teachers by their first name and current tuition runs about $30,000 a year. When she arrived that spring, she heard that the senior-class adviser had recently rushed into the upper-school principal’s office, exclaiming that something had to be done before the seniors’ graduation, because “we forgot to talk to them about sex.”

Read: The case for comprehensive sex ed

During the next several years, Roffman not only made sure the school remembered to talk to students about sex but steadily built up the curriculum. At Park, students learn about standard fare like birth control and sexually transmitted diseases but also delve into issues such as the history of abortion rights, changing conceptions of gender roles, and how to build respectful, intimate relationships.

Students start by learning about the reproductive systems, the importance of open communication, and the fundamentals of puberty in their first classes with Roffman, in the fourth and fifth grades. In seventh grade, they take a deep-dive course on human sexuality, covering everything from pornography to the use of sex in advertising to gender identity and sexual orientation. They see her again for a shorter, related course in eighth grade. During the 2016 presidential campaign, Roffman’s seventh graders spent most of a semester researching the candidates’ differing views on sex, gender, and reproduction. “In the process of doing that, I got to teach about every topic I wanted to teach about,” she said.

In high school, students take a required sexuality-studies seminar. The specific content varies year to year, but it’s always based on what Roffman calls the “eight characteristics of a sexually healthy adult,” which include staying healthy, enjoying pleasure, and relating to others in caring, nonexploitative ways.

The through line of her approach, at any age, is letting students’ queries guide her instruction. So she asks her students to submit anonymous questions at the start of the semester, and makes sure that she answers them as the course progresses.

Regardless of whether they grew up in the ’80s or the aughts, kids of certain ages always ask versions of the same questions, Roffman has found. For instance, middle-school students, she said, want to know if their bodies and behaviors are “normal.” Many older students ask her at what age it’s normal to start masturbating.

High schoolers routinely ask about romantic communication, relationships, and the right time for intimacy: “Who makes the first move?” “How do you know if you or the other person is ready for the ‘next level’?” “How can you let someone down easy when you want to break up?”

But some contemporary questions, Roffman said, are very different from those she heard earlier in her career. Sometimes the questions change when the news does. (More than 30 years ago, Roffman started reading two newspapers a day to keep up with the rapid pace of news about HIV and AIDS; she’s maintained the habit since.)

She said she received a flood of questions about sexual harassment after the Senate confirmation hearings for Supreme Court Justice Clarence Thomas, in the early 1990s. The same decade ended with a spike in student interest in oral sex and behaviors that had previously been considered more taboo, such as anal sex.

Sometimes changing student questions signal broader cultural shifts, like the recent surge in student queries about gender identities. “There would have been questions 20 years ago about sexual orientation, but not about gender diversity,” Roffman said. But one recent eighth-grade cohort submitted questions like “How many genders are there?” “What does ‘gender roles’ mean?” “What is the plus sign for in LGBTQIA+?” and “Why is ‘gay’ called ‘gay’?” She finds a way to answer them all.

Read: What schools should teach kids about sex

Roffman’s students appreciate her blunt and holistic approach. As a sixth grader at a charter school several years ago, Maeve Thistel took a brief unit in sex education. The teacher seemed uncomfortable and nervous, she remembers. The condoms the teacher brought for a demonstration were expired, and split when she took them out of the package. Thistel came away from the class with the impression that sex was both “icky and disturbing.”

Thistel, now a college freshman, transferred to Park for high school, where she found that Roffman presented some of the same material quite differently: Her very first step in the lesson on condoms was to point out that all of them have an expiration date that should be noted and heeded.

Under Roffman’s guidance, sexuality at Park has come to be treated as something closer to social studies, science, or other core subjects. Sex ed is “just another part of the curriculum, not carved out as its own special thing,” says David Sachs, a 1988 graduate who studied with Roffman and whose son, Sebastian, is now in 11th grade at the school and has her as a teacher as well.

Like all Park students, Sebastian Sachs had to complete an eighth-grade project wherein he examined the root cause of a social-justice issue. His team picked sexual assault and, with Roffman as their adviser, focused on consent education and how to introduce it in the youngest grades. Sachs and his teammates created a curriculum for preschoolers that, among other things, encourages them to ask permission before hugging a classmate, borrowing a pencil, or swooping in for a high five.

In Roffman’s ideal world, the school would implement lessons like these, and other age-appropriate sex and relationship education, from the earliest grades. Several of her co-workers agree. “Fourth grade might be too late for us” to begin this kind of education, says Alejandro Hurtado, Park’s Spanish teacher for the lower grades. Last summer, Hurtado participated in a voluntary two-week workshop led by Roffman that aimed to create a sexuality-education curriculum for Park’s elementary-age kids. “It will be subtly woven in,” he says, noting that he plans to talk more explicitly about traditional gender roles and expectations in some Latino cultures as part of his own class.

In her teacher training, Roffman encourages colleagues to be scientifically accurate and use age-appropriate language when answering even the youngest children’s questions. Four-year-olds are beginning to understand place and geography, so they will frequently ask where they came from. “The proper answer is that there’s a place inside a female body called the uterus, and that’s where they grew,” Roffman said.

Sarah Shelton, a Park third-grade teacher who also participated in the summer workshop, says Roffman inspired her to not dodge students’ questions about bodies and sex. In the past she’s deflected sex-related inquiries, such as when a student asked about birth control last year.

“I told her, ‘Great question. Ask your parents,’” Shelton recalls. “If that were to occur again, I would say something like ‘When reproduction happens in the body, there is medication that you can take to stop it so you can have sexual intercourse without creating a baby.’”

Sarah Huss, the director of human development and parent education at the private Campbell Hall school in Los Angeles, says Roffman helped her rethink her school’s sexuality education. Huss reached out to Roffman after reading her book Talk to Me First: Everything You Need to Know to Become Your Kids’ “Go-To” Person About Sex. The ensuing dialogue prompted Campbell Hall to begin sexuality education in third grade and to significantly shore up its middle-school programming. Prior to meeting Roffman, “I had taught sex education as ‘Don’t get hurt, don’t get pregnant, don’t get a disease,’” Huss says. “That wasn’t a hopeful message for the kids.”

Huss admires her colleague’s patient tenacity. “She’s walking into schools where there is so much emotional baggage around a subject,” Huss says. “To suggest doing it differently, you have to confront years and years and years of thinking that talking with young kids about sex is dangerous.”

After decades of striving for change both within and beyond Park’s walls, Roffman is optimistic about the future of sexuality education at progressive private schools like Campbell Hall and Park. “I’ve always believed that independent schools have the responsibility to give back to the larger educational community,” she told me. “It’s up to us to demonstrate that, yes, this can be done well and successfully.”

By contrast, “I see very limited movement in the public sector,” she said. And in a country where only a minority of states require medically accurate sex-education classes, her dream of seamlessly integrating the subject from kindergarten up may be a long way off. But Roffman has lived through one sexual revolution, and she holds out hope for a second, in education.

This article is part of our project "On Teaching," which is supported by grants from the William and Flora Hewlett Foundation, the Spencer Foundation, the Bill & Melinda Gates Foundation, and the Panta Rhea Foundation.

SARAH CARR leads an investigative education reporting team at the Boston Globe and is the author of Hope Against Hope, about New Orleans schools.
The Problem With Telling Sick Workers to Stay Home

Even with the coronavirus spreading, lax labor laws and little sick leave mean that many people can’t afford to skip work.
As the coronavirus that has sickened tens of thousands in China spreads worldwide, it now seems all but inevitable that millions of Americans are going to be infected with the flu-like illness known as COVID-19. Public-health officials in the United States have started preparing for what the Centers for Disease Control and Prevention is calling a “significant disruption” to daily life. Because more than 80 percent of cases are mild and many will show no symptoms at all, limiting the disease’s spread rests on the basics of prevention: Wash your hands well and frequently, cover your mouth when you cough, and stay home if you feel ill. But that last instruction may prove to be the Achilles’ heel of efforts to stymie the spread of COVID-19. The culture of the American workplace puts everyone’s health at unnecessary risk.

For all but the independently wealthy in America, the best-case scenario for getting sick is being a person with good health insurance, paid time off, and a reasonable boss who won’t penalize you for taking a few sick days or working from home. For millions of the country’s workers, such a scenario is a nearly inconceivable luxury. “With more than a third of Americans in jobs that offer no sick leave at all, many unfortunately cannot afford to take any days off when they are feeling sick,” Robyn Gershon, an epidemiology professor at the NYU School of Global Public Health, wrote in an email. “People who do not (or cannot) stay home when ill do present a risk to others.” On this count, the United States is a global anomaly, one of only a handful of countries that doesn’t guarantee its workers paid leave of any kind. These jobs are also the kind least likely to supply workers with health insurance, making it difficult for millions of people to get medical proof that they can’t go to work.

They’re also concentrated in the service industry or gig economy, in which workers have contact, directly or indirectly, with large numbers of people. These are the workers who are stocking the shelves of America’s stores, preparing and serving food in its restaurants, driving its Ubers, and manning its checkout counters. Their jobs tend to fall outside the bounds of paid-leave laws, even in states or cities that have them. Gershon emphasizes that having what feels like a head cold or mild flu—which COVID-19 will feel like to most healthy people—often isn’t considered a good reason to miss a shift by those who hold these workers’ livelihoods in their hands.

Read: You’ll likely get the coronavirus

Even if a person in one of these jobs is severely ill—coughing, sneezing, blowing her nose, and propelling droplets of virus-containing bodily fluids into the air and onto the surfaces around her—asking for time off means missing an hourly wage that might be necessary to pay rent or buy groceries. And even asking can be a risk in jobs with few labor protections, because in many states, there’s nothing to stop a company from firing you for being too much trouble. So workers with no good options end up going into work, interacting with customers, swiping the debit cards that go back into their wallets, making the sandwiches they eat for lunch, unpacking the boxes of cereal they take home for their kids, or driving them home from happy hour.

Even for people who have paid sick leave, Gershon noted, the choices are often only marginally better; seven days of sick leave is the American average, but many people get as few as three or four. “Many are hesitant to use [sick days] for something they think is minor just in case they need the days later for something serious,” she wrote. “Parents or other caregivers are also hesitant to use them because their loved ones might need them to stay home and care for them if they become ill.”

For workers with ample sick leave, getting it approved may still be difficult. America’s office culture often rewards those who appear to go above and beyond, even if that requires coughing on an endless stream of people. Some managers believe leadership means forcing their employees into the office at all costs, or at least making it clear that taking a sick day or working from home will be met with suspicion or contempt. In other places, employees bring their bug to work of their own volition, brown-nosing at the expense of their co-workers’ health.

Read: The gig economy has never been tested by a pandemic

Either way, the result is the same, especially in businesses that serve the public or offices with open plans and lots of communal spaces, which combine to form the majority of American workplaces. Even if your server at dinner isn’t sick, she might share a touch-screen workstation with a server who is. Everyone on your side of the office might be hale and healthy, but you might use a tiny phone booth to take a call right after someone whose throat is starting to feel a little sore. “Doorknobs, coffee makers, toilets, common-use refrigerators, sinks, phones, keyboards [can all] be a source of transmission if contaminated with the agent,” Gershon wrote. She advised that workers stay at least three to six feet away from anyone coughing or sneezing, but in office layouts that put desks directly next to one another with no partition in between—often to save money by giving workers less personal space—that can be impossible. No one knows how long COVID-19 can live on a dry surface, but in the case of SARS, another novel coronavirus, Gershon said it was found to survive for up to a week on inanimate objects.

Work culture isn’t the only structure of American life that might make a COVID-19 outbreak worse than it has to be—the inaccessible, precarious, unpredictable nature of the country’s health-care system could also play an important role. But tasking the workers who make up so much of the infrastructure of daily American life, often for low wages and with few resources, with the lion’s share of prevention in an effort to save thousands of lives is bound to fail, maybe spectacularly. It will certainly exact a cost on them, both mentally and physically, that the country has given them no way to bear.



AMANDA MULL is a staff writer at The Atlantic.

Nuclear Tests Marked Life on Earth With a Radioactive Spike

Even as it disappears, the “bomb spike” is revealing the ways humans have reshaped the planet.

Zoë van Dijk


Story by Carl Zimmer

MARCH 2, 2020

On the morning of March 1, 1954, a hydrogen bomb went off in the middle of the Pacific Ocean. John Clark was only 20 miles away when he issued the order, huddled with his crew inside a windowless concrete blockhouse on Bikini Atoll. But seconds went by, and all was silent. He wondered if the bomb had failed. Eventually, he radioed a Navy ship monitoring the test explosion.

“It’s a good one,” they told him.

Then the blockhouse began to lurch. At least one crew member got seasick—“landsick” might be the better descriptor. A minute later, when the bomb blast reached them, the walls creaked and water shot out of the bathroom pipes. And then, once more, nothing. Clark waited for another impact—perhaps a tidal wave—but after 15 minutes he decided it was safe for the crew to venture outside.

The mushroom cloud towered into the sky. The explosion, dubbed “Castle Bravo,” was the largest nuclear-weapons test up to that point. It was intended to try out the first hydrogen bomb ready to be dropped from a plane. Many in Washington felt that the future of the free world depended on it, and Clark was the natural pick to oversee such a vital blast. He was the deputy test director for the Atomic Energy Commission, and had already participated in more than 40 test shots. Now he gazed up at the cloud in awe. But then his Geiger counter began to crackle.

“It could mean only one thing,” Clark later wrote. “We were already getting fallout.”

That wasn’t supposed to happen. The Castle Bravo team had been sure that the radiation from the blast would go up to the stratosphere or get carried away by the winds safely out to sea. In fact, the chain reactions unleashed during the explosion produced a blast almost three times as big as predicted—1,000 times bigger than the Hiroshima bomb.

Within seconds, the fireball had lofted 10 million tons of pulverized coral reef, coated in radioactive material. And soon some of that deadly debris began dropping to Earth. If Clark and his crew had lingered outside, they would have died in the fallout.

Clark rushed his team back into the blockhouse, but even within the thick walls, the level of radiation was still climbing. Clark radioed for a rescue but was denied: It would be too dangerous for the helicopter pilots to come to the island. The team hunkered down, wondering if they were being poisoned to death. The generators failed, and the lights winked out.

“We were not a happy bunch,” Clark recalled.

They spent hours in the hot, radioactive darkness until the Navy dispatched helicopters their way. When the crew members heard the blades, they put on bedsheets to protect themselves from fallout. Throwing open the blockhouse door, they ran to nearby jeeps as though they were in a surreal Halloween parade, and drove half a mile to the landing pad. They clambered into the helicopters, and escaped over the sea.

Read: The people who built the atomic bomb

As Clark and his crew found shelter aboard a Navy ship, the debris from Castle Bravo rained down on the Pacific. Some landed on a Japanese fishing boat 70 miles away. The winds then carried it to three neighboring atolls. Children on the island of Rongelap played in the false snow. Five days later, Rongelap was evacuated, but not before its residents had received a near-lethal dose of radiation. Some people suffered burns, and a number of women later gave birth to severely deformed babies. Decades later, studies would indicate that the residents experienced elevated rates of cancer. 

The shocking power of Castle Bravo spurred the Soviet Union to build up its own nuclear arsenal, spurring the Americans in turn to push the arms race close to global annihilation. But the news reports of sick Japanese fishermen and Pacific islanders inspired a worldwide outcry against bomb tests. Nine years after Clark gave the go-ahead for Castle Bravo, the United States, Soviet Union, and Great Britain signed a treaty to ban aboveground nuclear-weapons testing. As for Clark, he returned to the United States and lived for another five decades, dying in 2002 at age 98.

Among the isotopes created by a thermonuclear blast is a rare, radioactive version of carbon, called carbon 14. Castle Bravo and the hydrogen-bomb tests that followed it created vast amounts of carbon 14, which have endured ever since. A little of this carbon 14 made its way into Clark’s body, into his blood, his fat, his gut, and his muscles. Clark carried a signature of the nuclear weapons he tested to his grave.

I can state this with confidence, even though I did not carry out an autopsy on Clark. I know this because the carbon 14 produced by hydrogen bombs spread over the entire world. It worked itself into the atmosphere, the oceans, and practically every living thing. As it spread, it exposed secrets. It can reveal when we were born. It tracks hidden changes to our hearts and brains. It lights up the cryptic channels that join the entire biosphere into a single network of chemical flux. This man-made burst of carbon 14 has been such a revelation that scientists refer to it as “the bomb spike.” Only now is the bomb spike close to disappearing, but as it vanishes, scientists have found a new use for it: to track global warming, the next self-inflicted threat to our survival.

Sixty-five years after Castle Bravo, I wanted to see its mark. So I drove to Cape Cod, in Massachusetts. I was 7,300 miles from Bikini Atoll, in a cozy patch of New England forest on a cool late-summer day, but Clark’s blast felt close to me in both space and time.

I made my way to the Woods Hole Oceanographic Institution, where I met Mary Gaylord, a senior research assistant. She led me to the lounge of Maclean Hall. Outside the window, dogwoods bloomed. Next to the Keurig coffee maker was a refrigerator with a sign that read “store only food in this refrigerator.” We had come to this ordinary spot to take a look at something extraordinary. Next to the refrigerator was a massive section of tree trunk, as wide as a dining-room table, resting on a pallet.

The beech tree from which this slab came was planted around 1870 by a Boston businessman named Joseph Story Fay, near his summer house in Woods Hole. The seedling grew into a towering, beloved fixture in the village. Lovelorn initials scarred its broad base. And then, after nearly 150 years, it started to rot from bark disease and had to come down.

“They had to have a ceremony to say goodbye to it. It was a very sad day,” Gaylord said. “And I saw an opportunity.”

Gaylord is an expert at measuring carbon 14. Before the era of nuclear testing, carbon 14 was generated outside of labs only by cosmic rays falling from space. They crashed into nitrogen atoms, and out of the collision popped a carbon 14 atom. Just one in 1 trillion carbon atoms in the atmosphere was a carbon 14 isotope. Fay’s beech took carbon dioxide out of the atmosphere to build wood, and so it had the same one-in-a-trillion proportion.

When Gaylord got word that the tree was coming down in 2015, she asked for a cross-section of the trunk. Once it arrived at the institute, she and two college students carefully counted its rings. Looking at the tree, I could see a line of pinholes extending from the center to the edge of the trunk. Those were the places where Gaylord and her students used razor blades to carve out bits of wood. In each sample, they measured the level of radiocarbon.

“In the end, we got what I hoped for,” she said. What she’d hoped for was a history of our nuclear era.

For most of the tree’s life, they found, the level had remained steady from one year to the next. But in 1954, John Clark initiated an extraordinary climb. The new supply of radiocarbon atoms in the atmosphere over Bikini Atoll spread around the world. When it reached Woods Hole, Fay’s beech tree absorbed the bomb radiocarbon in its summer leaves and added it to its new ring of wood.

As nuclear testing accelerated, Fay’s beech took on more radiocarbon. A graph pinned to the wall above the beech slab charts the changes. In less than a decade, the level of radiocarbon in the tree’s outermost rings nearly doubled to almost two parts per trillion. But not long after the signing of the Partial Test Ban Treaty in 1963, that climb stopped. After a peak in 1964, each new ring of wood in Fay’s beech carried a little less radiocarbon. The fall was far slower than the climb. The level of radiocarbon in the last ring the beech grew before getting cut down was only 6 percent above the radiocarbon levels before Castle Bravo. Versions of the same sawtoothlike peak Gaylord drew had already been found in other parts of the world, including the rings of trees in New Zealand and the coral reefs of the Galapagos Islands. In October 2019, Gaylord unveiled an exquisitely clear version of the bomb spike in New England.

When scientists first discovered radiocarbon, in 1940, they did not find it in a tree or any other part of nature. They made it. Regular carbon has six protons and six neutrons. At UC Berkeley, Martin Kamen and Sam Ruben blasted carbon with a beam of neutrons and produced a new form, with eight neutrons instead of six. Unlike regular carbon, these new atoms turned out to be a source of radiation. Every second, a small portion of the carbon 14 atoms decayed into nitrogen, giving off radioactive particles. Kamen and Ruben used that rate of decay to estimate carbon 14’s half-life at 4,000 years. Later research would sharpen that estimate to 5,700 years.

Soon after Kamen and Ruben’s discovery, a University of Chicago physicist named Willard Libby determined that radiocarbon existed beyond the walls of Berkeley’s labs. Cosmic rays falling from space smashed into nitrogen atoms in the atmosphere every second of every day, transforming those atoms into carbon 14. And because plants and algae drew in carbon dioxide from the air, Libby realized, they should have radiocarbon in their tissue, as should the animals that eat those plants (and the animals that eat those animals, for that matter).

Libby reasoned that as long as an organism is alive and taking in carbon 14, the concentration of the isotope in its tissue should roughly match the concentration in the atmosphere. Once an organism dies, however, its radiocarbon should decay and eventually disappear completely.

To test this idea, Libby set out to measure carbon 14 in living organisms. He had colleagues go to a sewage-treatment plant in Baltimore, where they captured the methane given off by bacteria feeding on the sewage. When the methane samples arrived in Chicago, Libby extracted the carbon and put it in a radioactivity detector. It crackled as carbon 14 decayed to nitrogen.

Read: Global warming could make carbon dating impossible

To see what happens to carbon 14 in dead tissue, Libby ran another experiment, this one with methane from oil wells. He knew that oil is made up of algae and other organisms that fell to the ocean floor and were buried for millions of years. Just as he had predicted, the methane from ancient oil contained no carbon 14 at all.

Libby then had another insight, one that would win him the Nobel Prize: The decay of carbon 14 in dead tissues acts like an archaeological clock. As the isotope decays inside a piece of wood, a bone, or some other form of organic matter, it can tell scientists how long ago that matter was alive. Radiocarbon dating, which works as far back as about 50,000 years, has revealed to us when the Neanderthals became extinct, when farmers domesticated wheat, and when the Dead Sea Scrolls were written. It has become the calendar of humanity.
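The clock in Libby’s insight is just the exponential-decay law applied to the 5,700-year half-life mentioned above. As a minimal sketch of the arithmetic only, in Python (working labs use calibrated curves, not this bare formula):

import math

HALF_LIFE_YEARS = 5_700  # carbon 14 half-life cited above

def radiocarbon_age(fraction_remaining: float) -> float:
    # N(t) = N0 * (1/2) ** (t / half_life), so t = half_life * log2(N0 / N(t))
    if not 0 < fraction_remaining <= 1:
        raise ValueError("fraction_remaining must be in (0, 1]")
    return HALF_LIFE_YEARS * math.log2(1 / fraction_remaining)

print(round(radiocarbon_age(0.5)))    # half its carbon 14 left: ~5,700 years old
print(round(radiocarbon_age(0.125)))  # an eighth left: ~17,100 years old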

Word of Libby’s breakthrough reached a New Zealand physicist named Athol Rafter. He began using radiocarbon dating on the bones of extinct flightless birds and ash from ancient eruptions. To make the clock more precise, Rafter measured the level of radiocarbon in the atmosphere. Every few weeks he climbed a hill outside the city of Wellington and set down a Pyrex tray filled with lye to trap carbon dioxide.

Rafter expected the level of radiocarbon to fluctuate. But he soon discovered that something else was happening: Month after month, the carbon dioxide in the atmosphere was getting more radioactive. He dunked barrels into the ocean, and he found that the amount of carbon 14 was rising in seawater as well. He could even measure extra carbon 14 in the young leaves growing on trees in New Zealand.

The Castle Bravo test and the ones that followed had to be the source. They were turning the atmosphere upside down. Instead of cosmic rays falling from space, they were sending neutrons up to the sky, creating a huge new supply of radiocarbon.

In 1957, Rafter published his results in the journal Science. The implications were immediately clear—and astonishing: Man-made carbon 14 was spreading across the planet from test sites in the Pacific and the Arctic. It was even passing from the air into the oceans and trees.

Other scientists began looking, and they saw the same pattern. In Texas, the carbon 14 levels in new tree rings were increasing each year. In Holland, the flesh of snails gained more as well. In New York, scientists examined the lungs of a fresh human cadaver, and found that extra carbon 14 lurked in its cells. A living volunteer donated blood and an exhalation of air. Bomb radiocarbon was in those, too.

Bomb radiocarbon did not pose a significant threat to human health—certainly not compared with other elements released by bombs, such as plutonium and uranium. But its accumulation was deeply unsettling nonetheless. When Linus Pauling accepted the 1962 Nobel Peace Prize for his campaigning against hydrogen bombs, he said that carbon 14 “deserves our special concern” because it “shows the extent to which the earth is being changed by the tests of nuclear weapons.”

Photos: When we tested nuclear bombs

The following year, the signing of the Partial Test Ban Treaty stopped aboveground nuclear explosions, and ended the supply of bomb radiocarbon. All told, those tests produced about 60,000 trillion trillion new atoms of carbon 14. It would take cosmic rays 250 years to make that much. In 1964, Rafter quickly saw the treaty’s effect: His trays of lye had less carbon 14 than they had the year before.

Only a tiny fraction of the carbon 14 was decaying into nitrogen. For the most part, the atmosphere’s radiocarbon levels were dropping because the atoms were rushing out of the air. This exodus of radiocarbon gave scientists an unprecedented chance to observe how nature works.

Today scientists are still learning from these man-made atoms. “I feel a little bit bad about it,” says Kristie Boering, an atmospheric chemist at UC Berkeley who has studied radiocarbon for more than 20 years. “It’s a huge tragedy, the fact that we set off all these bombs to begin with. And then we get all this interesting scientific information from it for all these decades. It’s hard to know exactly how to pitch that when we’re giving talks. You can’t get too excited about the bombs that we set off, right?”

Yet the fact remains that for atmospheric scientists like Boering, bomb radiocarbon has lit up the sky like a tracer dye. When nuclear triggermen such as John Clark set off their bombs, most of the resulting carbon 14 shot up into the stratosphere directly above the impact sites. Each spring, parcels of stratospheric air gently fell down into the troposphere below, carrying with them a fresh load of carbon 14. It took a few months for these parcels to settle on weather stations on the ground. Only by following bomb radiocarbon did scientists discover this perpetual avalanche.


Once carbon 14 fell out of the stratosphere, it kept moving. The troposphere is made up of four great rings of circulating air. Inside each ring, warm air rises and flows through the sky away from the equator. Eventually it cools and sinks back to the ground, flowing toward the equator again before rising once more. At first, bomb radiocarbon remained trapped in the Northern Hemisphere rings, above where the tests had taken place. It took many years to leak through their invisible walls and move toward the tropics. After that, the annual monsoons sweeping through southern Asia pushed bomb radiocarbon over the equator and into the Southern Hemisphere. 
Zoë van Dijk

Eventually, some of the bomb radiocarbon fell all the way to the surface of the planet. Some of it was absorbed by trees and other plants, which then died and delivered some of that radiocarbon to the soil. Other radiocarbon atoms settled into the ocean, to be carried along by its currents.

Carbon 14 “is inextricably linked to our understanding of how the water moves,” says Steve Beaupre, an oceanographer at Stony Brook University, in New York.

In the 1970s, marine scientists began carrying out the first major chemical surveys of the world’s oceans. They found that bomb radiocarbon had penetrated the top 1,000 meters of the ocean. Deeper than that, it became scarce. This pattern helped oceanographers figure out that the ocean, like the atmosphere above, is made up of layers of water that remain largely separate.

The warm, relatively fresh water on the surface of the ocean glides over the cold, salty depths. These surface currents become saltier as they evaporate, and eventually, at a few crucial spots on the planet, these streams get so dense that they fall to the bottom of the ocean. The bomb radiocarbon from Castle Bravo didn’t start plunging down into the depths of the North Atlantic until the 1980s, when John Clark was two decades into retirement. It’s still down there, where it will be carried along the seafloor by bottom-hugging ocean currents for hundreds of years before it rises to the light of day.

Some of the bomb radiocarbon that falls into the ocean makes its way into ocean life, too. Some corals grow by adding rings of calcium carbonate, and they have recorded their own version of the bomb spike. Their spike lagged well behind the one that Rafter recorded, thanks to the extra time the radiocarbon took to mix into the ocean. Algae and microbes on the surface of the ocean also take up carbon from the air, and they feed a huge food web in turn. The living things in the upper reaches of the ocean release organic carbon that falls gently to the seafloor—a jumble of protoplasmic goo, dolphin droppings, starfish eggs, and all manner of detritus that scientists call marine snow. In recent decades, that marine snow has become more radioactive.

In 2009, a team of Chinese researchers sailed across the Pacific and dropped traps 36,000 feet down to the bottom of the Mariana Trench. When they hauled the traps up, there were minnow-size, shrimplike creatures inside. These were Hirondellea gigas, a deep-sea invertebrate that forages on the seafloor for bits of organic carbon. The animals were flush with bomb radiocarbon—a puzzling discovery, because the organic carbon that sits on the floor of the Mariana Trench is thousands of years old. It was as if they had been dining at the surface of the ocean, not at its greatest depths. In a few of the Hirondellea, the researchers found undigested particles of organic carbon. These meals were also high in carbon 14.

Read: A troubling discovery in the deepest ocean trenches

The bomb radiocarbon could not have gotten there by riding the ocean’s conveyor belt, says Ellen Druffel, a scientist at UC Irvine who collaborated with the Chinese team. “The only way you can get bomb carbon by circulation down to the deep Pacific would take 500 years,” she says. Instead, Hirondellea must be dining on freshly fallen marine snow.

“I must admit, when I saw the data it was really amazing,” Druffel says. “These organisms were sifting out the very youngest material from the surface ocean. They were just leaving behind everything else that came down.”

More than 60 years have passed since the peak of the bomb spike, and yet bomb radiocarbon is telling us new stories about the world. That’s because experts like Mary Gaylord are getting better at gathering these rare atoms. At Woods Hole, Gaylord works at the National Ocean Sciences Accelerator Mass Spectrometry facility (NOSAMS for short). She prepares samples for analysis in a thicket of pipes, wires, glass tubes, and jars of frothing liquid nitrogen. “Our whole life is vacuum lines and vacuum pumps,” she told me.

At NOSAMS, Gaylord and her colleagues measure radiocarbon in all manner of things: sea spray, bat guano, typhoon-tossed trees. The day I visited, Gaylord was busy with fish eyes. Black-capped vials sat on a lab bench, each containing a bit of lens from a red snapper.

The wispy, pale tissue had come to NOSAMS from Florida. A biologist named Beverly Barnett had gotten hold of eyes from red snapper caught in the Gulf of Mexico and sliced out their lenses. Barnett then peeled away the layers of the lenses one at a time. When she describes this work, she makes it sound like woodworking or needlepoint—a hobby anyone would enjoy. “It’s like peeling off the layers of an onion,” she told me. “It’s really nifty to see.”

Eventually, Barnett made her way down to the tiny nub at the center of each lens. These bits of tissue developed when the red snapper were still in their eggs. And Barnett wanted to know exactly how much bomb radiocarbon is in these precious fragments. In a couple of days, Gaylord and her colleagues would be able to tell her.

Gaylord started by putting the lens pieces into an oven that slowly burned them away. The vapors and smoke flowed into a pipe, chased by helium and nitrogen. Gaylord separated the carbon dioxide from the other compounds, and then shunted it into chilled glass tubes. There it formed a frozen fog on the inside walls.

Later, the team at NOSAMS would transform the frozen carbon dioxide into chips of graphite, which they would then load into what looks like an enormous, crooked laser cannon. At one end of the cannon, graphite gets vaporized, and the liberated carbon atoms fly down the barrel. By controlling the magnetic field and other conditions inside the cannon, the researchers cause the carbon 14 atoms to veer away from the carbon 12 atoms and other elements. The carbon 14 atoms fly onward on their own until they strike a sensor.


Ultimately, all of this effort will end up in a number: the number of carbon 14 atoms in the red-snapper lens. For Barnett, every one of those atoms counts. They can tell her the exact age of the red snapper when the fish were caught.

That’s because lenses are peculiar organs. Most of our cells keep making new proteins and destroying old ones. Cells in the lens, however, fill up with light-bending proteins and then die, their proteins locked in place for the rest of our life. The layers of cells at the core of the red-snapper lenses have the same carbon 14 levels that they did when the fish were in their eggs.

Using lenses to estimate the ages of animals is still a new undertaking. But it’s already delivered some surprises. In 2016, for example, a team of Danish researchers studied the lenses from Greenland sharks ranging in size from two and a half to 16 feet long. The lenses of the sharks up to seven feet long had high levels of radiocarbon in them. That meant the sharks had hatched no earlier than the 1960s. The bigger sharks all had much lower levels of radiocarbon in their lenses—meaning that they had been born before Castle Bravo. By extrapolating out from these results, the researchers estimated that Greenland sharks have a staggeringly long life span, reaching up to 390 years or perhaps even more.

Barnett has been developing an even more precise clock for her red snapper, taking advantage of the fact that the level of bomb radiocarbon peaked in the Gulf of Mexico in the 1970s and has been falling ever since. By measuring the level of bomb radiocarbon in the center of the snapper lenses, she can determine the year when the fish hatched.


Knowing the age of fish with this kind of precision is powerful. Fishery managers can track the ages of the fish that are caught each year, information that they can then use to make sure their stocks don’t collapse. Barnett wants to study fish in the Gulf of Mexico to see how they were affected by the Deepwater Horizon oil spill of 2010. Their eyes can tell her how old they were when they were hit by that disaster.

When it comes to carbon, we are no different than red snapper or Greenland sharks. We use the carbon in the food we eat to build our body, and the level of bomb radiocarbon inside of us reflects our age. People born in the early 1960s have more radiocarbon in their lenses than people born before that time. People born in the years since then have progressively less.

For forensic scientists who need to determine the age of skeletal remains, lenses aren’t much help. But teeth are. As children develop teeth, they incorporate carbon into the enamel. If people’s teeth have a very low level of radiocarbon, it means that they were born well before Castle Bravo. People born in the early 1960s have high levels of radiocarbon in their molars, which develop early, and lower levels in their wisdom teeth, which grow years later. By matching each tooth in a jaw to the bomb curve, forensic scientists can estimate the age of a skeleton to within one or two years.
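To make that matching concrete, here is a toy sketch in Python. The curve values below are invented for illustration (shaped loosely like the tree-ring history described earlier, in parts of carbon 14 per trillion carbon atoms); real forensic work relies on calibrated atmospheric records and other evidence, not a table like this.

# Illustrative values only, not a calibration dataset.
BOMB_CURVE = {
    1950: 1.00, 1958: 1.30, 1964: 1.95, 1970: 1.70, 1980: 1.40,
    1990: 1.22, 2000: 1.12, 2010: 1.07, 2015: 1.06,
}

def estimate_formation_year(measured_level: float, after_peak: bool) -> int:
    # The curve rises and then falls, so one level can match two years;
    # the caller says which side of the 1964 peak to search.
    candidates = {y: v for y, v in BOMB_CURVE.items() if (y >= 1964) == after_peak}
    return min(candidates, key=lambda y: abs(candidates[y] - measured_level))

# Enamel that formed at roughly 1.4 parts per trillion, on the falling side
# of the curve, points to a tooth laid down around 1980.
print(estimate_formation_year(1.4, after_peak=True))  # 1980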

Even after childhood, bomb radiocarbon chronicles the history of our body. When we build new cells, we make DNA strands out of the carbon in our food. Scientists have used bomb radiocarbon in people’s DNA to determine the age of their cells. In our brains, most of the cells form around the time we’re born. But many cells in our hearts and other organs are much younger.

We also build other molecules throughout our lives, including fat. In a September 2019 study, Kirsty Spalding of the Karolinska Institute, near Stockholm, used bomb radiocarbon to study why people put on weight. Researchers had long known that our level of fat is the result of how much new fat we add to our body relative to how much we burn. But they didn’t have direct evidence for exactly how that balance influences our weight over the course of our life.

Spalding and her colleagues found 54 people from whom doctors had taken fat biopsies and asked if they could follow up. The fat samples spanned up to 16 years. By measuring the age of the fat in each sample, the researchers could estimate the rate at which each person added and removed fat over their lives.

The reason we put on weight as we get older, the researchers concluded, is that we get worse at removing fat from our bodies. “Before, you could intuitively believe that the rate at which we burn fat decreases as we age,” Spalding says, “but we showed it for the first time scientifically.”

Unexpectedly, though, Spalding discovered that the people who lost weight and kept it off successfully were the ones who burned their fat slowly. “I was quite surprised by that data,” Spalding said. “It adds new and interesting biology to understanding how to help people maintain weight loss.”

Children who are just now going through teething pains will have only a little more bomb radiocarbon in their enamel than children born before Castle Bravo did. Over the past six decades, the land and ocean have removed much of what nuclear bombs put into the air. Heather Graven, a climate scientist at Imperial College London, is studying this decline. It helps her predict the future of the planet.

Graven and her colleagues build models of the world to study the climate. As we emit fossil fuels, the extra carbon dioxide traps heat. How much heat we’re facing in centuries to come depends in part on how much carbon dioxide the oceans and land can remove. Graven can use the rise and fall of bomb radiocarbon as a benchmark to test her models.

In a recent study, she and her colleagues unleashed a virtual burst of nuclear-weapons tests. Then they tracked the fate of her simulated bomb radiocarbon to the present day. Much to Graven’s relief, the radiocarbon in the atmosphere quickly rose and then gradually fell. The bomb spike in her virtual world looks much like the one recorded in Joseph Fay’s beech tree.
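Graven’s models are far more detailed than anything that fits in a few lines, but a toy two-box sketch (emphatically not her model) shows why an impulse of radiocarbon makes such a useful benchmark: pulse some excess carbon 14 into a small atmosphere box that exchanges with a much larger ocean-and-land reservoir, and the atmospheric excess spikes and then drains away, tracing a crude bomb curve.

# A toy two-box sketch, not Graven's model: a small atmosphere exchanges
# excess carbon 14 with a much larger ocean-and-land reservoir.
ATM_SIZE, RESERVOIR_SIZE = 1.0, 60.0  # relative carbon pool sizes (assumed)
EXCHANGE_RATE = 0.10                  # exchange coefficient per year (assumed)

atm_excess, reservoir_excess = 100.0, 0.0  # arbitrary units, pulsed in "1963"
history = []
for year in range(1963, 2021):
    history.append((year, round(atm_excess, 1)))
    gap = atm_excess / ATM_SIZE - reservoir_excess / RESERVOIR_SIZE
    flux = EXCHANGE_RATE * gap        # flux follows the concentration difference
    atm_excess -= flux
    reservoir_excess += flux

print(history[0])   # (1963, 100.0) -- the spike
print(history[-1])  # by 2020 most of the excess has drained into the reservoir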

Graven can keep running her simulation beyond what Fay’s beech and other records tell us about the past. According to her model, the level of radiocarbon in the atmosphere should drop in 2020 to the level before Castle Bravo.

“It’s right around now that we’re crossing over,” Graven told me.

Graven will have to wait for scientists to analyze global measurements of radiocarbon in the air to see whether she’s right. That’s important to find out, because Graven’s model suggests that the bomb spike is falling faster than the oceans and land alone can account for. When the ocean and land draw down bomb radiocarbon, they also release some of it back into the air. That two-way movement of bomb radiocarbon ought to cause its concentration in the atmosphere to level off a little above the pre–Castle Bravo mark. Instead, Graven’s model suggests, it continues to fall. She suspects that the missing factor is us.


We mine coal, drill for oil and gas, and then burn all that fossil fuel to power our cars, cool our houses, and run our factories. In 1954, the year that John Clark set off Castle Bravo, humans emitted 6 billion tons of carbon dioxide into the air. In 2018, humans emitted about 37 billion tons. As Willard Libby first discovered, this fossil fuel has no radiocarbon left. By burning it, we are lowering the level of radiocarbon in the atmosphere, like a bartender watering down the top-shelf liquor.

If we keep burning fossil fuels at our accelerating rate, the planet will veer into climate chaos. And once more, radiocarbon will serve as a witness to our self-destructive actions. Unless we swiftly stop burning fossil fuels, we will push carbon 14 down far below the level it was at before the nuclear bombs began exploding.

To Graven, the coming radiocarbon crash is just as significant as the bomb spike has been. “We’re transitioning from a bomb signal to a fossil-fuel-dilution signal,” she said.

The author Jonathan Weiner once observed that we should think of burning fossil fuels as a disturbance on par with nuclear-weapon detonations. “It is a slow-motion explosion manufactured by every last man, woman and child on the planet,” he wrote. If we threw up our billions of tons of carbon into the air all at once, it would dwarf Castle Bravo. “A pillar of fire would seem to extend higher into the sky and farther into the future than the eye can see,” Weiner wrote.

Bomb radiocarbon showed us how nuclear weapons threatened the entire world. Today, everyone on Earth still carries that mark. Now our pulse of carbon 14 is turning into an inverted bomb spike, a new signal of the next great threat to human survival.
CARL ZIMMER is a columnist at The New York Times. His latest book is She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity.