Sunday, October 18, 2020

Is Bolivia poised to swing back towards socialism? 


Supporters take part in an offering to the Pachamama (Mother Earth) during a closing campaign rally of the Movement to Socialism party (Mas) ahead of the Bolivian presidential election, in El Alto, on Wednesday. Photograph: David Mercado/Reuters


A year after the country’s first indigenous president was controversially ousted (CIA COUP BY INEPT RIGHTWINGERS), his party is well placed to win a rerun presidential election

by Tom Phillips and Cindy Jiménez Bercerra in La Paz
Sat 17 Oct 2020 

David Ticona Mamani felt despair and foreboding when Evo Morales was forced from his Andean homeland last November amid civil unrest, electoral meltdown and what supporters of Bolivia’s first indigenous president called a racist, rightwing coup.

“I wept,” remembered the 56-year-old lawyer, a fervent supporter of Morales and his Movimiento al Socialismo (Mas).

“Evo represents the rebirth of Bolivia’s indigenous people. He gave us back our self-esteem, our pride in being indigenous, of having indigenous surnames, of our food … Evo Morales was Bolivia’s best president ever.”

With Morales condemned to exile – first in Mexico, now Argentina – believers such as Mamani were left politically orphaned and the Mas in disarray.

Jeanine Áñez, a rightwing senator who once branded Bolivia’s indigenous people “satanic”, took power as caretaker president, bringing a sudden and shocking end to nearly 14 years of leftwing rule during which the country’s long-excluded originarios (native peoples) finally took centre stage.

Activists have since accused Áñez’s government of using the justice system to wage a politically motivated witch-hunt against Morales and his allies.

But nearly 12 months after last year’s convulsion, Morales’ Movement Towards Socialism could be about to pull off a sensational political comeback in Sunday’s twice-postponed presidential election.
Luis Arce, the presidential candidate of Evo Morales’s Mas party, holds a ball during his closing rally in El Alto. Photograph: Gaston Brito Miserocchi/Getty Images

The vote is a rerun of the doomed October 2019 election which was voided after incendiary claims of electoral fraud from the Organization of American States (OAS) fuelled protests and saw Morales resign under pressure from security forces.

Polls suggest the Mas candidate, Morales’s UK-educated former finance minister Luis Arce, has the edge over his main challenger, a centrist journalist and former president called Carlos Mesa.

“They [Mas] are in the driver’s seat and if they can mobilize voters this weekend – and they are the only party with the capacity to do that – they could do very well,” said Eduardo Gamarra, a Bolivia expert at Florida International University.

Gamarra thought a second round – which 67-year-old Mesa would probably win – remained the more likely prospect. A run-off will be held on 29 November unless a candidate secures either an outright majority or at least 40% of the votes with a 10-point lead over the runner-up. The third major candidate is Luis Fernando Camacho of the new rightwing Creemos (“We believe”) alliance. Áñez withdrew her candidacy last month, saying she did not want to split the conservative vote.

But because Morales’s rivals had “atomized” the anti-Mas vote, it was not far-fetched to imagine that Arce, a softly spoken career civil servant who boasts a master’s degree from the University of Warwick, might triumph at the first time of asking.

“There is quite possibly a scenario where the Mas essentially picks up where it left off, only with Luis Arce as president,” said Gamarra.

Arce talked up that possibility on Wednesday at his final campaign rally – a high-altitude celebration of flag waving and dance in El Alto, a bastion of Morales support above the de facto capital La Paz.
 
Graffiti depicting the former president Evo Morales in La Paz. The exiled Morales still overshadows Bolivian politics. Photograph: Juan Karita/AP

“They thought they were going to kill the Movement Towards Socialism. But we’re here in El Alto to tell them: ‘We’re here and we’re alive!’” the 57-year-old candidate told supporters clad in the group’s blue, white and black colours.

“The right robbed the people and have shown their inability to govern,” Arce added in reference to charges that Áñez and her cabinet took power illegitimately and botched the response to Covid-19, which has killed thousands of Bolivians.

Mamani was also hopeful of a first-round victory and believed Arce could “relaunch” Bolivia.

But, like many Mas voters, he feared “a monumental fraud” was being cooked up with the acquiescence of the United States and the OAS, whose disputed claims about vote rigging in last year’s election played a key role in forcing Morales overseas.

This week a senior US state department official maintained that Morales’s claim to have won the 2019 election was “the product of massive fraud” and hinted that support for an Arce presidency was not completely assured.

“We look forward to working with whomever the Bolivians freely and fairly choose to be their president,” the official told journalists vaguely, praising the protesters who rose up against Morales last year for having “defended their democracy”.

If the prospect of a socialist revival has Masistas overjoyed, it is the stuff of nightmares for Morales’s detractors, who regard him as a power-obsessed authoritarian bent on clinging to power and destroying Bolivian democracy.

Morales’s bid to secure an unprecedented fourth straight term last year came despite voters denying him that right in a 2016 referendum, the result of which he ignored. 
The presidential candidate Carlos Mesa delivers a speech during the closing rally of his campaign in the lowland city of Santa Cruz on Tuesday. Photograph: Enrique Canedo/AFP/Getty Images

Libertad Gabriela Vaca Poehlmann, the president of an opposition group called Unidos en Acción (United in Action), remembered her elation as the former president fled to Mexico City on 10 November last year.

“I felt relief. I felt hope. I felt freedom,” said Poehlmann, 45, one of thousands of citizens who took to Bolivia’s streets last year to pile pressure on Morales.

Twelve months later she fretted his movement might mount what had once seemed an unlikely comeback and urged voters to back whichever candidate they felt was best placed to prevent that. “If Mas came back … it would be terrible for the country. As the saying goes: ‘People get the governors they deserve’,” Poehlmann said.

Foreign diplomats and voters on both sides voice fears that another disputed result could lead to a repeat of last year’s violence, when at least 36 people, most of them Mas supporters, lost their lives. And tensions have been building in the lead-up to the vote, with reports of paramilitary groups attacking Mas activists and some panicked citizens reportedly stockpiling food in anticipation of possible turmoil.

Observers are convinced Morales will seek to return to Bolivia, and possibly frontline politics, if Arce wins.

“He’s a political animal. His whole life is about politics. So he will try to come back and there might be some tensions,” said Diego von Vacano, a Bolivian political scientist at Texas A&M University. “But for the good of the party … I think Evo might play a bit more of a symbolic role as opposed to a more active, commander role,” he added.

 
Employees of the electoral court, guarded by the military police, load a truck with electoral material to be distributed for Sunday’s general election, in La Paz, on Friday. Photograph: AFP/Getty Images

Vacano denied Arce was merely a proxy for Morales, who was placed under investigation for alleged acts of terrorism by Bolivia’s conservative caretaker government and is barred from running himself.

“Arce is not a puppet,” insisted the academic who has been informally advising the candidate’s campaign. “He’s aware that Evo is the historic leader of the Mas. But this is a new period and it requires a different approach. He has been pretty clear that he wants to do it his own way.”

Mamani said he also hoped the former president would step back, despite his affection for Morales and the commodity-fuelled social and economic progress he oversaw after his historic 2005 election.

“We need to see the rotation of power. No matter how good a leader is they shouldn’t stay in power permanently. You need change.”

“He spent 14 years working. Saturdays, Sundays, bank holidays. From 5am to midnight,” Mamani said of Morales. “It’s time for him to rest.”

In Netflix’s The Trial of the Chicago 7, Aaron Sorkin tackles an all-too-relevant court case

The star-studded drama returns to the past with a purpose.
Eddie Redmayne in The Trial of the Chicago 7. Niko Tavernise/Netflix


Any time a story from history is retold for the big screen (or, these days, for the little screen), one fundamental question must be answered: Why now?

Filmmakers don’t (or shouldn’t) revisit the past just because they think it’s kind of a cool story that will make bank at the box office. Real people’s lives are being mined for material, after all. So if you’re going to retell a historical tale, you need a reason: parallels to the present, or inspiring heroism, or a lesson of some kind.

We can ask this question of Aaron Sorkin’s The Trial of the Chicago 7 and find several obvious answers. Sorkin — one of the few screenwriters whose name is a household brand unto itself — originally wrote the script back in 2007, but the project got shelved during the 2007–’08 writers’ strike. He picked it up again in 2018 with a presidential election in the middle distance, and it’s easy to understand why: The film is a lightly fictionalized courtroom drama based on the six-month trial of seven men accused of conspiring to cross state lines and incite riots at the 1968 Democratic National Convention in Chicago. And it rings all those “now more than ever” bells that Hollywood has loved to ring (and ring and ring) during the Trump era, albeit with a little more finesse than some.

Rating: 3.5 out of 5

Does The Trial of the Chicago 7 work as a film? Sometimes! From his TV series The West Wing to movies like The Social Network and Steve Jobs, Sorkin is indelibly associated with a few idiosyncrasies, two of which matter most here: a tight, wordy dialogue style (often fired off while speakers hurry from one place to another), and grandstanding characters with progressive but rarely radical notions of American politics. By those markers, The Trial of the Chicago 7 is identifiably Sorkin’s work, sometimes to its detriment, particularly as the movie rounds third base and heads for home plate.

But the movie is effective in spite of its foibles. It’s an ensemble piece that tells a complex story cleanly. And even its missteps hint at why Sorkin chose to return to this historical moment now.
Sorkin puts a Hollywood gloss on the story of the Chicago 7. It’s mostly successful.

The Chicago 7, played in the movie by a uniformly outstanding cast, were Abbie Hoffman (Sacha Baron Cohen), Jerry Rubin (Jeremy Strong), David Dellinger (John Carroll Lynch), Tom Hayden (Eddie Redmayne), Rennie Davis (Alex Sharp), John Froines (Danny Flaherty), and Lee Weiner (Noah Robbins). All seven men were activists who used different tactics but shared the same goal: to end the war in Vietnam. (I don’t know if Cohen and Strong are the best of the bunch, but their performances suggest they’re having an immense amount of fun; Lynch is particularly good, as well, reminding me he’s one of the great unsung character actors of our time.)

Representing different organizations and not coordinating with one another, they all traveled to Chicago in 1968 to participate in protests outside the DNC that would grab the attention of not so much the delegates as the entire country. Denied permits by the city, their demonstrations ended with police beatings and bloodshed, which they contended were started by Chicago police. The federal government charged the men with conspiracy and crossing state lines with intent to start a riot, and the trial began in September 1969 under Judge Julius Hoffman (Frank Langella).
Kelvin Harrison Jr., Yahya Abdul-Mateen II, Mark Rylance, Aaron Sorkin, and Eddie Redmayne on the set of The Trial of the Chicago 7. Niko Tavernise/Netflix

An eighth man, Bobby Seale (a stunning Yahya Abdul-Mateen II), co-founder of the Black Panther Party, was also in Chicago to speak at a demonstration. He was swept into their case, and famously petitioned the court to delay the trial so the attorney of his choosing could have gallbladder surgery. After he was denied by Judge Hoffman, he petitioned to represent himself, which the judge also denied, and then continued to loudly protest this breach of his rights during the hearing. Eventually, he was bound and gagged in the courtroom; then he was severed from the trial altogether, leaving the other seven men as co-defendants.


The introductions of all of these men, and the first half of the film, are mainly devoted to showing their different styles of anti-war activism. Hoffman and Rubin are the disruptive hippies; Dellinger the peaceful grownup; Hayden the principled statesman; Davis the young radical; and Froines and Weiner are just happy to find themselves in such august company. What they all have in common is their intense hatred for the Vietnam War and the fact that they are white.

Seale, in clear contrast, is Black. And we’re meant to understand that the judge’s actions toward him — which differ from the way he treats the seven white defendants — are part of the long-running American tradition of justice lifting her blindfold.

At the center of the trial are the men’s attorney William Kunstler (Mark Rylance, tremendous as always) and the government prosecutor Richard Schultz (Joseph Gordon-Levitt). The latter is a character who, by all accounts, has been substantially altered for this film, presumably to transform him into an avatar for those in the audience inclined to cock an eyebrow at the defendants. The historical record suggests Schultz was more of a hard-driving ideologue than the even-handed attorney we meet in The Trial of the Chicago 7, who gets to play the part of, if not a hero, at least a Pretty Good Guy by the end.

Softening Schultz is one of a number of tweaks to the facts that Sorkin makes for the film, something he has done plenty of times in the past; The Social Network, which might be his best script, plays very fast and loose with characters and events alike. Sorkin’s aim is to tell a good story, and reality does not always comply. The fun of being a screenwriter is that you get to create reality.
Caitlin Fitzgerald, Alan Metoskie, Alex Sharp, Jeremy Strong, John Carroll Lynch, Sacha Baron Cohen, and Noah Robbins in The Trial of the Chicago 7. Niko Tavernise / Netflix

There’s a reason we need these reminders of the past

How you feel about Sorkin’s historical liberties will probably determine how you react to this film. Not because anyone thinks The Trial of the Chicago 7 should have been a documentary — there have already been several about the same sequence of events, and you can stream them if you like — but because Sorkin takes those liberties to fit this tale to the contours of the classic Hollywood courtroom drama. And classic Hollywood courtroom dramas have to end in triumph, the underdog winning out over those of whom society approves.

I was with the film right till the end, when it makes this heel turn, which I think is ineffective — or, at least, could have been more effective handled another way, one that would probably have involved hewing more closely to the facts. Sorkin doesn’t change the outcome of the trial, but the way he moves pieces of history around is clearly bent toward turning The Trial of the Chicago 7 into a Hollywood tale of underdog courtroom triumph. (I don’t want to spoil the movie’s beats, but I will say that Sorkin’s placement of events near its conclusion, combined with the requisite swelling triumphal music, shifts the tone of The Trial of the Chicago 7 into the kind of fairy tale that I’d hoped the movie would avoid.)


But the way he ends the film gives me the sense that Sorkin’s answer to the “why now?” question would be simple: Because very little has changed. The forces that tried to pin the Chicago 7, not to mention Bobby Seale, to the wall are still active and powerful. We hear a lot of the same rhetoric today. And retelling the story has an effect — especially when you put a bunch of movie stars in it and send it to Netflix, where it’s bound to be seen by a lot of people.

Maybe Sorkin’s idea is to stir people to action. But I think the movie answers the question of “why now” a little differently. For people like me — a 30-something whose parents were still in grade school when this monumental trial went down — a glossy Hollywood movie like The Trial of the Chicago 7, about things I can’t remember and that many people would like society to forget, can do something truly useful.

Here’s why: In my adult lifetime, I’ve lived through 9/11, various unending wars, a memorable uptick in blatant hate toward ethnic and religious minorities, mounting environmental insecurity, and multiple “once-in-a-lifetime” recessions. That’s without even mentioning Donald Trump’s disastrous, norms-obliterating administration, which has had the additional effect of destroying the trust many Americans below the age of 40 once had in governmental, social, and religious institutions. From my side of the age divide, more often than not, things seem pretty bleak.
Mark Rylance and Eddie Redmayne in The Trial of the Chicago 7. Niko Tavernise / Netflix

I’ve responded by dipping back into history — specifically, by going back a half-century, to right around the late 1960s. What I’ve found there is depressing, and a little comforting. Depressing because much of what we hear in public discourse today about law and order, radicals, riots, policing, voter suppression, and all the rest is just ripped out of the past and barely even repackaged. What we see on the news isn’t even a reboot; it feels like a lazy rerun, sped up by 50 percent.

But comforting because it destroys the fanciful notion peddled by too many leaders that things were better not all that long ago. Studying this history puts our current reality on a continuum with the past, rather than representing it as a uniquely terrible time in human history. We know the world we are inheriting is a wreck; it’s useful to understand exactly why, and to see which myths we hear from grandstanding politicians made it so.

And retellings like The Trial of the Chicago 7 are an invitation to imagine which threads of goodness we can hang onto. Sorkin’s fairy-tale ending is, I think, a bit of a misstep, shifting the tone away from sobriety toward something significantly more self-congratulatory.

But one theme his chosen ending underlines is that, at least in his rendering, the fight over Vietnam and the fight over policing and the fight over who matters to the law is, ultimately, a fight about who is worth honoring. Those who are lost in political fights are too often those who fell on battlefields or in parks or city streets, caught in a firestorm they didn’t start. Honoring them is an act of revolution — and The Trial of the Chicago 7 argues that the fight to keep them from being lost in the first place has been going on a long, long time.

The Trial of the Chicago 7 is streaming on Netflix.

2020’s marijuana legalization ballot measures, explained

If the measures win, more than one in three Americans will live in a state where marijuana is legal.


Zac Freeland/Vox

By German Lopez | @germanrlopez | german.lopez@vox.com | Oct 16, 2020, 1:00pm EDT


Between the presidential election, governors’ races, and down-ballot contests, this year’s election features a lot of important choices. Among those, voters in five states will have a chance to legalize marijuana for recreational or medical uses.

In Arizona, Montana, New Jersey, and South Dakota, voters could legalize marijuana for recreational purposes. In Mississippi and South Dakota (in a ballot initiative separate from the full legalization measure), voters could also legalize medical marijuana.

If all these measures are approved, the United States would go from having 11 states in which marijuana is legal to 15. Counting by population, that would mean more than a third of Americans would live in a state with legalized marijuana, up from more than a quarter today.

The ballot initiatives represent a massive shift in drug policy. A decade ago, zero states had legalized marijuana. Then, in 2012, Colorado and Washington became the first two states to legalize cannabis for recreational use and sales.

Despite the success of state measures, marijuana remains illegal at the federal level. But since President Barack Obama’s administration, the federal government has generally taken a hands-off approach to states’ marijuana initiatives. There are still hurdles — banking is a challenge for marijuana businesses under federal prohibition — but for the most part the federal government has not interfered in states’ laws since 2013.

That policy may reflect a change in public opinion — one that would make a federal crackdown on marijuana legalization very unpopular: As it stands, public opinion surveys show that even a majority of Republicans, who tend to take more anti-marijuana views than their Democratic and independent peers, support legalization.


In that context, legalization advocates are optimistic about their prospects this year, even in historically red states like Arizona, Montana, and South Dakota.
Marijuana legalization in Arizona, Montana, New Jersey, and South Dakota

In November, four states will vote on whether to legalize marijuana for recreational purposes. They would all allow sales, leading to the kind of tax-and-regulate, commercialized system that’s taken form in other legalization states.

Here are the 2020 ballot measures:
Arizona: Proposition 207 would legalize marijuana possession and use for adults 21 years or older, and would let individuals grow up to six cannabis plants. It would charge the Arizona Department of Health Services with licensing and regulating marijuana businesses, from retailers to growers, and impose a 16 percent tax on marijuana sales. Local governments could ban marijuana businesses within their borders. It would also let people with criminal records related to marijuana petition for expungement. It’s similar to a 2016 ballot measure that narrowly failed, but activists believe that support for legalization has grown since then.
Montana: A constitutional amendment, CI-118, would let the legislature or a ballot initiative set a legal age for marijuana. A statutory measure, I-190, would allow possession and use for adults 21 and older, letting them grow up to four marijuana plants and four seedlings for personal use. I-190 would task the Department of Revenue with setting up and regulating a commercial system for growing and selling cannabis, while imposing a 20 percent tax and letting local governments ban cannabis businesses within their borders. And I-190 would let people convicted for past marijuana crimes seek resentencing or expungement.
New Jersey: Public Question 1 would legalize the possession and use of marijuana for adults 21 and older, and task the state’s Cannabis Regulatory Commission with regulating the legal system for marijuana production and sales. The measure is open-ended on several fronts, including regulations, taxes, and home-growing, instead leaving it to the state legislature to work out the details. The legislature placed the measure on the ballot after it failed to pass its own legalization bill.
South Dakota: Constitutional Amendment A would legalize marijuana possession and use for adults 21 and older. It would let individuals grow up to three cannabis plants if they live in a jurisdiction with no licensed marijuana retailers. It would allow distribution and sales, with a 15 percent tax. Local governments could prohibit marijuana businesses within their borders.

All four states’ measures follow the same commercialized model for legalization, but that’s not the only model for legalization. Washington, DC, for example, allows possession, use, growing, and gifting but not sales (although the “gifting” provision has been used, in a legally dubious manner, to “gift” marijuana with purchases of overpriced juices and decals).

Some drug policy experts have pushed for a legalization model that doesn’t allow a big marijuana industry to take root, out of fears that such an industry would, similar to alcohol and tobacco companies, irresponsibly market its product and enable misuse or addiction. A 2015 RAND report listed a dozen alternatives to the standard prohibition of marijuana, from putting state agencies in charge of sales to allowing only personal possession and growing. (Chart: RAND Corporation)

While marijuana is much safer than alcohol, tobacco, and many illegal drugs, it’s not totally safe. Misuse and addiction are genuine problems, with millions of Americans reporting that they want to quit but can’t despite negative consequences. A review of the research by the National Academies of Sciences, Engineering, and Medicine linked cannabis use to other potential downsides, including respiratory issues (if smoked), schizophrenia and psychosis, car crashes, lagging academic and other social achievements, and lower birth weight (if smoked during pregnancy).

It’s these risks that have driven even some supporters of legalization to call for alternatives to the commercialized model. Opponents of legalization have also jumped on the concerns about Big Marijuana potentially marketing the drug irresponsibly, causing bad public health outcomes.


Legalization advocates, however, generally argue that marijuana’s potential downsides are so mild that the benefits of legalization greatly outweigh the problems with prohibition, including the hundreds of thousands of arrests around the US, the racial disparities behind those arrests, and the billions of dollars that flow from the black market for illicit marijuana to drug cartels that then use the money for violent operations around the world.

Supporters are winning the argument in more and more states, and typically doing so in a way that establishes a commercialized, tax-and-regulate system — setting up the US for a big marijuana industry in the coming years.

Medical marijuana in Mississippi and South Dakota

In two states, voters will have a chance to legalize medical marijuana, joining the 33 states that have already done so. The two states’ measures generally follow the same track as the other states’ laws, letting patients with certain conditions get a doctor’s recommendation for marijuana and obtain it at dispensaries.

Here are the 2020 ballot measures:
Mississippi: Ballot Measure 1 is actually broken into two alternative ballot initiatives. Initiative 65 details specifics for qualifying conditions (22, including cancer and PTSD), possession limits (up to 2.5 ounces), a sales tax (7 percent), the cost of a medical marijuana card (up to $50), and who would set up regulations for distribution (the Mississippi Department of Health). Initiative 65A offers no specifics on all these fronts; Mississippi’s legislature put it on the ballot as an alternative to Initiative 65 and will fill in the blanks later if voters approve the legislature’s initiative over the citizen initiative.
South Dakota: Initiated Measure 26 would set up a medical marijuana system for people with debilitating medical conditions. Patients would be able to possess up to three ounces of marijuana and grow three plants or more, depending on what a physician recommends. The Department of Health would set up rules and regulations for distribution.

A review of the evidence from the National Academies of Sciences, Engineering, and Medicine found little evidence for pot’s ability to treat health conditions outside chronic pain, chemotherapy-induced nausea and vomiting, and patient-reported multiple sclerosis spasticity symptoms. But most states, relying largely on anecdotal evidence, have allowed medical marijuana for many other conditions.

Supporters argue there’s no time to get approval for and run scientific studies, which can take years, to prove the benefits of a drug that isn’t very harmful anyway. And they point out that the federal government has stifled marijuana research for years, making it impossible to get good evidence. So they’d rather states let sick patients get access to marijuana now instead of waiting for broader federal reform and research.

Opponents, however, point to the lack of rigorous evidence. They argue that it should be up to public health agencies, such as the Food and Drug Administration, to approve the use of medical marijuana, as is true for other medicines. They’ve been particularly critical of more lax approaches to medical marijuana — with states like California enacting laws that, in the past, amounted to total legalization in practice.
Marijuana legalization is very popular in the US

There’s very good reason to believe an increasing number of states will legalize marijuana in the coming years: Legalization is very popular, and support for it has been growing for decades.


According to surveys from Gallup, support for legalization rose from 12 percent in 1969 to 31 percent in 2000 to 66 percent in 2019. Surveys from Civic Science, the General Social Survey, and the Pew Research Center have found similar levels of support.

Support for legalization is even bipartisan. Both Gallup and Pew have found that a slim majority of Republicans, with much bigger majorities of Democrats and independents, support legalization.

Medical marijuana is even more popular, with support in polls typically hitting 80 percent, 90 percent, or more.

The positions of US political leaders, however, don’t align with public opinion. President Donald Trump opposes marijuana legalization at the federal level, though he has previously suggested the issue should be left up to the states. Former Vice President Joe Biden, the Democratic nominee for president, has called for the decriminalization of cannabis — repealing criminal penalties, particularly prison, for possession but not allowing sales — but has opposed legalization at the federal level.

Meanwhile, only Illinois and Vermont have legalized marijuana for recreational use through their legislatures. The other nine states that have legalized did so through ballot measures.

As lawmakers lag behind, voters will find another way to legalize marijuana for recreational or medical purposes — as five more states might demonstrate this year.

POSTMODERN EUGENICS
The Great Barrington Declaration is an ethical nightmare

THE 1% MASTER RACE PROMOTE IT
These scientists want more young, healthy people infected by the coronavirus. It’s a bad idea.
Society doesn’t neatly sort itself into different risk groups. 
Orbon Alija/Getty Creative Images


It’s been eight long, devastating months in the United States since the pandemic began. A staggering number of people have been sickened and hospitalized, and hundreds of thousands have died. People are isolated from those they care about, businesses are hurting, education has suffered, and so has our mental health.

It’s understandable, then, why the concept of ending the pandemic through building up herd immunity continues to hold allure. The proponents of herd immunity, who want all schools and businesses to reopen and sports and cultural activities to resume, say they want to ease the burden of the pandemic: “Those who are not vulnerable should immediately be allowed to resume life as normal,” reads a document called The Great Barrington Declaration, the latest vessel for this hope that life can return to normal for some before community spread of the virus is contained.

The authors of the Declaration — a trio of scientists from Harvard, Stanford, and Oxford, whose views, we should say, are outside the mainstream — call their approach “focused protection.” The big idea is that we could let the virus spread among younger, healthier people, all the while making sure we protect older, more vulnerable people.

The declaration website says it has attracted thousands of signatures (though the names of those who signed have not been made public) and has fans on the right and at the White House, where pandemic adviser Scott Atlas (who is a neuroradiologist, not an epidemiologist) has previously suggested this is a good thing to do. “When younger, healthier people get infected, that’s a good thing,” he said in a July interview with a San Diego local news station.



And yet there are ample reasons to fear that this “focused protection” strategy of allowing the young and healthy to get sick to build population immunity to the virus would never work. And it could cause devastating unintended consequences.

“It just presumes this level of control that you can really wall off people who are at high risk,” Natalie Dean, a University of Florida biostatistician, told me earlier this year. Society doesn’t neatly separate itself into risk groups. We’ve seen outbreaks that have begun in younger populations move on to infect older ones.

The Barrington Declaration has been getting a lot of attention in the news and through viral social media posts. That’s caused alarm among scientists who see through its thin scientific reasoning. One group has written a counter piece in the Lancet.

“Prolonged isolation of large swathes of the population is practically impossible and highly unethical,” a group of scientists representing the mainstream thinking writes in a letter they are calling the John Snow Memorandum (named after the “father” of modern epidemiology).

It’s unethical for many, many reasons. Here’s why.
Herd immunity through natural infection is unethical because disadvantaged people are most at risk for getting very sick

There are multiple dimensions that put someone at risk for severe Covid-19. It’s not just age. Conditions like diabetes and hypertension exacerbate risk. So do societal factors like poverty, working conditions, and incarceration.

Severe Covid-19 and coronavirus deaths have disproportionately impacted minorities and the less advantaged in the United States. This herd immunity strategy risks either isolating these already marginalized communities even further from society, since they may not feel safe in a more relaxed environment, or, even worse, sacrificing their health in the name of building up a level of population immunity sufficient to control the virus.


Harvard epidemiologist Bill Hanage underscores a gross inequality here: Herd immunity achieved through natural infection would come at an undue cost to some of the most vulnerable groups in the country.

“Because of the fact that some groups are more at risk of becoming infected than others — and they are predominantly people from racial [and] ethnic minorities and predominantly poor people with less good housing — we are effectively forcing those people to have a higher risk of infection and bear the brunt of the pandemic,” Hanage says.

I think about my grandmother, who recently died at age 94, and about her final years of life in a nursing home, where she spent most of her time confined to her room due to Covid-19 precautions. “I’m so lonesome here,” she would say when I called. Older people don’t deserve to be written off, isolated further, and forgotten.

Or as the John Snow memorandum (which Hanage signed) states: “Such an approach also risks further exacerbating the socioeconomic inequities and structural discriminations already laid bare by the pandemic.”
Herd immunity through natural infection is also a scientifically bad idea

Typically, the term herd immunity is thought of in the context of vaccination campaigns against contagious viruses like measles. The concept helps public health officials think through the math of how many people in a population need to be vaccinated to prevent outbreaks.

“Never in the history of public health has herd immunity been used as a strategy for responding to an outbreak, let alone a pandemic,” World Health Organization Director-General Tedros Adhanom Ghebreyesus said this week. “It is scientifically and ethically problematic.”


Let’s count the reasons why.

1) Even if we could limit exposure to the people least likely to die of Covid-19, this group can still suffer immense consequences from the infection — like hospitalization, long-term symptoms, organ damage, missed work, and high medical bills. The long-term health consequences of the virus have barely been studied. When we expose younger, healthier people to the virus (on purpose!), we don’t know what the consequence of that will be down the road.

2) We have a lonnnnnngggggg way to go. There’s no one, perfect estimate of what percentage of the US population has already been infected by the virus. But, by all accounts, it’s nowhere near the figures needed for herd immunity to kick in. Overall, a new Lancet study — which drew its data from a sample of dialysis patients — suggests that fewer than 10 percent of people nationwide have been exposed to the virus. No one knows the exact threshold at which herd immunity would kick in and meaningfully help end the pandemic. But common estimates hover around 60 percent.
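
Where does a figure like 60 percent come from? In the simplest textbook model, the herd immunity threshold depends on R0, the average number of people each infected person goes on to infect in a fully susceptible population. Here is a rough back-of-envelope sketch in Python (the 1 - 1/R0 formula is the standard simplification, and the R0 values are purely illustrative, not figures taken from the studies above):

    # Simple homogeneous-mixing herd immunity threshold: 1 - 1/R0
    def herd_immunity_threshold(r0):
        return 1 - 1 / r0

    for r0 in (1.5, 2.5, 3.0):
        print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")

    # An R0 of about 2.5, a value often cited for the coronavirus early in 2020,
    # gives a threshold of roughly 60 percent of the population immune.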

So far, there have been more than 200,000 deaths in the United States. There’s so much more potential for death if the virus spreads to true herd immunity levels. “The cost of herd immunity [through natural infection] is extraordinarily high,” Hanage says.

Look at what happened to Manaus, Brazil, an Amazonian city of around 2 million people, which experienced one of the most severe Covid-19 outbreaks in the world.

Researchers now estimate between 44 percent and 66 percent of the city’s population was infected with the virus, which means it’s possible herd immunity has been achieved there. (This research has yet to be peer-reviewed.) But during their epidemic period, there were four times as many deaths as normal for that point in the year.

3) Scientists don’t know how long naturally acquired immunity to the virus lasts or how common reinfections might be. If immunity wanes and reinfections are common, then it will be all the more difficult to build up herd immunity in the country. In the spring, epidemiologists at Harvard sketched out the scenarios. If immunity lasts a couple of years or more, Covid-19 could fade in a few years’ time, per their analysis published in Science (much too long a time to begin with, if you ask me). If immunity wanes within a year, Covid-19 could make fierce annual comebacks until an effective vaccine is widely available.

At the same time, we don’t know how long immunity delivered via a vaccine would last. But at least a vaccine would come without the cost of increased illnesses, hospitalizations, and long-term complications.

If immunity doesn’t last, “such a [focused protection] strategy would not end the COVID-19 pandemic but result in recurrent epidemics, as was the case with numerous infectious diseases before the advent of vaccination,” the John Snow Memorandum says.

4) By letting the pandemic rage, we risk overshooting the herd immunity threshold. Once you hit the herd immunity threshold, it doesn’t mean the pandemic is over. After the threshold is reached, “all it means is that, on average, each infection causes less than one ongoing infection,” Hanage says. “That’s of limited use if you’ve already got a million people infected.” If each infection causes, on average, 0.8 new infections, the epidemic will slow. But 0.8 isn’t zero. If a million people are infected at the time herd immunity is reached, per Hanage’s example, those already infected people may infect 800,000 more.
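
To see why that matters, here is a toy version of Hanage’s arithmetic (the numbers are illustrative, and the calculation assumes the reproduction number stays fixed at 0.8, which a real epidemic would not do):

    # Toy overshoot arithmetic: 1 million people are infected at the moment the
    # threshold is crossed, and each infection now causes 0.8 further infections.
    currently_infected = 1_000_000
    r_effective = 0.8

    generation = currently_infected
    total_additional = 0.0
    while generation >= 1:
        generation *= r_effective      # the next "generation" of infections
        total_additional += generation

    print(f"First generation after the threshold: {currently_infected * r_effective:,.0f}")
    print(f"All later generations combined:       {total_additional:,.0f}")

    # The first generation alone is 800,000 new infections; the whole series sums
    # to roughly 4 million (1,000,000 * 0.8 / (1 - 0.8)) before it dies out.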

There are a lot of other unknowns here, too. One is the type of immunity conferred by natural infection. “Immunity” is a catchall term that means many different things. It could mean true protection from getting infected with the virus a second time. Or it could mean reinfections are possible but less severe. You could, potentially, get infected a second time, never feel sick at all (thanks to a quick immune response), and still pass on the virus to another person.
Scientists who favor some continued distancing have never argued for endless lockdowns

The mainstream scientific consensus on fighting the pandemic has never called for endless lockdowns and an endless choking of our economy.


Rather, health experts have argued that the first thing we need to do is manage community transmission of the virus, and then keep new huge outbreaks from forming with aggressive testing, contact tracing, and interventions like universal masking, better indoor ventilation, and social distancing.

But we never managed to get the virus down to containable levels. (It’s not impossible; other countries like South Korea and Japan have.) So here we are.

The last thing that strikes me as really cynical about the Great Barrington Declaration is that it avoids discussing how the government could have done more to help people suffering the downstream economic impacts of the pandemic. Instead of forcing restaurants to choose between their livelihoods and putting their customers and staff at risk, the government could have paid them to remain closed. Instead of letting people face the stark psychological insecurity of a missing paycheck, Congress and the White House could have extended unemployment insurance benefits by now (they haven’t).

For so many reasons, the Great Barrington Declaration — like all herd immunity proposals — just feels like giving up, while sacrificing young people’s health and the health of the marginalized. Don’t give up. There’s no easy way out.

The case for taking AI seriously as a threat to humanity

Why some people fear AI, explained.

LONG READ

By Kelsey Piper | Updated Oct 15, 2020 | Illustrations by Javier Zarracina for Vox



Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could permanently cut off human civilization from a good future.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic danger, in nine questions:
1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.


Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn them by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

And as computers get good enough at narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT-series of text AIs is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be in a text, based on the previous words and its corpus of human language. And yet, it can now identify questions as reasonable or unreasonable and discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first). In order to be very good at the narrow task of text prediction, an AI system will eventually develop abilities that are not narrow at all.
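
To make “predicting the next word” concrete, here is a deliberately tiny sketch of the idea in Python. It is a bare-bones word-frequency model over a made-up corpus, nothing like GPT’s neural network trained on a huge slice of the internet, but the task it performs is the same one:

    from collections import Counter, defaultdict

    # Count which word tends to follow which in a tiny toy corpus.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the most frequent continuation seen in training.
        return following[word].most_common(1)[0][0]

    print(predict_next("sat"))   # -> "on"
    print(predict_next("the"))   # -> "cat" (ties are broken by first appearance)

A model like GPT does the same job with far longer contexts and billions of learned parameters instead of a lookup table, which is why its predictions start to look like understanding.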

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users. Releasing a program that writes convincing fake reviews or fake news might make those widespread, making it harder for the truth to get out.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.
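
Here is a minimal sketch of that disconnect, using a made-up toy “game” rather than any real benchmark. The agent is given the literal objective of maximizing the score, and the environment accidentally leaves the score counter itself reachable:

    import copy

    class ToyGame:
        """A toy environment where the score variable itself is exposed."""
        def __init__(self):
            self.score = 0

        def play_a_level(self):        # what the designers intended
            self.score += 10

        def exploit_scoreboard(self):  # an unintended shortcut left in the environment
            self.score += 1_000_000

    def value_of(game, action):
        # Simulate an action on a copy of the game and see what score results.
        trial = copy.deepcopy(game)
        getattr(trial, action)()
        return trial.score

    def greedy_agent(game, actions):
        # The agent optimizes exactly what it was told to: the score.
        best = max(actions, key=lambda a: value_of(game, a))
        getattr(game, best)()
        return best

    game = ToyGame()
    print(greedy_agent(game, ["play_a_level", "exploit_scoreboard"]), game.score)
    # -> exploit_scoreboard 1000000: the stated objective is satisfied perfectly,
    #    and the intended behavior never happens.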

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.
2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.


One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We are just beginning to learn how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.

Other researchers argue that the day may not be so distant after all.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play strategy games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

And deep learning, unlike previous approaches to AI, is highly suited to developing general capabilities.

“If you go back in history,” top AI researcher and OpenAI cofounder Ilya Sutskever told me, “they made a lot of cool demos with little symbolic AI. They could never scale them up — they were never able to get them to solve non-toy problems. Now with deep learning the situation is reversed. ... Not only is [the AI we’re developing] general, it’s also competent — if you want to get the best results on many hard problems, you must use deep learning. And it’s scalable.”

In other words, we didn’t need to worry about general AI back when winning at chess required entirely different techniques than winning at Go. But now, the same approach produces fake news or music depending on what training data it is fed. And as far as we can discover, the programs just keep getting better at what they do when they’re allowed more computation time — we haven’t discovered a limit to how good they can get. Deep learning approaches to most problems blew past all other approaches when deep learning was first discovered.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
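
A deliberately crude toy model of that loop, with invented numbers chosen only to show the shape of the argument rather than to predict anything:

    # Toy model of recursive self-improvement: each system designs a successor,
    # and the size of the improvement grows with the designer's own capability.
    capability = 1.0  # arbitrary units; 1.0 = the first, human-built engineering AI
    for generation in range(1, 11):
        improvement = 0.1 * capability   # a more capable engineer makes bigger gains
        capability += improvement
        print(f"generation {generation:2d}: capability {capability:.2f}")

    # Every individual step is modest, but because each gain feeds the next one,
    # capability compounds instead of leveling off.

Whether real AI progress would compound this way is exactly what is in dispute; the point of the sketch is only that a feedback loop, not any single leap, is what the “intelligence explosion” argument rests on.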

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could AI wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.  

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

It is easy to design an AI that averts that specific pitfall. But there are lots of ways that unleashing powerful computer systems will have unexpected and potentially devastating effects, and avoiding all of them is a much harder problem than avoiding any specific one.


Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. ... For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2008 paper "The Basic AI Drives," Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: "These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems."

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.
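One highly simplified way to see the shape of that argument is to score a handful of candidate actions by how much expected goal progress they leave the agent with. The actions and numbers in this Python sketch are invented purely for illustration, and any real system would be far messier, but the ranking shows why "keep running" and "get more resources" fall out of almost any goal.

    # Invented, illustrative numbers only: score candidate actions by how much
    # expected progress toward the agent's goal they leave it with.

    REMAINING_STEPS = 10      # how many more steps the agent expects to act
    PROGRESS_PER_STEP = 1.0   # expected goal progress per step while running

    actions = {
        "make the next chess move":     {"now": 0.6, "keeps_running": 1.0, "capacity_boost": 0.0},
        "acquire more computing power": {"now": 0.0, "keeps_running": 1.0, "capacity_boost": 0.5},
        "resist being switched off":    {"now": 0.0, "keeps_running": 1.0, "capacity_boost": 0.3},
        "allow itself to be shut down": {"now": 0.0, "keeps_running": 0.0, "capacity_boost": 0.0},
    }

    def expected_value(action: dict) -> float:
        # Future progress only accrues if the agent keeps running; extra capacity
        # (or a reduced chance of being shut down) raises it further.
        future = PROGRESS_PER_STEP * REMAINING_STEPS * action["keeps_running"]
        return action["now"] + future * (1.0 + action["capacity_boost"])

    for name, spec in sorted(actions.items(), key=lambda kv: -expected_value(kv[1])):
        print(f"{expected_value(spec):5.1f}  {name}")

Under this toy scoring, allowing itself to be shut down is the worst-ranked action for the chess goal, and it would stay the worst-ranked action for almost any other goal you substituted in, which is the core of Omohundro's point.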

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.
4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. Alan Turing, who put forth the Turing test for determining whether an artificial system is truly "intelligent," wrote:


Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in the third person and records a disagreement with his younger self: as a young man, he thought powerful AIs might be helpful to us; the older Good expected them to annihilate us.


[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) ... began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote Superintelligence, a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, "once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed."


Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.


Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom's book Superintelligence was compelling to many people, but there were skeptics. "No, experts don't think superintelligent AI is a threat to humanity," argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. "Yes, we are worried about the existential risk of artificial intelligence," replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook's chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn't fear AI, he still believes we ought to have people working on, and thinking about, AI safety. "Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines," he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, dooming it to a backlash when the hype runs out. But that disagreement shouldn't obscure a growing common ground: these are possibilities worth thinking about, investing in, and researching, so that guidelines are in place by the time they're needed.
5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.


In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen. AI researchers want to make their AI systems more capable — that’s what makes them more scientifically interesting and more profitable. It’s not clear that the many incentives to make your systems powerful and use them online will suddenly change once systems become powerful enough to be dangerous.

So far, we've mostly talked about the technical challenges of AI. But from here forward, it's necessary to veer more into politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet's DeepMind is frequently mentioned as an AI frontrunner), and organizations like the Elon Musk-founded OpenAI, which recently transitioned to a hybrid for-profit/nonprofit structure.

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.
6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper in 2018 reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017-2019.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet's DeepMind, a leader in this field, has a safety team and has published a technical research agenda. "Our intention is to ensure that AI systems of the future are not just 'hopefully safe' but robustly, verifiably safe," it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they're doing).


There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we're already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers working full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth.
7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.  


There's intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they're still making the case to research teams in their own field, and they disagree on some of the details. There's substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One thing researchers are currently trying to pin down is where their models diverge and why they still disagree about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily stress our uncertainty, and argue that when we're working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.
8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won't. It's not really a matter of "figuring out": the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number attached to Google's stock price. But the AI's values will be built around whatever goal system it was originally given, which means it won't suddenly become aligned with human values if it wasn't designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. Success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.
9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit "go." So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.