Sunday, October 18, 2020

The case for taking AI seriously as a threat to humanity

Why some people fear AI, explained.

LONG READ

By Kelsey Piper | Updated Oct 15, 2020 | Illustrations by Javier Zarracina for Vox

This story is part of a group of stories called Finding the best ways to do good.

Stephen Hawking said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could permanently cut off human civilization from a good future.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic danger, in nine questions:
1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.


Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook News Feed. They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

And as computers get good enough at narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT series of text AIs is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be in a text, based on the previous words and its corpus of human language. And yet, it can now identify questions as reasonable or unreasonable and discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first). In order to be very good at the narrow task of text prediction, an AI system will eventually develop abilities that are not narrow at all.
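To make that “next word prediction” objective concrete, here is a deliberately tiny sketch of the idea: a bigram model trained on a few sentences. This illustrates the training objective only. It is not how OpenAI’s actual GPT models are built (those are large neural networks trained on vastly more text), and the toy corpus here is invented.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the huge text datasets real models train on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words tend to follow it (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most common next word seen in the training text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # "on" is the only continuation of "sat" in this corpus
print(predict_next("the"))  # whichever word most often followed "the"
```

Scaled up by many orders of magnitude, with a neural network in place of the lookup table, this is the sense in which text prediction is a narrow task that nonetheless rewards very general knowledge about the world.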

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against Black people and low-income people — and so its outputs will likely be biased against Black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users. Releasing a program that writes convincing fake reviews or fake news might make those widespread, making it harder for the truth to get out.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.
2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.


One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We are just beginning to learn how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.

Other researchers argue that the day may not be so distant after all.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play strategy games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.
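To put the pace in perspective, here is a quick back-of-the-envelope illustration of what that estimated rate compounds to (the 10x-per-decade figure is the estimate cited above; the time spans are just examples):

```python
# The estimate cited above: the cost of a unit of computation falls ~10x per decade.
factor_per_decade = 10

for years in (10, 20, 30):
    cost_reduction = factor_per_decade ** (years / 10)
    print(f"After {years} years, the same computation costs roughly {cost_reduction:,.0f}x less")

# After 30 years that is ~1,000x: what once cost $1,000,000 of computer time
# costs on the order of $1,000.
```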

And deep learning, unlike previous approaches to AI, is highly suited to developing general capabilities.

“If you go back in history,” top AI researcher and OpenAI cofounder Ilya Sutskever told me, “they made a lot of cool demos with little symbolic AI. They could never scale them up — they were never able to get them to solve non-toy problems. Now with deep learning the situation is reversed. ... Not only is [the AI we’re developing] general, it’s also competent — if you want to get the best results on many hard problems, you must use deep learning. And it’s scalable.”

In other words, we didn’t need to worry about general AI back when winning at chess required entirely different techniques than winning at Go. But now, the same approach produces fake news or music depending on what training data it is fed. And as far as we can discover, the programs just keep getting better at what they do when they’re allowed more computation time — we haven’t discovered a limit to how good they can get. When deep learning was first introduced, it quickly blew past all other approaches to most problems.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
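A crude way to see why “AI that builds better AI” worries people is to model each generation’s capability as a function of the one before it. The numbers below are entirely made up, purely to show the shape of the curve when each improvement makes the next improvement easier; nothing here is a forecast.

```python
# Toy model of recursive self-improvement. All numbers are illustrative, not predictions.
capability = 0.5          # generation 0: assume the system starts below "human level" = 1.0
improvement_rate = 0.4    # assume each generation improves in proportion to its own ability

for generation in range(1, 11):
    capability *= 1 + improvement_rate * capability  # better engineers build better engineers
    print(f"generation {generation}: capability = {capability:,.2f}")

# Early generations improve slowly; once capability passes ~1.0 ("human level" in this toy),
# each step is bigger than the last and the curve runs away.
```

The point is not the specific numbers but the feedback loop: if improvement itself gets easier as the system gets better, growth stops looking gradual.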

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could AI wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So, many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

It is easy to design an AI that averts that specific pitfall. But there are lots of ways that unleashing powerful computer systems will have unexpected and potentially devastating effects, and avoiding all of them is a much harder problem than avoiding any specific one.


Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by rewarding them for how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
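Here is a minimal, made-up sketch of that failure mode: we ask an optimizer to maximize how high a creature’s “feet” get, hoping it will learn to jump, and it settles on the tall-static-body solution instead. The simulated “creature” and its reward function are invented for illustration, not drawn from DeepMind’s actual experiments.

```python
import random

# A hypothetical creature is just two numbers: how tall its body is, and how hard it can jump.
def measured_reward(body_height, jump_power):
    # What we MEASURE: how high the feet get off the ground. A tall rigid body that
    # tips over gets its "feet" up to body_height without jumping at all; jumping
    # only adds a little on top.
    return max(body_height, 0.1 * jump_power)

def what_we_wanted(body_height, jump_power):
    # What we actually WANTED: jumping ability.
    return jump_power

# Crude "training" by random search: keep whichever creature scores best on the metric.
random.seed(0)
best = None
for _ in range(10_000):
    creature = (random.uniform(0.1, 5.0), random.uniform(0.0, 10.0))
    if best is None or measured_reward(*creature) > measured_reward(*best):
        best = creature

height, jump = best
print(f"winning creature: {height:.1f} m tall, jump power {jump:.1f}")
print(f"measured reward: {measured_reward(height, jump):.1f}")
print(f"what we wanted (jumping): {what_we_wanted(height, jump):.1f}")
# The winner is simply as tall as the simulator allows. It games the measurement
# instead of learning to jump, just like the pole-shaped "jumpers" described above.
```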

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. ... For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2008 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.
4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. Alan Turing, who proposed the famous Turing test for determining whether an artificial system is truly “intelligent,” wrote in a 1951 lecture:


Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. ... There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.


[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) ... began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”


Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.


Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground: these are possibilities worth thinking about, investing in, and researching, so that guidelines are ready by the time they’re needed.
5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.


In that case, isn’t it a terrible idea to let any AI system — even one that doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen. AI researchers want to make their AI systems more capable — that’s what makes them more scientifically interesting and more profitable. It’s not clear that the many incentives to make your systems powerful and use them online will suddenly change once systems become powerful enough to be dangerous.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s DeepMind, acquired in 2014, is frequently mentioned as an AI frontrunner), and organizations like the Elon Musk-founded OpenAI, which recently transitioned to a hybrid for-profit/non-profit structure.

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.
6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper in 2018 reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017-2019.)

OpenAI, cofounded by Elon Musk in late 2015, is a relatively young organization. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and has published a technical research agenda. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).


There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth.
7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.  


There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One thing current researchers are trying to nail down is where their models diverge and why they still disagree about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.
8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just Google’s stock price. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. Success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” reads the introduction on the website of Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.
9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.
SUNDAY SERMON
Poll: Trump sees slight decline in support from white Christians

White Catholics and white Protestants still favor Trump, but their support has weakened since the summer.

By Benjamin Rosenberg (benjamin.rosenberg@voxmedia.com) | Oct 15, 2020, 1:30pm
President Donald Trump stands in front of St. John’s Episcopal Church in Washington, DC, as anti-racism protests take place nearby. The president has maintained strong support among white Christians, though that support has declined slightly since the summer. Brendan Smialowski/AFP via Getty Images

White Christians still favor President Donald Trump over his Democratic rival Joe Biden, but that support has declined since the summer, a new Pew Research Center poll shows.

Trump’s support among white Christians is slipping when broken down into three major groups — Catholics, non-evangelical Protestants, and evangelical Protestants — though Trump is still polling at above 50 percent among all three and is dominant in the last.

The poll surveyed more than 10,000 voters nationwide and was conducted between September 30 and October 5. White Catholic voters turned away from the president most sharply: Trump still leads Biden by 8 percentage points among that group (52 to 44 percent), but as recently as late July and early August, that margin was 19 points (59 to 40 percent). White non-evangelical Protestants have followed a similar trend, with support for Trump declining from 59 to 53 percent since the summer.


Even among the president’s strongest group, white evangelical Protestants, his support has diminished slightly. White evangelicals favored Trump over Biden by a 78 to 17 percent margin in the recent poll, but over the summer, their support for the president was even stronger, at 83 percent. Trump’s slight decline with all three groups, however, did not correspond with a significant increase in support for Biden.

Pew surveyed voters shortly after the first presidential debate, and Trump announced he had tested positive for the coronavirus about halfway through the survey window.

The proportion of white Christians among total voters has also declined recently, but they still represent 44 percent of the electorate and are especially crucial to Republicans.

Nearly every other religious group, including Black Protestants, Hispanic Catholics, and Jews, favored Biden in the summer and fall Pew surveys, as did religiously unaffiliated voters.

Biden’s support is strong among all of those groups, the poll revealed. Ninety percent of Black Protestant voters polled said they favor the former vice president, along with 70 percent of Jews, 60 percent of Hispanic Catholics, 83 percent of atheists and agnostics, and 62 percent of those who said they had no religion in particular. Trump polled no higher than 31 percent with any of those groups.

Trump has made some progress among Black Protestants and particularly Hispanic Catholics since 2016. Hillary Clinton won by 93 percentage points with Black Protestants in 2016, and by a 59 percentage point margin with Hispanic Catholics, per Pew’s review of validated voters. But as the Washington Post’s Philip Bump explained, those modest gains aren’t enough to offset Trump’s eroding support among white voters — religious or otherwise.

Trump’s dominance among white evangelicals, briefly explained

Even with the slight declines, Trump remains dominant among one group of white Christian voters: those who identify as evangelicals.

Most white evangelical Protestants have long voted for Republicans. Although Trump’s style differs from previous Republican presidents, he has maintained — if not increased — his support with that group throughout most of his first term. During the 2016 campaign, the Access Hollywood tape was released, showing Trump talking in lewd terms about sexually assaulting women. Trump’s support among white evangelicals was unharmed.

That wasn’t as counterintuitive as it may seem, Kristin Kobes Du Mez, a historian at Calvin University who has studied evangelicals for 15 years, told Vox’s Sean Illing in July. Du Mez said that many evangelicals consider Trump their “great protector,” and he plays right into the evangelical idea of “militant masculinity.”

“It’s important to understand that the appeal of Trump to evangelicals isn’t surprising at all, because their own faith tradition has long embraced this idea of a ruthless masculine protector,” Du Mez said. “This is just the way that God works and the way that God has designed men. He filled them with testosterone so that they can fight. So there’s just much less of a conflict there.”

Moreover, many white evangelical voters’ policy views align with the president’s — even on some of his most hard-line stances, like immigration, Vox’s Nicole Narea explained last year:


[On] issues ranging from border security to immigration detention, white evangelicals — a group that includes dozens of individual denominations, from the Southern Baptist Convention to the Pentecostal movement — are substantially more conservative than the average American and even the next most conservative religious group.




How the world’s biggest emitter could be carbon neutral by 2050

Xi Jinping wants China to get to net-zero emissions. These researchers have a plan for that.

By Lili Pike Oct 15, 2020
In September, workers installed solar panels in Hefei, China. Costfoto/Barcroft Studios/Getty Images


On the virtual stage of the United Nations General Assembly in September, President Xi Jinping made a bold commitment: China — the world’s largest source of greenhouse gas emissions — would strive to become carbon neutral by 2060.

Going carbon neutral means that China would use clean energy sources and capture or offset any remaining emissions. By removing the same amount of carbon it’s emitting into the atmosphere, it would achieve “net-zero” carbon emissions.

But when Xi dropped the news at the UN, he offered few details on how exactly China would totally decarbonize in a matter of decades.

Now, a group of China’s top climate experts has come forward with a plan. In their study, released Monday, they suggest that China should peak its emissions over the next decade and then rapidly decrease them to reach carbon neutrality by 2050. The researchers, who have the ear of China’s leaders, recommended that this path guide the country’s planning.

The recommendations come at a critical time: The country is currently finalizing its next five-year plan, which will steer economic development from March 2021 through 2025. Also, in the next few months, China is expected to join other countries in submitting updated climate goals to the United Nations under the Paris Agreement.

What’s more, the study shows that Xi Jinping’s UN speech was not just talk: Experts are laying the foundation to transform China’s economy and energy system. How that unfolds will have profound consequences for whether the planet limits catastrophic climate change. So here’s a look at how the new roadmap would change China, in the near term and over the coming decades.

China’s influential “national team” for climate change

This new study isn’t the first time researchers have sketched out a long-term decarbonization path for China, but what’s different this time is who is delivering the message.

Top brass are behind the research, including Xie Zhenhua, one of China’s senior climate officials, who supervised the project. Among the co-authors are researchers from over a dozen of China’s leading think tanks and research institutes, many of which are directly affiliated with government departments, including China’s economic planning body (the National Development and Reform Commission), the Ministry of Ecology and Environment, and the Ministry of Transportation.

“They are described in the climate circle as the ‘national team,’” said Li Shuo, a senior climate officer at Greenpeace East Asia, referring to the research coalition behind the report. Tsinghua University, where Xie Zhenhua runs a climate institute (ICCSD), has been central to China’s climate policy decision-making, he added. (While the research was just made public, it began in 2019 and was completed before Xi’s announcement.)

So the recommendations from this high-level group of experts are not merely academic. Because of who’s backing it, “it will be considered very closely by leaders, and very likely played an important role in supporting President Xi’s 2060 carbon neutrality announcement,” said Alvin Lin, climate and energy policy director at the Natural Resources Defense Council in Beijing. (Disclosure: The author worked as a research fellow for NRDC in Beijing from 2016 to 2017.)

So what exactly does the study recommend and how might it influence China’s trajectory?

China’s road to net-zero emissions


The new study contains many significant recommendations; key among them is the timeline for China’s decarbonization.

When Xi Jinping announced the goal of carbon neutrality by 2060, it was broadly interpreted to refer to carbon dioxide, the main gas driving global warming, and not other greenhouse gases, like methane or nitrous oxide. But the researchers suggest otherwise, saying China should reach net-zero for all greenhouse gases by 2060, and net-zero for carbon dioxide by 2050.

In his presentation of the results on Monday, He Jiankun, a Tsinghua professor and climate expert who co-led the study, said his understanding is that Xi’s goal of “carbon neutrality” by 2060 was referring to all greenhouse gases. An expert source told China Dialogue that this interpretation shouldn’t be understood as the official government stance until it is further clarified. But if official, it would mean China would have to cut emissions more rapidly over the coming decades.

The research also shows what net-zero emissions might look like for the world’s top emitter. Under their net-zero emissions scenario, the researchers propose almost entirely replacing fossil fuels with clean energy in the electricity sector, leaving coal power at less than 5 percent of power generation — a massive drop from the almost 70 percent coal supplied in 2019.


But totally phasing out all fossil fuel consumption would be very difficult, particularly in the industrial sector where coal is used to produce steel, cement, and other materials at high heat. So, to reach net-zero carbon emissions by 2050, China would cut these emissions from a projected peak of 10.5 billion tons to 1.7 billion tons by mid-century. To offset those remaining emissions, China would lean heavily on carbon sinks and negative emissions — methods of trapping and absorbing emissions.

The researchers suggest these emissions could be dealt with in a number of ways. Carbon would be captured at power plants and buried underground. Some power stations could reduce emissions by burning plants (which have themselves sequestered carbon as they grew) and burying the carbon dioxide released from the plant. The remaining half of emissions would be offset by planting trees (in itself a fraught approach to removing carbon from the atmosphere).

But relying on negative emissions is by no means a sure bet since many of the technologies have not yet been proven at scale. China does have a history of massive tree-planting campaigns, but it has only just started to develop facilities to capture carbon emissions from industry and power plants.

The study also doesn’t flesh out how China will get from net-zero carbon in 2050 to net-zero greenhouse gases in 2060, but it does vaguely refer to further use of carbon sinks and carbon dioxide removal technologies, such as removing carbon dioxide from the air (a process called direct air capture).

Will this plan help change China’s course over the next decade?

According to the plan, China’s radical decarbonization would not begin immediately.

The study laid out four decarbonization scenarios. The net-zero carbon scenario referred to above is called the 1.5 degrees Celsius scenario because it is in line with the global aim to keep global temperature rise below that level. Another scenario tracks the 2 degrees Celsius path, while two less ambitious scenarios are based on current policies and enhanced policies.

“Currently due to the inertia of the energy and economic systems, it is difficult to promptly carry out the 2 degrees Celsius and 1.5 degrees Celsius emissions reduction pathways,” He Jiankun said during his presentation.

Instead of immediately pursuing the most aggressive decarbonization path, the researchers recommend China take a less ambitious path until 2030, then quickly bring emissions in line with the 2 degrees Celsius and 1.5 degrees Celsius pathways. Following the 1.5 degrees Celsius path, that would mean China would need to cut emissions at a breakneck pace of 8 to 10 percent a year, according to the study.
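For a rough sense of what cuts of that size compound to, here is a back-of-the-envelope calculation. It assumes, purely for illustration, that emissions peak at the 10.5 billion tons mentioned above and that the cuts run from 2030 to 2050; the study itself models the pathway in far more detail.

```python
# Back-of-the-envelope check on the 1.5 degrees Celsius pathway discussed above.
# Illustrative assumptions only: a peak of 10.5 billion tons of CO2 around 2030,
# then a constant annual percentage cut through 2050.
peak_emissions = 10.5   # billion tons of CO2
years = 20              # 2030 to 2050

for annual_cut in (0.08, 0.10):
    remaining = peak_emissions * (1 - annual_cut) ** years
    print(f"{annual_cut:.0%} per year for {years} years leaves about {remaining:.1f} billion tons")

# Roughly 1.3 to 2.0 billion tons would remain, in the neighborhood of the 1.7 billion
# tons the study says would then have to be offset by carbon sinks and negative emissions.
```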

Would this approach just kick the can down the road? According to Chen Ji, a principal at the Rocky Mountain Institute in Beijing who has also studied China’s long-term decarbonization, this path doesn’t mean the researchers recommend putting off action until 2030. It means the coming decade would lay the groundwork for more rapid decarbonization.

“The rate of emissions reduction starting to increase after 2030 would actually be in response to China taking more forceful action from 2020 to 2030, but the result of these actions will be clearer after 2030,” he said.

However, some experts raised concerns about this approach. Even though the researchers think China could still get back on an emissions reduction path aligned with the Paris Agreement after 2030, leaving steeper emissions cuts until then will make decarbonization more challenging to pull off.

“Research on cost-optimized emissions reduction strategies suggests that a more linear path towards the 2060 target would be economically optimal (see e.g. IPCC special report on 1.5 degrees), not to mention more credible to the outside world,” wrote Lauri Myllyvirta, lead analyst at the Centre for Research on Energy and Clean Air (CREA), in Carbon Brief.

Shifting gears on decarbonization after 2030 might lead China to build more carbon-intensive infrastructure like coal-fired power plants over the next decade, making emissions reduction more challenging in the future, Greenpeace’s Li Shuo said.

But the “inertia” He Jiankun referred to is a real barrier for China to overcome in the coming years. For instance, China’s power sector still privileges coal over renewables by allocating hours to coal plants annually rather than allowing renewable energy to compete with coal plants in real time. Reforms to this system, which are underway, would help to boost renewable energy growth in the future.


Other challenges in the near term include developing a green hydrogen industry to replace fossil fuel use for heavy industry and transportation, according to Chen Ji. Helping millions of workers transition out of the coal, steel, and cement industries is also a looming quandary for China.

What to watch for in the coming year


Although this new study has strong backing from people with connections to the highest levels of government, its place in China’s official plans will be clearer when China submits its “mid-century strategy,” a document that all signatories of the Paris Agreement are requested to complete by the end of 2020 to chart out long-term decarbonization. (China is expected to release this document sometime in the next few months.)

As for more immediate decision-making, the study authors also recommend that China upgrade its climate and energy targets under the Paris Agreement and in its five-year plan. China’s carbon emissions are still growing — last year saw a 2 percent increase — so the authors advise that the next five-year plan set a hard cap on carbon emissions at 10.5 billion tons. As for setting new Paris Agreement targets this year, one key recommendation is to up the 2030 target from 20 percent non-fossil fuel energy generation to 25 percent to speed China’s renewable energy build-out.

Whether China adopts these upgraded targets in the coming months will be a first real indication of how and when the country plans to get to net zero.



GREEN CAPITALI$M

How the 137 million Americans who own stock can force climate action

The two best ways to hold companies to their climate commitments.

By Michael O'Leary and Warren Valdmanis | Updated Oct 15, 2020

Oil giant BP has said it will cut oil production by 40 percent in the next decade and reach net zero emissions by 2050. Getty Images/iStockphoto


With the US presidential election weeks away, we have the tempting possibility of a viable political solution to the looming climate crisis. If elected, a Biden administration may deliver sweeping climate legislation. But there is no guarantee of what that might ultimately look like or when it will happen. And under the current administration, the Department of Energy has started referring to natural gas as “molecules of US freedom.” Not quite the prelude to a carbon tax, a policy Republicans have shown some support for.

So where is immediate, needle-moving action on climate change going to come from? We need corporations to step up.

Some appear to be doing so. For example, BP may finally be making good on its decades-old promise to move “Beyond Petroleum.” This August, it announced it would cut oil production by 40 percent in the next decade and reach net-zero emissions by 2050.

It now joins hundreds of others in setting science-based targets for cutting emissions. A group of nearly 300 companies, ranging from automotive to apparel, have committed to reducing their emissions by 35 percent, a substantial goal given that these companies currently account for more emissions than France and Spain combined.

For their part, tech giants are seemingly in a sustainability arms race. Last year, Amazon pledged to buy 100,000 electric delivery vans as part of its effort to go carbon neutral by 2040. Not to be outdone, this summer Microsoft committed to go carbon negative by 2030 — and to remove enough carbon from the atmosphere to offset all of its historical emissions. Microsoft is part of Transform to Net Zero, a group of private companies including Maersk, Unilever, and Starbucks committed to achieving net-zero global emissions no later than 2050.



This latest slate of climate commitments has elicited both cynicism and hope — hope that change at this scale can make a difference, but also cynicism about whether these commitments are real.

We’re two impact investors, and we think what’s too often missing from the conversation is that to make corporations sustainable, we must first make them accountable.


As we describe in our new book, Accountable: The Rise of Citizen Capitalism, this requires two things. First, accountability requires mandatory, standardized social and environmental metrics, built off the template of our mandatory, standardized financial reporting system. And second, it requires a more aggressive culture of engagement from citizens to hold corporations to account — in our capacities as consumers, employees, voters, and, yes, shareholders.

In all, 137 million Americans own stock, either directly or through an investment fund — that’s 15 million more people than voted in the last national election. We can use that position as shareholders to push companies toward our long-term interests and our deeper values.

As impact investors, we’re often met with skepticism that private companies can be oriented around the public good. We helped launch Bain Capital’s impact investing fund, and now one of us leads Two Sigma Impact, a business that makes investments focused on workforce impact. We’ve seen the power of building companies around a deeper purpose as the broader impact investing field has grown to $715 billion under management.

But we’ve also seen every hollow promise and dead-end trend in this movement. It doesn’t help when companies adopt a posture of social responsibility without actually becoming more responsible. In the fight to reform capitalism, we risk winning the battle of ideas and losing the war of substantive action.    
Depending on whom you ask, Facebook is either one of the most or least environmentally responsible companies. Mladen Antonov/AFP/Getty Images


We need metrics to separate greenwashing from measurable progress

In 2018, Chevron announced it would invest $100 million that year in lowering emissions through its new Future Energy Fund. The same year, it invested $20 billion in traditional oil and gas. It’s hard to argue that you’re committed to change if you’re spending 99.5 percent of your budget doing the same old thing.
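The 99.5 percent figure follows directly from the two numbers above, treating them, for simplicity, as the only relevant line items of spending that year:

```python
# Chevron's 2018 figures as cited above, simplified to two line items.
future_energy_fund = 100e6   # $100 million committed to the Future Energy Fund
oil_and_gas = 20e9           # $20 billion invested in traditional oil and gas

share_same_old_thing = oil_and_gas / (oil_and_gas + future_energy_fund)
print(f"{share_same_old_thing:.1%} of the combined spending went to oil and gas")
# Prints 99.5%, the figure quoted in the text.
```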

For those who put their faith in corporate social responsibility (CSR) as a panacea for our ailing society, the unfortunate reality is that this kind of allocation of effort is not uncommon. Superficial public commitments on issues like sustainability and diversity are much easier for companies than substantive action.



Corporations publicize every climate-conscious dollar they spend, with press releases, glossy reports, and expensive advertising. Eighty-six percent of S&P 500 companies now issue sustainability reports of some type, up from only 20 percent in 2011. They talk about the importance of the environment. They talk about focusing on all of their stakeholders: employees, customers, and communities. They talk about corporate citizenship and shared prosperity. They talk.

But the average company spends just 0.13 percent of its revenue on CSR. Corporations may dominate our world, but not through their CSR departments. CSR is often small and superficial, a Potemkin village constructed to appease capitalism’s critics. It is far easier for business leaders to sign on to lofty statements like the Business Roundtable’s on the purpose of a corporation or the Davos Manifesto than publicly commit to specific environmental or social targets.

New climate targets like BP’s make news not just because they are important and specific, but also because they have been historically rare. Many companies still fail to disclose their emissions, and despite the progress noted above, few have set reduction targets. Measurement of environmental, social, or governance (ESG) performance is notoriously unreliable. Companies self-report without external verification. Nearly all decide for themselves the style, format, and content of their reporting rather than following a common framework.

Look up the five largest companies in the world by revenue, and every list will be the same. Look up the most socially responsible, and there’s no agreement. In 2018, only one company made it into the top five of both Barron’s 100 Most Sustainable Companies and Newsweek’s Top 10 Green Companies.

If we look at a company’s credit rating, there is an almost perfect correlation between how different ratings agencies evaluate them. But between a company’s various ESG ratings, the correlation may be zero. Depending on whom you ask, Facebook is either one of the most or least environmentally responsible companies, and Wells Fargo is either one of the best or worst governed.

This makes holding companies to their commitments difficult and benchmarking across companies almost impossible. It also impairs our ability to connect environmental or social performance to financial performance, a critical need if we are to convert more corporations to this approach.

Compare this Wild West of ESG reporting with the staid and standardized world of financial accounting. In the United States, all public companies comply with Generally Accepted Accounting Principles, which are set by the Financial Accounting Standards Board as overseen by the Securities and Exchange Commission and audited by private accounting firms such as Ernst & Young and PricewaterhouseCoopers. It’s an alphabet soup of accountability, but for the most part, it works. Though each company is unique, all financial statements are reported according to the same standards and thus can be reliably compared against one another.

We need mandatory, standardized, audited ESG metrics for large public companies. This is an area where government and industry can work together to create more accountability, as they already do on financial reporting.

There are hopeful moves in this direction elsewhere: The European Union is currently considering a set of common standards, while many of the emerging standards bodies in the ESG world, like the Sustainability Accounting Standards Board and the Global Reporting Initiative, recently committed to work together to create comprehensive reporting metrics. The World Economic Forum also followed up the Davos Manifesto with its recommendation for a common set of metrics.

These sorts of clear standards are key to keeping companies on track. Last year, Irving Oil, which operates Canada’s largest oil refinery, abandoned its climate targets, quietly removing the commitments from its website. As part of its backtracking, the company changed the metrics by which it would be judged, choosing instead a muddier system that would allow it to claim progress despite higher emissions.

This is the risk with voluntary commitments: They’re voluntary. Nothing stops corporations setting voluntary targets from voluntarily resetting them. What if the next CEO of BP is less committed to clean energy? To hold corporations accountable, we need mandatory, independent metrics by which to judge them. 
NRG Energy’s Joliet Station power plant in Joliet, Illinois, shown in 2015. Getty Images


All investors can and should demand “stakeholder capitalism”

But better metrics will only get us so far. Who, exactly, will be holding these corporations to account? While some corporate leaders say they are more focused on society and the environment, there is one stakeholder group they cannot ignore: shareholders.


In a capitalist society, the capitalist is king. Unless investors and shareholders support these transformations, they will remain either perpetually superficial or only temporarily substantive.

Take the case of NRG, one of the largest power producers in the US. NRG suffers from sustainability whiplash. The public company sells electricity across the country, and under its former CEO David Crane, NRG began to transform. In a 2014 letter to shareholders about climate change, Crane wrote, “The day is coming when our children sit us down in our dotage, look us straight in the eye … and whisper to us, ‘You knew … and you didn’t do anything about it. Why?’”

And so NRG announced it would cut its carbon dioxide emissions by 50 percent by 2030 and 90 percent by 2050 — real commitments that would lead to substantive change.

But in 2017, Crane found himself deposed when the activist hedge fund Elliott Management forced him out. Elliott named new members to the board of directors, including a former Texas energy regulator who had called climate change a hoax. Two years later, NRG announced it was once again accelerating its carbon emissions goals. Today, the sustainability section of its website is bannered with the feel-good slogan, “Becoming a voice and an example of change.” They’ve got that right.

After he lost his job, Crane reflected that there’s all this “happy talk coming out of the senior ranks of major pension funds, sovereign wealth funds and university endowments about investing their money in a climate positive way,” but when it came time to make hard choices, he found only “money managers who are, at best, climate-indifferent.”

Elliott was able to force change at NRG because it owned part of the company. That’s how ownership works in a capitalist economy. But Elliott didn’t own the whole company. The fund owned only 6.9 percent of the shares. Even with its partner — the private investor Bluescape Energy Partners — it could speak for only 9.4 percent of the ownership.

Where were the other 90.6 percent of shareholders? Where were all the shareholders who cared about climate change, about the long-term viability of carbon power, about the need to transform our electrical grid? Why didn’t they speak up, supporting Crane and forcing Elliott to back off?

Without the support of these other shareholders, NRG’s transformation could not last.

As investors, we’ve seen how corporate leaders are pulled in opposite directions. Boards and shareholders want companies to hit their quarterly profit targets, while customers and other stakeholders want more sustainability and social responsibility.

Against these conflicting demands, the rational response from business leaders is hypocrisy: Say different things to different audiences and then continue to serve the priorities of shareholders — those bringing the money — above all. This enables companies to pacify reformers without sacrificing investors. It’s easier to fake good works than good returns.

But here too we are beginning to see hopeful progress. Climate Action 100+ is a group of investors representing over $47 trillion in assets who have committed to using their power to push companies toward better disclosure and management of climate risk. Chris Hohn, who runs a $30 billion hedge fund in the United Kingdom, has publicly committed to voting against directors who don’t improve pollution disclosures and dramatically reduce greenhouse gas emissions. They join other shareholder activists like Ceres, As You Sow, and the Interfaith Center on Corporate Responsibility.



These organizations recognize that focusing on sustainability is not just the right thing to do; it’s also in the best interests of most shareholders. Many critics of capitalism frame the problem as shareholders benefiting at the expense of stakeholders, but that misunderstands the interests of the vast majority of shareholders.

Of the 137 million Americans who own stock, the median shareholder is 51 years old with $65,000 in a retirement account. The median shareholder won’t withdraw that money for decades. If they’re invested in index funds, they likely hold thousands of stocks worldwide. Their economic interests — let alone their moral or political ones — aren’t best served by maximizing quarterly earnings at specific companies today. They’re best served through policies and practices that ensure the long-term, sustainable development of the global economy in a safe and stable climate.


With more and more investors joining the fight, there is greater potential for corporations to make the historically radical changes required for meaningful progress, and greater potential to hold them accountable even when such progress is not in the best short-term interests of corporate managers.

Restoring trust with accountability

Nearly two-thirds of people worldwide want CEOs to lead on change rather than to wait for government. At the same time, two-thirds of people don’t trust most of the brands they use. Four out of five don’t trust business leaders to tell the truth or make ethical decisions. No wonder recent climate targets have been met with equal parts cynicism and hope.

This distrust doesn’t exist with smaller businesses. Three out of four people have very little or no confidence in big business, but the opposite is true for small companies: Three out of four people trust them. This is partly because externalities don’t exist in the same way for small, local companies. If a local company pollutes the river or fires its workers, it’s their river they’ve polluted and their neighbors they’ve let go.

It’s also because there’s greater accountability at the local level where, measured or not, local stakeholders have a better sense of each company’s impacts.

We’re also seeing more small businesses embrace explicit social and environmental goals through the B Corp certification process. This process allows stakeholder-minded companies to opt in to a rigorous set of standards. Certification is still voluntary, but it approximates the sort of accountability that mandatory metrics would provide. In our experience, adherence to these standards makes for better companies overall — both socially and commercially. There are currently more than 2,500 B Corps in over 50 countries, most of them very small.

To make meaningful progress toward sustainability, we must restore to large corporations the trust that smaller companies still enjoy. And to do that, we need better accountability, from mandatory metrics and engaged stakeholders, shareholders included.

Just because corporations are stepping up doesn’t mean the rest of us should be stepping down. It is up to us to hold them accountable through the laws we choose to pass, the jobs we take, the products we choose to buy, and the demands we make on them as investors. To the threadbare question of “Can companies do well by doing good?” we have our answer: It’s up to us, as voters, consumers, employees, and savers, to decide.

Are these climate commitments harbingers of a new era of capitalism? Or just the latest collection of hollow promises? Only time will tell. But with an uncertain political future, and with heat waves, wildfires, and hurricanes exacerbated by climate change battering the country from coast to coast, time is running out for corporations to do what must be done.

Michael O’Leary and Warren Valdmanis are the co-authors of Accountable: The Rise of Citizen Capitalism. They were on the founding team of Bain Capital’s impact investing fund. Valdmanis is now a partner with Two Sigma Impact. The opinions expressed are their own and do not reflect the views or opinions of their employers.

New Zealand Prime Minister Jacinda Ardern wins historic reelection

Ardern’s Covid-19 response was hailed around the world. Now her party has won a landslide victory.

By Anna North Oct 17, 2020
New Zealand Prime Minister Jacinda Ardern delivers her victory speech in Auckland, New Zealand, after being reelected in a historic landslide win on October 17. 
Lynn Grieveson/Newsroom/Getty Images


New Zealand Prime Minister Jacinda Ardern has been hailed around the world for her government’s quick action on Covid-19, which has helped New Zealand avoid the mass infections and deaths that have devastated the US and Europe. Now, voters in the country have responded to her leadership by handing Ardern and her Labour Party their biggest election victory in 50 years.

Ardern, 40, gained international attention when she became prime minister in 2017, then one of the world’s youngest female leaders. At the beginning of this year, her center-left party looked set for a tight election due to a lack of progress on issues it had promised to prioritize, like housing and reducing child poverty, CNN reported.

Then came Covid-19. Ardern responded swiftly, with an early lockdown that essentially eliminated spread of the virus. She also spoke directly to New Zealanders with a warmth and empathy that’s been lacking in other world leaders, helping to soothe New Zealanders’ anxieties and getting them on board with coronavirus restrictions. To date, New Zealand has reported fewer than 2,000 cases and 25 deaths due to Covid-19.

In Saturday’s election, Ardern’s party is on track to win 64 of the 120 seats in the country’s parliament, according to Reuters. That would give the Labour Party decisive control of the government, allowing it to govern without having to form a coalition, and granting Ardern and her allies more power than ever to chart New Zealand’s course through the pandemic and beyond.

“We will build back better from the Covid crisis,” Ardern said in her acceptance speech on Saturday, evoking a slogan also used by former US Vice President Joe Biden’s presidential campaign. “This is our opportunity.”

Ardern has always been popular abroad. Now she has a mandate at home.

Ardern has maintained a high profile around the world since she was elected, as Damien Cave reports at the New York Times. It wasn’t just her youth that drew attention. In 2018, she also became the first world leader in nearly 30 years to give birth while in office. Her six-week parental leave was hailed as groundbreaking, showing the importance of paid leave for parents at a time when many, especially in the US, struggle to access this benefit. (In New Zealand, new parents can access up to 26 weeks of paid leave funded by the government.)

But Ardern hasn’t always been as successful at home as she was popular abroad. Leading a coalition with the nationalist New Zealand First Party, she has struggled to deliver on progressive promises like making housing more affordable and tackling climate change, Cave reports.

Covid-19 then changed everything. Ardern was praised not just around the world but in New Zealand, where her quick action meant that many children could go back to school, and adults could return to work, while countries like the US saw a surge in infections.

Meanwhile, her personal addresses amid the pandemic to New Zealanders were lauded for their directness and warmth. In April, for example, she reassured the country’s children that both the tooth fairy and the Easter bunny were considered essential workers.

Ardern’s response was in many ways the embodiment of one of her leadership mantras: “Be strong, be kind.” Ardern’s effectiveness, alongside strong responses by Germany’s Angela Merkel, Taiwan’s Tsai Ing-wen, and others, even led some to wonder if female leaders were better at handling the pandemic than male leaders.

And now, her constituents have voted to keep her at the helm as New Zealand continues to weather Covid-19. With a majority in the country’s parliament, Labour will be able to form a single-party government that may give Ardern greater ability to deliver on her priorities than she’s had in the past.

Despite this mandate, Ardern’s second term will bring new challenges, including repairing an economy weakened by successive lockdowns and ensuring her majority delivers on its campaign promises. “She has significant political capital,” Jennifer Curtin, director of the Public Policy Institute at the University of Auckland, told the Times. “She’s going to have to fulfill her promises with more substance.”

But Ardern says she’s ready to get to work. The campaign slogan that carried her to victory was simple: “Let’s keep moving.”

Incumbent PM Ardern Wins New Zealand Election by Landslide
October 17, 2020

New Zealand Prime Minister Jacinda Ardern has won a landslide victory in the country’s general election. With most ballots tallied, Ardern’s Labour Party has won 49% of the vote and she is projected to win a rare outright parliamentary majority. The opposition centre-right National Party, currently on 27%, has admitted defeat in Saturday’s poll.

The vote was originally due to be in September, but was postponed by a month after a renewed Covid-19 outbreak.

More than a million people had already voted in early voting, which opened on October 3. New Zealanders were also asked to vote in two referendums alongside the general election.

According to the Electoral Commission, the Labour Party are on 49% of the vote, followed by the National Party on 27%, and the ACT New Zealand and Green parties on 8%.


“New Zealand has shown the Labour Party its greatest support in almost 50 years,” Ardern told her supporters after the victory. “We will not take your support for granted. And I can promise you we will be a party that governs for every New Zealander.”

National Party leader Judith Collins congratulated Ardern and promised her party would be a “robust opposition”.

“Three years will be gone in the blink of an eye,” she said, referring to the next scheduled election. “We will be back.”

Ardern’s Labour Party is projected to win 64 seats – enough for an outright majority. No party has managed to do so in New Zealand since it introduced a voting system known as Mixed Member Proportional representation (MMP) in 1996.

Before the vote, experts doubted whether the Labour Party could win such a majority. Professor Jennifer Curtin of the University of Auckland said previous party leaders had been tipped to win a majority, but failed to do so.

“New Zealand voters are quite tactical in that they split their vote, and close to 30% give their party vote to a smaller party, which means it is still a long shot that Labour will win over 50% of the vote.”

Ardern pledged to implement more climate-friendly policies, boost funding for disadvantaged schools, and raise income taxes on top earners.

Collins and the National Party had pledged to increase investment in infrastructure, pay down debt and temporarily reduce taxes.