Monday, January 06, 2020

The neoconservative fantasy at the center of the Soleimani killing

No, Mike Pompeo, killing the military leader won’t give Iranians more freedom.
President Donald Trump speaks as Secretary of State Mike Pompeo and National Security Adviser Robert O’Brien listen during a meeting in the Oval Office of the White House on December 17, 2019, in Washington, DC. Alex Wong/Getty Images


Days before the US invaded Iraq in 2003, then-Vice President Dick Cheney told NBC News why the Bush administration believed the military mission would be successful. “I think things have gotten so bad inside Iraq, from the standpoint of the Iraqi people, my belief is we will, in fact, be greeted as liberators,” he said.

That didn’t happen, and the US instead got bogged down in a brutal years-long war, leading to thousands dead and injured and trillions of dollars spent. What’s more, US forces were quickly seen to be little more than imperialist occupiers across the Middle East.

But the ideology that American force can give people of the Middle East space for a democratic uprising persists. That became clear when Secretary of State Mike Pompeo offered an optimistic assessment of how citizens of Iraq and Iran will react to the Thursday night killing of Qassem Soleimani, who led Iranian covert operations and intelligence across the Middle East and was one of Tehran’s most senior military leaders.

“We have every expectation that people not only in Iraq, but in Iran, will view the American action last night as giving them freedom, freedom to have the opportunity for success and prosperity for their nations,” the top US diplomat told CNN on Friday morning. “While the political leadership may not want that, the people in these nations will demand it.”


Like all wishful thinking, Pompeo’s statement has a sprinkling of truth. Videos on Twitter showed Iraqis celebrating Soleimani’s demise. And some Iran experts, like the Council on Foreign Relations’ Ray Takeyh, told me that the repressive regime’s power now is somewhat lessened with Soleimani gone. “In a roundabout way, Pompeo’s statement does seem sound to me,” he said.

But let’s be clear about what Pompeo is really saying. His claim is that dropping bombs on Soleimani and other military leaders will prompt citizens of Iraq and Iran to rebel against their governments, thank the US, and push for something akin to American democracy. That, most experts say, is folly.


“I doubt any significant number of people in Iran and Iraq will see this as a gift of freedom,” says Trita Parsi, an Iran expert at the Quincy Institute for Responsible Statecraft.

Yet the long-held myth of America liberating Iran with military force appears to have taken hold in the Trump administration — and it could potentially cause more problems with Tehran down the line.
Why “neoconservatism” could make Trump’s Iran policy worse

Back in 2016, Max Fisher wrote for Vox about “neoconservatism,” and how that ideology was the real reason the Bush administration went to war in Iraq.


Neoconservatism, which had been around for decades, mixed humanitarian impulses with an almost messianic faith in the transformational virtue of American military force, as well as a deep fear of an outside world seen as threatening and morally compromised.

This ideology stated that authoritarian states were inherently destabilizing and dangerous; that it was both a moral good and a strategic necessity for America to replace those dictatorships with democracy — and to dominate the world as the unquestioned moral and military leader.

That same ideology — now focused on Iran — is championed by Pompeo and former National Security Adviser John Bolton. Killing Soleimani, they effectively argue, will help draw a straight line to eventual regime change in Iran.



Congratulations to all involved in eliminating Qassem Soleimani. Long in the making, this was a decisive blow against Iran's malign Quds Force activities worldwide. Hope this is the first step to regime change in Tehran.— John Bolton (@AmbJohnBolton) January 3, 2020

But Eric Brewer, a long-time intelligence official who recently left Trump’s National Security Council after working on Iran, doesn’t find that narrative compelling. “Soleimani’s death is not going to end Iranian influence in Iraq,” he told me, “nor is it likely to lead to some sort of regime change uprising in Iran.”

There are a few reasons for that.

First, Iranian influence is already well entrenched inside Iraq’s military and political structures; removing Soleimani from the equation doesn’t change that. Second, Iraqis and Iranians have shown they are willing to push for better governance without US military intervention spurring them to action. In fact, Iraqi protests recently led some of the leadership there to resign, partly fueled by the perception that Iran was really running Iraqi affairs of state. And today there are already large-scale anti-US demonstrations sweeping Iran after the Soleimani killing.

Third, US-Iran history over the last few decades makes everyday Iranians skeptical of American intentions in the country, especially given Washington’s involvement in the 1953 coup against Iranian Prime Minister Mohammad Mossadegh. (When an anti-government movement sought to remove President Mahmoud Ahmadinejad from power during Barack Obama’s presidency, Obama chose not to get involved so that it wouldn’t look like the US was meddling.)

Finally, there’s the hypocrisy problem: The US has no qualms about supporting other authoritarian regimes around the world, including Iran’s chief rival Saudi Arabia.

In other words, Iraqis and Iranians wanting a more democratic future don’t necessarily need the US to get it, and may not even trust Washington to begin with. “The idea that there will be a critical mass in Iran believing that the US has Iran’s best interests in mind is nonsensical,” the Quincy Institute’s Parsi told me.

But the belief in this idea is a persistently bad and uniquely American one. It smacks of thinking frozen in the Cold War, that all it will take is the toppling of a dictator to allow democracy to flourish. Yet time after time, from Libya to Egypt to elsewhere, that just hasn’t proven true.

More likely than not, Soleimani’s death will lead to an escalation of violence between the US and Iran. Tehran will retaliate — maybe not immediately, but eventually — putting Americans at risk. That will, in turn, likely lead to an escalation that puts thousands more in danger. Instead of “freedom,” then, everyone gets war.

The question now is whether Trump administration leaders will continue to base Iran policy on the misguided notion that American military might will bring about democracy in Tehran or the region. That hasn’t worked before, and it’s unlikely to now.

The strong economy is an opportunity for progressives

Time to tackle long-simmering problems: child care, poverty, and the environment.
Construction workers build a portion of a high-speed railway line in Fresno, California, on May 8, 2019, amid ongoing construction of the railway in California’s Central and San Joaquin Valleys. Frederic J. Brown/AFP via Getty Images


Many on the left hoped that the silver lining of the prolonged slump since the Great Recession of 2008 would be to discredit capitalism and build momentum for drastic change. Only the youngest voters have stayed wedded to this idea, with much of the broader electorate holding a fairly positive view of the status quo: 76 percent of voters rate economic conditions as either “very good” or “somewhat good,” according to a CNN poll in late December.

For liberals, this sets up a worrisome political dynamic ahead of 2020. Typically, positive attitudes about the economy are good news for incumbent presidents.

But one nice thing about a strong labor market is that it creates political space to finally pay attention to the myriad social problems that can’t be solved by a “good economy” alone — things like child care, health care, college costs, and environmental protection — that, during the Obama years, tended to be crowded out by a jobs-first mentality.

Good times, in other words, could be the perfect opportunity to finally tackle the many long-lingering problems for which progressives actually have solutions and about which conservatives would rather not talk.
Voters are happy with the economy

For years, there was a mostly true narrative that despite positive GDP growth, actual good economic news was largely limited to stock prices and corporate profits. More recently, however, the corner has turned.

The Bloomberg Consumer Comfort Index shows a high degree of optimism about the future of the economy. A Gallup poll found that 65 percent of adults think it’s a good time to find a quality job, and 55 percent rate economic conditions as either good or excellent. Fifty-six percent of Americans rate their personal financial situation as good or excellent, 66 percent say they have enough wealth and income to live comfortably, and 57 percent say their personal financial situation is improving.

Corporate profits, meanwhile, remain high but have actually been falling as a share of the economy since 2012.

At the same time, a low unemployment rate plus higher minimum wages in many states mean that pay is rising — especially for workers at the bottom end.

Meanwhile, according to voters, “the economy” no longer rates among the top four problems facing the nation.

That doesn’t change the fact that macroeconomic management remains, substantively speaking, one of the government’s most important tasks. But the mission for the next administration won’t be to heal a broken labor market; it will be to take advantage of a sound one to create huge benefits.
A strong labor market heals many ills

One nice thing about low unemployment is that it tends to lead to wage increases.


Employers, of course, avoid raising wages when they can get away with it. But in the context of a strong labor market, that stinginess brings its own benefits, since the only way to get away with avoiding big wage increases is to take a risk on workers who might otherwise be locked out. Companies have suddenly found themselves more open to hiring ex-convicts, for example, which is not only good for a very vulnerable population but also makes it much less likely that ex-offenders will end up committing new crimes. Similarly, people in recovery from drug and alcohol addiction aren’t normally an employer’s first choice of job applicants. But beggars can’t be choosers, and a strong labor market is a great chance for people in need of a second chance to get one.

A related issue is racial discrimination. For as long as we have records, the black unemployment rate has always been higher than the white unemployment rate. But the racial unemployment gap, which surged during the Great Recession, has been steadily narrowing ever since. Discrimination becomes more costly during periods of full employment, and continued strength in the labor market will continue to whittle away at this and other similar gaps.


Last, but by no means least, a strong labor market is the optimal time for labor militancy.

The threat of a strike is much more potent at a time when customers are plentiful but potential replacement workers are scarce. And periods in which it’s relatively easy for an experienced worker to get a new job with a new company are typically periods in which it’s hard for employers to intimidate workers out of organizing. Indeed, as the Polish economist Michał Kalecki predicted way back in 1943, this is one reason why business interests somewhat counterintuitively fail to advocate for robust full employment policies. An actual recession is bad for almost everyone — but a healthy chunk of the population out of work makes for a decent disciplinary tool, and it keeps the political agenda occupied with things like the need to fix the mythical “skills gap” rather than with worker demands for a bigger piece of the pie.

Meanwhile, a reduced public obsession with the need to address short-term economic problems opens up more space to address the many longstanding problems that can’t be cured by a strong economy.
A good labor market doesn’t fix everything

Even as the labor market has gotten steadily healthier in recent years, the American birth rate has continued to fall from its pre-recession highs.

Women tell pollsters that’s not because the number of kids they’d ideally like to have has fallen. Instead, the No. 1 most-cited reason is the high cost of child care. Child care doesn’t get more affordable just because the unemployment rate is low. If anything, it’s the opposite — child care is extremely labor-intensive, and the prospects for introducing labor-saving technology into the mix look bad. To make child care broadly affordable would require government action; it’s just not going to happen in a free market, which doesn’t magically allocate extra income to people who have young kids.
Six-month-old Zachary Vizcaino plays on a mat while his mother, Emily Vizcaino, works nearby at Play, Work or Dash, a coworking space that offers child care, on January 29, 2017, in Vienna, Virginia. Sarah L. Voisin/The Washington Post via Getty Images

More broadly, America’s sky-high child poverty rate compared with peer countries is entirely attributable to our failure to enact a child allowance policy. A better labor market helps marginally, but it doesn’t address the fundamental issue that a new baby increases financial needs while also making it harder to work long hours.

By the same token, getting sick is expensive and, at the same time, often leads to income loss. Absent a strong government role, there’s no way to ensure that care and other needed resources are there for those who need them most.


Last, but by no means least, there’s the environment. An unregulated economy generates a lot of pollution, and nothing about strong economic growth changes that. If anything, the long-term costs of that pollution end up outweighing the short-term benefit of letting businesses operate unimpeded. Moving the ball forward on everything from climate change to lead cleanup to air pollution requires persuading voters to make the opposite calculation: that the economy is doing well enough to prioritize long-term concerns.

These are all policy areas in which progressives want to act regardless of the current state of the economy. But the mass public is more likely to give these ideas a hearing when there’s no real worry of a short-term economic emergency. And conservatives really have nothing to say about any of them.
Trump has no record outside the economy

The administration of President Donald Trump is steadily pursuing a policy agenda aimed at stripping as many people as possible of their health insurance, but the president never talks about it.

By the same token, his reelection campaign claims “we have the cleanest air on record” when, in fact, air quality has been declining under Trump, and his administration is working on a bunch of regulatory rollbacks that will make air pollution even worse. Meanwhile, Trump’s only child care proposal has been the idea of creating a one-off grant program designed to give states extra money if they agreed to lower quality standards for child care settings.
Donald Trump makes a video call to US troops stationed worldwide from his Mar-a-Lago resort in Palm Beach, Florida, on December 24, 2019. Nicholas Kamm/AFP via Getty Images

Progressives have ideas about how to boost economic growth, but conservatives have their own clearly articulated vision, one centered on tax cuts and business-friendly regulation. By contrast, when it comes to other social concerns that transcend the short-term state of the economy, progressives have a set of proposals and, well, conservatives have basically nothing. The strong economy is, itself, an asset for Trump during his reelection bid. But the recovery he’s presiding over plainly began under former President Barack Obama, and all Trump has really done is avoid rocking the boat too much. Meanwhile, growth itself is raising the salience of a whole range of other topics on which conservatives have essentially nothing to say.

Democrats’ best path forward isn’t in denying that economic progress has been made, but in emphasizing the extent to which it’s absurd that a rich and stable country like ours is also home to sky-high child poverty, middle-class families who can’t afford day care for their kids, and worsening air quality. Low unemployment is great, but it should be the start of good social policy — not the end.

CHANGING THINGS UP
Finland’s new prime minister wants her country on a four-day workweek

January 6, 2020

Finland has been at the forefront of flexible work schedules for years, starting with a 1996 law that gives most employees the right to adjust their hours up to three hours earlier or later than what their employer typically requires.

The country’s newly installed political leader, Sanna Marin, just upped the ante, though, proposing to put the entire country on a four-day workweek consisting of six-hour workdays.

Marin, the world’s youngest sitting prime minister and the leader of a five-party center-left coalition, said the policy would allow people to spend more time with their families and that this could be “the next step” in working life.

Marin is not the first politician to recently float the idea of scaling back work hours. Neighboring Sweden tested out six-hour work days a couple of years ago. And the UK’s Labour Party said in September that if elected, it would bring a 32-hour working week to the UK within 10 years. (It wasn’t elected, however, and details on how the hours would be structured were in any case vague.) In France, the standard work week is 35 hours, reduced from 39 hours in 2000.

A slew of companies around the world have been running their own experiments lately. Perpetual Guardian, a small New Zealand firm that helps clients manage financial estates, trialed a four-day work week before formally adopting the policy in November 2018. Its CEO, Andrew Barnes, is now an evangelist for the idea. In Ireland, a recruiting firm called ICE Group shifted to a four-day workweek and found that people’s habits changed, with staffers taking fewer breaks and checking social media less often.

Both firms are small—Perpetual Guardian trialed the schedule with 240 employees; ICE Group has a staff of about 50 people in Ireland. But larger companies have been experimenting, too. Microsoft Japan, for example, implemented a four-day workweek this past summer. The company said employees reported being 40% more productive, and that the policy was particularly popular among younger workers.

While shorter work weeks can bring clear benefits to employees’ well-being, they also can be difficult to implement. The Wellcome Trust, a science research foundation in London, dropped plans for a four-day workweek last year, saying it would be “too operationally complex to implement” for its staff of 800.

But for those that have latched onto the idea, there is the prospect of baking even more flexibility into the system. At Perpetual Guardian, for example, a four-day workweek isn’t the only model; after measuring the productivity of its staff during a typical, five-day workweek, the firm set a standard benchmark and then allowed its employees to work out how to get there in 80% of the time, which could mean fewer workdays per week, or shortened hours spread across five days.

Finland’s new prime minister backs four-day working week
Jon Stone
The Independent January 6, 2020



Finland's Prime Minister Sanna Marin took office in December at the head of a broad left-of-centre coalition. AFP


Finland’s new prime minister is a supporter of cutting the working week to four days, and has argued that the change would let people spend more time with their families.

Sanna Marin, a social democrat who took office in December, leads a broad coalition that also includes greens, leftists and centrists.

“I believe people deserve to spend more time with their families, loved ones, hobbies and other aspects of life, such as culture,” she had previously said at her party's conference in the autumn of 2019.

“This could be the next step for us in working life.”

While the idea is not government policy under her coalition administration, her recent support for the radical move raises the prospect that Finland could eventually become the latest country to experiment with cutting working hours.

Ms Marin, who is the world’s youngest serving national leader, also suggested that as an alternative the standard working day could be reduced to six hours, down from the current eight.

The working week in Europe was progressively shortened around the turn of the 20th century, largely under pressure from the labour movement – with the gradual introduction of the modern two-day weekend and the eight-hour day.

But change has been slower in recent decades, with the five-day week and eight-hour day becoming the standard benchmark across the developed world.

An attempt by former French prime minister Lionel Jospin to bring in a 35-hour workweek at the beginning of the 21st century produced only limited success, with many loopholes and low uptake.

Ahead of last year’s general election, the UK’s Labour Party said it wanted to work towards a four-day week as a long-term aim within a decade, though the party remains in opposition.

Critics say reducing the working week while paying people the same amount would impose a cost on business, but proponents say the difference would be made up because of increased productivity.

Some local councils in Finland’s neighbour Sweden have been experimenting with six-hour days in recent years, with early results suggesting the move increased productivity.

The political backdrop to the Finnish prime minister’s call is months of industrial unrest, which brought down the previous government. The strikes were brought to an end by a pay deal between unions and employers, which delivered pay rises and improved working conditions.

Finland has one of the highest levels of trade union coverage in Europe, with 91 per cent of employees covered by collective agreements guaranteeing working time, pay and conditions.

This figure compares with an EU average of 60 per cent. The corresponding coverage for the UK is 29 per cent of workers, one of the lowest in the bloc – while the highest are found in France, Belgium and Austria, where collective bargaining coverage is near-universal.

This article has been updated to clarify that Sanna Marin's comments were made in 2019 before she became prime minister.


Finnish prime minister wants 4-day workweek, 6-hour workday


Brittany De Lea
Fox Business, January 6, 2020

Finland’s new prime minister, Sanna Marin, wants to encourage Finnish workers to have a better work-life balance.

The 34-year-old, who has been serving as prime minister since December, has detailed plans to introduce an abridged workweek in the country as a means to allow people to spend more time at home.

Not only is Marin aiming for a four-day workweek, she is also weighing a six-hour working day, according to New Europe.

“I believe people deserve to spend more time with their families, loved ones, hobbies and other aspects of life, such as culture. This could be the next step for us in working life,” Marin said, as reported by multiple news outlets.


Other countries and businesses have also considered similar ideas.

As previously reported by FOX Business, a New Zealand company tested and later officially implemented a four-day workweek after deeming it was beneficial for business and staff.

Perpetual Guardian – an estate planning business – ran the trial over the course of two months, during which employees worked four days but were still paid for five. An independent study of the shortened workweek concluded that staff stress levels decreased and engagement increased – as did measures of leadership, commitment, stimulation and empowerment.

The company’s CEO is even encouraging other businesses to take up the model.

Microsoft tested a four-day week in Japan, finding productivity levels increased and business expenses declined.

In Sweden, a 23-month study was conducted among nurses at a care center for seniors, which found that nurses took fewer sick days and absences and had more energy when they left work each day.


Microsoft Japan’s four-day week is new evidence that working less is good for productivity

November 4, 2019


The theory behind introducing a four-day work week—without cutting pay—is that employees will be so delighted to have time gifted back to them that they’ll work harder in the hours remaining. The latest trial to emerge, from the large workforce at Microsoft Japan, suggests it might be applicable at scale, and even in one of the world’s most notoriously “workaholic” cultures.

Microsoft Japan ran a trial in August 2019, when every Friday it closed the office and gave roughly 2,300 full-time employees a paid holiday, according to Sora News 24, which first reported the story in English. The result was an enormous jump in productivity. Based on sales per employee, workers were almost 40% more productive in the compressed hours of August 2019 than they were in the same month a year earlier.

Other productivity hacks were also encouraged, including limiting meetings to 30 minutes and suggesting that instead of calling meetings at all, employees could more fully utilize software available for online collaboration (in this case, of course, that software was Microsoft Teams, though other systems are available). On their day off, workers were encouraged to make use of the time by volunteering, learning, and taking rest “to further improve productivity and creativity,” according to a company blog (link in Japanese).

In the coming months, another trial will run with slightly different parameters, the blog adds. This trial won’t cut hours in the same way, but rather suggests that employees focus on resting well and coming together to share ideas about how to work, rest, and learn.

Other companies that have trialed and implemented four-day weeks have found, similarly, that their productivity is boosted. Perpetual Guardian, the New Zealand estate management firm that was one of the first to go public with a research-backed assessment of its trial, and then adopted the policy in November 2018, found that productivity was unharmed by the shortened work week, while staff stress levels were dramatically improved. More recently, recruitment firm ICE Group this year became the first company in Ireland to adopt a four-day week for all its staff.

Microsoft Japan’s trial is significant because it’s the biggest yet in terms of both staff numbers and the apparent effect on productivity. It’s caught the global imagination, perhaps, because Japan’s work culture is seen as particularly punishing. If a big Japanese tech company can change its ways and achieve startlingly better results, perhaps there’s hope for combatting other long-hours work cultures, like that of the US.

With translation assistance from Tatsuya Oiwa.




Iran can't hit back over Soleimani's killing because America has only fictional heroes like SpongeBob SquarePants, a prominent cleric said
Qassem Soleimani and SpongeBob SquarePants. Press Office of Iranian Supreme Leader/Anadolu Agency/Getty Images; YouTube/SpongeBobSquarePantsofficial


An Iranian cleric, Shahab Moradi, said Iran would struggle to hit back against the US by striking a parallel figure to Maj. Gen. Qassem Soleimani because the US has only "fictional" heroes.
"Think about it. Are we supposed to take out Spiderman and SpongeBob?" he said in a live interview on Iran's IRIB Ofogh TV channel.
The US assassinated Soleimani in Iraq on Thursday.
The strike, orchestrated by President Donald Trump, was criticized by US politicians and European leaders who urged de-escalation.

An Iranian cleric mocked the US by saying Iran would not be able to strike back in kind after the assassination of Maj. Gen. Qassem Soleimani because the US has only fictional heroes such as Spider-Man and SpongeBob SquarePants.


The cleric, Shahab Moradi, made the comment in a live TV interview on Iran's IRIB Ofogh channel. A clip of the segment was posted on Saturday evening on Twitter, but it is unclear when the program aired.

In the clip, Moradi says:

"In your opinion, if anyone around the world wants to take their revenge on the assassination of Soleimani and intends to do it proportionately in the way they suggest — that we take one of theirs now that they've got one of ours — who should we consider to take out in the context of America?

"Think about it. Are we supposed to take out Spider-Man and SpongeBob? They don't have any heroes. We have a country in front of us with a large population and a large landmass, but it doesn't have any heroes. All of their heroes are cartoon characters — they're all fictional."
Shahab Moradi called in to Iran's IRIB Ofogh TV channel. Screenshot/Asranetv/Twitter

A US airstrike late Thursday killed Soleimani and an Iraqi militia commander, Abu Mahdi al-Muhandis, at the airport in Baghdad.

Soleimani's death has sparked fresh tensions between Iran and the US as Iran promised to take revenge for the death of the revered general.

Iran's supreme leader, Ayatollah Ali Khamenei, announced three days of mourning in Iran as well as "severe revenge," though it remains unclear how the country would carry that out.






How the Modern Workplace Fails Women

The world of work needs a wholesale redesign — led by data on female bodies and female lives


Caroline Criado Perez
Dec 24, 2019




Illustration: solarseven/Getty Images


Invisible Women by Caroline Criado Perez (Abrams Press) was the winner of the 2019 Financial Times and McKinsey Business Book of the Year Award. In this excerpt from the book, the author describes some of the many ways in which women are harmed by workplaces designed without their needs in mind.

It was in 2008 that the big bisphenol A (BPA) scare got serious. Since the 1950s, this synthetic chemical had been used in the production of clear, durable plastics, and it was to be found in millions of consumer items from baby bottles to food cans to main water pipes. By 2008, 2.7 million tons of BPA was being produced globally every year, and it was so ubiquitous that it had been detected in the urine of 93% of Americans over the age of six. And then a U.S. federal health agency came out and said that this compound that we were all interacting with on a daily basis may cause cancer, chromosomal abnormalities, brain and behavioral abnormalities, and metabolic disorders. Crucially, it could cause all these medical problems at levels below the regulatory standard for exposure. Naturally, all hell broke loose.

Fearing a major consumer boycott, most baby-bottle manufacturers voluntarily removed BPA from their products, and while the official U.S. line on BPA is that it is not toxic, the EU and Canada are on their way to banning its use altogether. But the legislation that we have exclusively concerns consumers: no regulatory standard has ever been set for workplace exposure.

The story of BPA is about gender and class, and a cautionary tale about what happens when we ignore female medical health data. We have known that BPA can mimic the female hormone estrogen since the mid-1930s. And since at least the 1970s we have known that synthetic estrogen can be carcinogenic in women. But BPA carried on being used in hundreds of thousands of tons of consumer plastics, in CDs, DVDs, water and baby bottles, and laboratory and hospital equipment.

“It was ironic to me,” says occupational health researcher Jim Brophy, “that all this talk about the danger for pregnant women and women who had just given birth never extended to the women who were producing these bottles. Those women whose exposures far exceeded anything that you would have in the general environment. There was no talk about the pregnant worker who is on the machine that’s producing this thing.”

This is a mistake, says Brophy. Worker health should be a public health priority if only because “workers are acting as a canary for society as a whole. If we cared enough to look at what’s going on in the health of workers that use these substances every day,” it would have a “tremendous effect on these substances being allowed to enter into the mainstream commerce.”

But we don’t care enough. In Canada, where women’s health researcher Anne Rochon Ford is based, five women’s health research centers that had been operating since the 1990s, including Ford’s own, had their funding cut in 2013. It’s a similar story in the U.K., where “public research budgets have been decimated,” says Rory O’Neill. And so the “far better resourced” chemicals industry and its offshoots have successfully resisted regulation for years, dismissing studies and other evidence of the negative health impacts of their products.

The result is that workplaces remain unsafe. Brophy tells me that the ventilation he found in most auto-plastics factories was limited to “fans in the ceiling. So the fumes literally pass the breathing zone and head to the roof and in the summertime when it’s really hot in there and the fumes become visible, they will open the doors.” It’s the same story in Canadian nail salons, says Rochon Ford. There are no ventilation requirements, there are no training requirements. There is no legislation around wearing gloves and masks. And there is nobody following up on the requirements that do exist — unless someone makes a complaint.



But who will make a complaint? Certainly not the women themselves. Women working in nail salons, in auto-plastics factories, in a vast range of hazardous workplaces, are poor, working class, often immigrants who can’t afford to put their immigration status at risk. And this makes them ripe for exploitation.

Auto-plastics factories tend not to be part of the big car companies like Ford. They are usually arms-length suppliers, “who tend to be nonunionized and tend to be able to get away with more employment-standard violations,” Rochon Ford tells me. Workers know that if they demand better protections the response will be “Fine, you’re out of here. There’s 10 women outside the door who want your job.” “We’ve heard factory workers tell us this in the exact same words,” says Rochon Ford.

If this sounds illegal, well, it may be. Employee rights vary from country to country, but they tend to include a right to paid sick and maternity leave, a right to a set number of hours, and protection from unfair and/or sudden dismissal. But these rights only apply if you are an employee. And, increasingly, many workers are not.

In many nail salons, technicians are technically independent contractors. This makes life much easier for the employers: the inherent risk of running a company based on consumer demand is passed on to workers, who have no guaranteed hours and no job security. Not enough customers today? Don’t come in and don’t get paid. Minor accident? You’re out of here, and forget about redundancy pay.

Nail salons are the tip of an extremely poorly regulated iceberg when it comes to employers exploiting loopholes in employment law. Zero-hour contracts, short-term contracts, employment through an agency, are all part of the Silicon Valley-built “gig economy.” But the gig economy is in fact often no more than a way for employers to get around basic employee rights. Casual contracts create a vicious cycle: the rights are weaker to begin with, which makes workers reluctant to fight for the ones they do still have. And so those get bent too.

Naturally, the impact of what the International Trade Union Confederation (ITUC) has termed the “startling growth” of precarious work has barely been gender-analyzed. The ITUC reports that its feminized impact is “poorly reflected in official statistics and government policies,” because the “standard indicators and data used to measure developments on labor markets” are not gender-sensitive, and, as ever, data is often not sex-disaggregated, “making it sometimes difficult to measure the overall numbers of women.” There are, as a result, “no global figures related to the number of women in precarious work.”

But the regional and sector-specific studies that do exist suggest “an overrepresentation of women” in precarious jobs. A Harvard study on the rise of “alternative work” in America between 2005 and 2015 found that the percentage of women in such work “more than doubled,” meaning that “women are now more likely than men to be employed in an alternative work arrangement.”

This is a problem because while precarious work isn’t ideal for any worker, it can have a particularly severe impact on women. For a start, it is possible that it is exacerbating the gender pay gap: in the U.K. there is a 34% hourly pay penalty for workers on zero-hours contracts, a 39% hourly pay penalty for workers on casual contracts, and a 20% pay penalty for agency workers — which are on the increase as public services continue to be outsourced. But no one seems interested in finding out how this might be affecting women.

There is, to begin with, “limited scope for collective bargaining” in agency jobs. This is a problem for all workers, but can be especially problematic for women because evidence suggests that collective bargaining (as opposed to individual salary negotiation) could be particularly important for women. As a result, an increase in jobs like agency work that don’t allow for collective bargaining might be detrimental to attempts to close the gender pay gap.

But the negative impact of precarious work on women isn’t just about unintended side effects. It’s also about the weaker rights that are intrinsic to the gig economy. In the U.K., a female employee is only entitled to maternity leave if she is actually an employee. If she’s a “worker,” that is, someone on a short-term or zero-hours contract, she isn’t entitled to any leave at all, meaning she would have to quit her job and reapply after she’s given birth.

Another major problem with precarious work that disproportionately impacts female workers is unpredictable, last-minute scheduling. Women still do the vast majority of the world’s unpaid care work and, particularly when it comes to childcare, this makes irregular hours extremely difficult. The scheduling issue is being made worse by gender-insensitive algorithms. A growing number of companies use “just-in-time” scheduling software, which uses sales patterns and other data to predict how many workers will be needed at any one time. The software also responds to real-time sales analyses, telling managers to send workers home when consumer demand is slow.
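To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of “just-in-time” staffing logic described above: it turns a simple sales forecast into an hourly headcount plan, then tells managers to cut shifts when real-time sales run below forecast. Every name and number in it (SALES_PER_WORKER_HOUR, the 0.8 “slow demand” cutoff, and so on) is an invented assumption, not a detail of any actual vendor’s software.

```python
# Hypothetical illustration of "just-in-time" scheduling logic.
# All figures and names are invented for the example.

SALES_PER_WORKER_HOUR = 150.0   # assumed dollars of sales one worker can handle per hour
MIN_STAFF = 1                   # assumed floor so the store is never unstaffed

def planned_staffing(forecast_sales_by_hour):
    """Convert an hourly sales forecast into an hourly headcount plan."""
    return [
        max(MIN_STAFF, round(sales / SALES_PER_WORKER_HOUR))
        for sales in forecast_sales_by_hour
    ]

def realtime_adjustment(planned, actual_sales_so_far, forecast_so_far):
    """If real-time sales run well below forecast, tell managers to cut shifts."""
    if forecast_so_far == 0:
        return planned
    ratio = actual_sales_so_far / forecast_so_far
    if ratio < 0.8:  # assumed threshold for "slow" demand
        return [max(MIN_STAFF, round(staff * ratio)) for staff in planned]
    return planned

# Example: a forecast for four afternoon hours, then a slower day in practice.
forecast = [300, 600, 900, 450]
plan = planned_staffing(forecast)  # -> [2, 4, 6, 3]
adjusted = realtime_adjustment(plan, actual_sales_so_far=350, forecast_so_far=600)
print(plan, adjusted)
```

Even in this toy version, the point the excerpt makes is visible: the schedule is driven entirely by sales data, and nothing in the model represents workers’ caregiving constraints.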

It’s a system that works great for the companies that use the software to boost profits by shifting the risks of doing business onto their workers, and the increasing number of managers who are compensated on the efficiency of their staffing. It feels less great, however, for the workers themselves, particularly those with caring responsibilities. The introduction of Big Data into a world full of gender data gaps can magnify and accelerate already-existing discriminations: whether its designers didn’t know or didn’t care about the data on women’s unpaid caring responsibilities, the software has clearly been designed without reference to them.

The work that (mainly) women do (mainly) unpaid, alongside their paid employment is not an optional extra. This is work that society needs to get done. And getting it done is entirely incompatible with just-in-time scheduling designed entirely without reference to it. Which leaves us with two options: either states provide free, publicly funded alternatives to women’s unpaid work, or they put an end to just-in-time scheduling.


A woman doesn’t need to be in precarious employment to have her rights violated. Women on irregular or precarious employment contracts have been found to be more at risk of sexual harassment (perhaps because they are less likely to take action against a colleague or employer who is harassing them), but as the #MeToo movement washes over social media, it is becoming increasingly hard to escape the reality that it is a rare industry in which sexual harassment isn’t a problem.

As ever, there is a data gap, with official statistics extremely hard to come by. The UN estimates (estimates are all we have) that up to 50% of women in EU countries have been sexually harassed at work. The figure in China is thought to be as high as 80%. In Australia a study found that 60% of female nurses had been sexually harassed.

The extent of the problem varies from industry to industry. Workplaces that are either male-dominated or have a male-dominated leadership are often the worst for sexual harassment. A 2016 study by the TUC found that 69% of women in manufacturing and 67% of women in hospitality and leisure “reported experiencing some form of sexual harassment” compared to an average of 52%. A 2011 U.S. study similarly found that the construction industry had the highest rates of sexual harassment, followed by transportation and utilities. One survey of senior level women working in Silicon Valley found that 90% of women had witnessed sexist behavior; 87% had been on the receiving end of demeaning comments by male colleagues; and 60% had received unwanted sexual advances. Of that 60%, more than half had been propositioned more than once, and 65% had been propositioned by a superior. One in three women surveyed had felt afraid for her personal safety.



Some of the worst experiences of harassment come from women whose work brings them into close contact with the general public. In these instances, harassment all too often seems to spill over into violence.

If incidents of physical violence aren’t a regular concern at your place of work, be grateful that you’re not a health worker. Research has found that nurses are subjected to “more acts of violence than police officers or prison guards.” A recent U.S. study similarly found that “health care workers required time off work due to violence four times more often than other types of injury.”

Following the research he conducted with fellow occupational health researcher Margaret Brophy, Jim Brophy concluded that the Canadian health sector was “one of the most toxic work environments that we had ever seen.” One worker recalled the time a patient “got a chair above his head,” noting that “the nursing station has been smashed two or three times.” Other patients used bed pans, dishes, even loose building materials as weapons against nurses.

But despite its prevalence, workplace violence in health care is “an underreported, ubiquitous, and persistent problem that has been tolerated and largely ignored.” This is partly because the studies simply haven’t been done. According to the Brophys’ research, prior to 2000, violence against health care workers was barely on the agenda: when in February 2017 they searched Medline for “workplace violence against nurses” they found “155 international articles, 149 of which were published from 2000 to the time of the search.”

But the global data gap when it comes to the sexual harassment and violence women face in the workplace is not just down to a failure to research the issue. It’s also down to the vast majority of women not reporting. And this in turn is partly down to organizations not putting in place adequate procedures for dealing with the issue. Women don’t report because they fear reprisals and because they fear nothing will be done — both of which are reasonable expectations in many industries. “We scream,” one nurse told the Brophys. “The best we can do is scream.”

The inadequacy of procedures to deal with the kind of harassment that female workers face is itself likely also a result of a data gap. Leadership in all sectors is male-dominated and the reality is that men do not face this kind of aggression in the way women do. And so, many organizations don’t think to put in procedures to deal adequately with sexual harassment and violence. It’s another example of how much a diversity of experience at the top matters for everyone — and how much it matters if we are serious about closing the data gap.

Women have always worked. They have worked unpaid, underpaid, underappreciated, and invisibly, but they have always worked. But the modern workplace does not work for women. From its location, to its hours, to its regulatory standards, it has been designed around the lives of men and it is no longer fit for purpose. The world of work needs a wholesale redesign — of its regulations, of its equipment, of its culture — and this redesign must be led by data on female bodies and female lives. We have to start recognizing that the work women do is not an added extra, a bonus that we could do without: women’s work, paid and unpaid, is the backbone of our society and our economy. As we enter a new decade, it’s about time we started valuing it.

Text adapted from Caroline Criado Perez’s Invisible Women, published by Abrams Press.

How the Finance Industry Fueled Four Decades of Inequality in America

Starting in the ’80s, the rise of finance set forces in motion that have reshaped the economy
Photo: The Washington Post/Getty Images

Coauthored with Megan Tobias Neely, Postdoctoral Fellow in Sociology at The Clayman Institute for Gender Research at Stanford University.

These days, finance is so fundamental to our everyday lives that it is difficult to imagine a world without it. But until the 1970s, the financial sector accounted for a mere 15 percent of all corporate profits in the US economy. Back then, most of what the financial sector did was simple credit intermediation and risk management: banks took deposits from households and corporations and loaned those funds to homebuyers and businesses. They issued and collected checks to facilitate payment. For important or paying customers, they provided space in their vaults to safeguard valuable items. Insurance companies received premiums from their customers and paid out when a costly incident occurred.

By 2002, the financial sector had tripled, coming to account for 43% of all the corporate profits generated in the U.S. economy. These profits grew alongside increasingly complex intermediations such as securitization, derivatives trading, and fund management, most of which take place not between individuals or companies, but between financial institutions. What the financial sector does has become opaque to the public, even as its functions have become crucial to every level of the economy.

And as American finance expanded, inequality soared. Capital’s share of national income rose alongside compensation for corporate executives and those working on Wall Street. Meanwhile, among full-time workers, the Gini index (a measure of earnings inequality) increased 26%, and mass layoffs became a common business practice instead of a last resort. All these developments amplified wealth inequality, with the top 0.1% of U.S. households coming to own more than 20% of the entire nation’s wealth — a distribution that rivals the dominance of the robber barons of the Gilded Age. When the financial crisis of 2008 temporarily narrowed the wealth divide, monetary policies adopted to address it quickly resuscitated banks’ and affluent households’ assets but left employment tenuous and wages stagnant.
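For readers unfamiliar with the measure, the Gini index cited above summarizes how unequally earnings are spread across workers, from 0 (everyone earns the same) to 1 (one person earns everything). The sketch below shows one standard way to compute it, via the mean absolute difference between all pairs of earnings; the sample numbers are invented for illustration and are not the data behind the 26% figure.

```python
def gini(earnings):
    """Gini coefficient via the mean-absolute-difference formula:
    G = (sum over all ordered pairs of |x_i - x_j|) / (2 * n^2 * mean)."""
    x = list(earnings)
    n = len(x)
    mean = sum(x) / n
    total_abs_diff = sum(abs(a - b) for a in x for b in x)
    return total_abs_diff / (2 * n * n * mean)

# Invented example: earnings become more concentrated at the top,
# so the second coefficient comes out higher than the first.
earlier = [30_000, 40_000, 50_000, 60_000, 70_000]
later   = [28_000, 38_000, 50_000, 65_000, 120_000]
print(round(gini(earlier), 3), round(gini(later), 3))
```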

The past four decades of American history have therefore been marked by two interconnected, transformative developments: the financialization of the U.S. economy and the surge in inequality across U.S. society.

The rise of finance has expanded the level of inequality since 1980 through three interrelated processes. First, it generated new intermediations that extract national resources from the productive sector and households and channel them to the financial sector without providing commensurate economic benefits. Second, it undermined the postwar accord between capital and labor by reorienting corporations toward financial markets and weakening their direct dependence on labor. And third, it created a new risk regime that transfers economic uncertainties from firms to individuals, which in turn increases household demand for financial services.


Economic rents

Where do all the financial sector’s profits come from? Most of the revenue for banks used to be generated by interest. By paying depositors lower interest rates than they charged borrowers, banks made profits in the “spread” between the rates. This business model began to change in the 1980s as banks expanded into trading and a host of fee-based services such as securitization, wealth management, mortgage and loan processing, service charges on deposit accounts (e.g., overdraft fees), card services, underwriting, mergers and acquisitions, financial advising, and market-making (e.g., IPOs [initial public offerings]). Altogether, these comprise the services that generate non-interest revenue.
 
Figure 1. Note: The sample includes all FDIC-insured commercial banks. Source: Federal Deposit Insurance Corporation Historical Statistics on Banking Table CB04

Figure 1 presents non-interest revenue as a percentage of commercial banks’ total revenue. Non-interest income constituted less than 10% of all revenue in the early 1980s, but its importance escalated and its share of income rose to more than 35% in the early 2000s. In other words, more than a third of all the bank revenue — particularly large banks’ revenue — is generated today by non-traditional banking activities. For example, JPMorgan Chase earned $52 billion in interest income but almost $94 billion in non-interest income right before the 2008 financial crisis. Half was generated from activities such as investment banking and venture capital, and a quarter from trading. In 2007, Bank of America earned about 47% of its total income from non-interest sources, including deposit fees and credit card services.
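As a rough worked example of the revenue mix described above, the JPMorgan Chase figures quoted in this section imply that non-interest activities supplied well over half of that bank’s revenue on the eve of the crisis. The snippet below simply does that arithmetic; the inputs and breakdown shares come from the figures cited in the text, rounded and expressed in billions of dollars, not from primary filings.

```python
# Worked arithmetic using the JPMorgan Chase figures quoted above (billions of dollars).
interest_income = 52.0
non_interest_income = 94.0

total_revenue = interest_income + non_interest_income
non_interest_share = non_interest_income / total_revenue
print(f"non-interest share of revenue: {non_interest_share:.0%}")  # roughly 64%

# The text says about half of the non-interest income came from activities such as
# investment banking and venture capital, and about a quarter from trading.
from_banking_and_vc = 0.5 * non_interest_income   # ~$47B
from_trading = 0.25 * non_interest_income         # ~$23.5B
print(round(from_banking_and_vc, 1), round(from_trading, 1))
```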

The ascendance of the new banking model led to a significant transfer of national resources into the financial sector, in terms of not only corporate profits but also its elite employees’ compensation. Related industries, such as legal services and accounting, also benefited from the boom. However, whether these non-interest activities actually created value commensurate with their costs has been questioned, particularly when the sector has been dominated by only a handful of banks. In a way, these earnings could be considered economic rents — excessive returns without corresponding benefits.
The capital-labor accord

Besides extracting resources from the economy into the financial sector, financialization undermined the capital-labor accord by orienting non-financial firms toward financial markets. The capital-labor accord refers to an agreement and a set of production relations institutionalized in the late 1930s. The accord assigned managers full control over enterprise decision-making, and, in exchange, workers were promised real compensation growth linked to productivity, improved working conditions, and a high degree of job security. This agreement was reinforced by New Deal labor reforms such as unemployment insurance, the formal right to collective bargaining, maximum work hours, and minimum wages. As a result, for most of the 20th century, labor was considered a crucial driver for American prosperity. Its role, however, has been marginalized as corporations increasingly attend to the demands of the stock market.

To maximize the returns to their shareholders, American firms have adopted wide-ranging cost-cutting strategies, from automation to offshoring and outsourcing. Downsizing and benefit reductions are common ways that companies trim the cost of their domestic workforce. Many of these strategies are advocated by financial institutions, which earn handsome fees from mergers and acquisitions, spinoffs, and other corporate restructuring.

As non-financial firms expanded their operations to become lenders and traders, they came to earn a growing share of their profits from interest and dividends. The intensified foreign competition in the 1970s, combined with deregulated interest rates in the 1980s, drove this diversion, with large U.S. non-finance firms shifting investments from production to financial assets. Instead of targeting the consumers of their manufacturing or retail products to raise profits and reward workers, these firms extended their financial arms into leasing, lending, and mortgage markets to raise profits and reward shareholders.
 
Figure 2. Note: Financial assets include investments in governmental obligations, tax-exempt securities, loans to shareholders, mortgage and real estate loans, and other investments, but do not include cash and cash equivalence. Financial corporations include credit intermediation, securities, commodities, and other financial investments, insurance carriers, other financial vehicles and investment companies, and holding companies. Source: Internal Revenue Service Corporation Complete Report Table 6: Returns of Active Corporations

Figure 2 shows the amount of financial assets owned by U.S. corporations as a percentage of their total assets. Financial assets here consist of treasury, state, and municipal bonds, mortgages, business loans, and other financial securities. In theory, financial holding is countercyclical, meaning that firms hold more financial assets during economic contractions and then invest these savings in productive assets during economic booms. However, there has been a secular upsurge in financial holding since the 1980s, from about 35% of their total assets to more than half. Even when we remove financial corporations from the picture, we see a rise in financial holding from under 15% to more than 30% in the aftermath of the recession. Again, as American corporations shift their focus from productive to financial activities, purchasing financial instruments instead of stores, plants, and machinery, labor no longer represents a crucial component in the generation of profits, and the workers who perform productive tasks are devalued.

In addition to marginalizing labor, the rise of finance pushed economic uncertainties traditionally pooled at the firm level downward to individuals. Prior to the 1980s, large American corporations often operated in multiple product markets, hedging the risk of an unforeseen downturn in any particular market. Lasting employment contracts afforded workers promotion opportunities, health, pension, and other benefits, unaffected by the risks the company absorbed. Since the 1980s, fund managers have instead pressured conglomerates to specialize only in their most profitable activities, pooling risk at the fund level, not at the firm level. Consequently, American firms have become far more vulnerable to sudden economic downturns. To cope with that increased risk, financial professionals advised corporations to reconfigure their employment relationships from permanent arrangements to ones that emphasize flexibility — the firm’s flexibility, not the employees’. Workers began to be viewed as independent agents rather than members or stakeholders of the firm. As more and more firms adopt contingent employment arrangements, workers are promised low minimum hours but are required to be available whenever they are summoned.

The compensation principle shifted, too, from a fair-wage model that sustains long-term employment relationships to a contingent model that ties wages and employment to profits (meaning more workers are involved in productivity pay schemes than they realize; should their portion of the company lag in profits, their job, not just their compensation, is on the line). Retirement benefits also transformed from guarantees of financial security to ones dependent on the performance of financial markets. Of course, this principle mostly benefits high-wage workers who can afford the fluctuations. Many low-wage workers, not knowing how many hours they will work and how much pay they will receive, are forced to borrow to meet their short-term needs.
Atomized risk regime

The dispersion of economic risks and the widening labor market divide are reflected in the growing consumption of financial products at the household level. As defined-contribution plans gradually replaced defined-benefit pensions as the default benefit in the private sector, mutual funds and retirement accounts flourished. This new retirement system allows workers to carry benefits over as they move across different employers (helpful when jobs are ever more precarious), but it ties their economic prospects to the fluctuations of financial markets. Families became responsible for making investment decisions and securing retirement funds for themselves.

Retirement in the United States, thus, is no longer an age but a financial status. Many middle-class families have had to cash out their retirement accounts to cover emergency expenses. Many others fear that they cannot afford to exit the workforce when the time comes. And these are the lucky ones.



About half of American workers have neither defined-benefit nor defined-contribution plans; that rate declines to a third among millennials. Affluent families, who allocate an increasing proportion of their wealth to financial assets, benefit, since they have sufficient resources to buffer downturns and can gain substantially from financialization. Still, the only sure winners are financial advisors and fund managers, who charge a percentage of these savings annually, without having to pay out when there are great losses.

The expansion of credit is supposed to narrow consumption inequality across households and smooth volatility across the life course. Instead, it, too, adds to economic uncertainty. The debate about whether Americans borrow too much obscures the reality that the consequences of debt vary dramatically across the economic spectrum (as well as by race and gender). The abundance of credit provides affluent families the opportunity to invest or meet short-term financial needs at low cost. At the same time, middle-income households carry increasingly heavy debt burdens, curtailing their ability to invest and save, and low-income households are either denied credit or face enormously high borrowing rates that go beyond preventing savings to imprison the impoverished in a cycle of debt payments.

More and more Americans are in the last category: unable to service their obligations (that is, to pay the bills on their debts), an increasing number of families have become insolvent, owning less than they owe. The credit market has been revealed as a regressive system of redistribution benefiting the rich and devastating the poor.

In this atomized risk regime, financial failure is attributed to individuals’ lack of morality or sophistication. Outside academic and leftist political circles, few question the overwhelming demand for toxic financial products such as payday loans, let alone the creation of those products. Instead, everyday workers are urged to educate themselves about the market, enhancing their financial literacy. “Financial inclusion” has become the buzzword of the day. Financial self-help books like the perennial best-sellers Rich Dad, Poor Dad and Secrets of the Millionaire Mind fly off the shelves, while entire governmental agencies and public outreach programs are established to promote the “savvy” use of financial products.

Taken together, these trends show that rising inequality in the United States is not a “natural” result of apolitical technological advancement and globalization. Economic inequality is not a necessary price we need to pay for economic growth. Instead, the widening economic divide reflects a deeper transformation of how the economy is organized and how resources are distributed.
From Divested by Ken-Hou Lin and Megan Tobias Neely. Copyright © 2020 by the authors; reprinted by permission of Oxford University Press. The full set of citations for the facts and figures in this excerpt can be found in the full text.
Walrus shortage may have caused collapse of Norse Greenland
Communities vanished in 15th century after walrus hunted to near extinction, study finds

Agence France-Presse
Mon 6 Jan 2020
 
Norse communities hunted walruses for their tusks, a valuable medieval commodity. Photograph: Joel Garlich-Miller/AP

The mysterious disappearance of Greenland’s medieval Norse society in the 15th century came after walruses were hunted almost to extinction, researchers have said.

Norse communities thrived for more than 400 years in the Arctic, hunting walruses for their tusks, a valuable medieval commodity.

But a mixture of overexploitation and economic pressure from a flood of elephant ivory into European markets in the 13th century contributed to their downfall, according to a study.

A team of researchers from the universities of Cambridge, Oslo and Trondheim examined pre-1400s walrus tusk artefacts from across Europe and found almost all of them came from walruses hunted in seas only accessible to Greenland Norse communities.

They also found that later items came from smaller animals – likely females and infants – signalling stocks were rapidly dwindling.

James Barrett from Cambridge University’s archaeology department said: “Norse Greenlanders needed to trade with Europe for iron and timber, and mainly had walrus products to export in exchange.



“Norse hunters were forced to venture deeper into the Arctic Circle for increasingly meagre ivory harvests.” As walrus populations declined, so did the Norse communities.

The authors of the study, published in the journal Quaternary Science Reviews, said there were likely to have been other factors that contributed to the eventual disappearance of Norse Greenlanders.

These include climate change as the northern hemisphere underwent a “little ice age”, and unsustainable farming techniques.

Bastiaan Star of Oslo University said: “If both the population and price of walrus started to tumble, it must have badly undermined the resilience of the settlements. Our study suggests the writing was on the wall.”


The Coming Climate Crisis

The Little Ice Age could offer a glimpse of our tumultuous future.




Firefighters try to control a blaze as it spreads toward the towns of Douglas City and Lewiston in California on July 31, 2018. (Mark Ralston/AFP/Getty Images)

Over the last couple of decades, as the impact of global warming has intensified, the discussion of climate change has spilled out of the scientific and technocratic circles within which it was long confined. Today, the subject has also become an important concern in the humanities and arts.
Discussions of climate tend to focus on the future. Yet even scientific projections depend crucially on the study of the past: Proxy data, such as tree rings, pollen deposits, and ice cores, have proved indispensable for the modeling of the future impact of climate change. Based on evidence of this kind, scientists can tell us a great deal about how trees, glaciers, and sea levels will respond to rising temperatures.
But what about the political and social impact of global warming? What effects might a major shift in climate have on governments, public institutions, warfare, and belief systems? For answers to these questions, we have to turn to history (keeping in mind that historical inferences are necessarily impressionistic).
Of course, there has never been anything directly comparable to the current cycle of human-induced global warming. But there have been several periods, now intensely studied by historians, during which climate has drastically shifted, either locally or globally.
Perhaps the most intensively researched of these periods is the Little Ice Age, which reached its peak between the late 15th and early 18th centuries. This early modern era is of particular interest because some of the most important geopolitical processes of our own time trace back to it. This was the period, for example, when the first stages of globalization were inaugurated. It was also in this period that great-power conflicts began to be conducted on a global scale. The struggles for supremacy among the Spanish, Dutch, and British that unfolded during the Little Ice Age were thus the precursors of the strategic rivalries of the 20th and 21st centuries.
During part of the Little Ice Age, decreased solar irradiance and increased seismic activity resulted in temperatures that, as Geoffrey Parker writes in Global Crisis, a groundbreaking global history of the period, were “more than 1 [degree Celsius] cooler than those of the later twentieth century.”
The current cycle of human-induced global warming is likely to lead to a much greater climatic shift than that of the Little Ice Age. What is striking, then, is the sheer magnitude of the ecological, social, and political upheavals of the era.
Droughts struck many parts of the world—including Mexico, Chile, the Mediterranean Sea basin, west and central Africa, India, China, and Indonesia—frequently bringing famine in their wake. These disasters were often accompanied by mass uprisings, rebellions, and war. England endured the greatest internal upheaval in its history, Europe was convulsed by the Thirty Years’ War, and China was torn by decades of strife following the overthrow of the Ming dynasty. Ottoman Turkey, Mughal India, and the Russian and Spanish empires were all shaken by rebellions. And from England to China, millenarian sects sprang up, seized by visions of apocalypse.
Parker estimates that in the 17th century “more wars took place around the world than in any other era.” So terrible was the devastation that contemporary observers around the world produced similar records of famine, plague, and death. One French abbess, for example, believed that the global population declined by a third.
But some states still thrived, most notably the Dutch Republic, which became the world’s preeminent naval and financial power. According to Dagomar Degroot, the author of The Frigid Golden Age, the Dutch owed their success in no small part to their flexibility in adapting to the changed environmental conditions of the period. Moreover, the Dutch status as an emergent power gave them an advantage in relation to the Spanish empire, which was weighed down by its size and historical legacy.
What lessons can be drawn from this history for our own time?
The first is that the sensitivity of human societies to climatic factors may exceed all expectations. Climate-related conflicts and displacements are already changing the political complexion of many of the world’s most important countries, most notably in Europe. Ten years ago, few would have predicted the extent to which immigration would become the spark for political upheavals across Europe and the Americas.
Second, the history of the Little Ice Age suggests that, apart from catalyzing all manner of political and economic crises, a major climatic shift would also affect the global order, favoring those who are best able to adapt to changing conditions. Whether these conditions favor emergent powers will depend on the degree to which the status quo powers of our time are impeded by their historical legacy, as the Spanish empire was.
In this way, the legacies of the carbon economy may themselves prove to be major impediments. Fossil fuels are much more than mere sources of energy; they have also engendered a wide array of cultural and social practices. Fossil fuel use has shaped the physical, cultural, and imaginative landscapes of the United States, Canada, and Australia to such a degree that significant sections of their populations remain psychologically and politically resistant to recognizing changing environmental realities.
Similarly, fossil fuels—oil and natural gas in particular—have shaped the United States’ strategic commitments in ways that may also hinder its ability to adapt. One example of this is the long-standing U.S. alliance with Saudi Arabia, which has proved as much a constraint as an asset, especially regarding a transition to renewable energy.
To the same degree that these legacy commitments serve to impede the adaptive abilities of the United States (and the West in general), they also serve as incentives for emergent powers to adapt as quickly as possible. For Beijing, a transition from fossil fuels to renewable energy is desirable not only for ecological and economic reasons but also because it could effectively set China free from an energy regime in which the rules were largely set by Western powers and their allies.
There are, of course, very significant limits to what can be extrapolated from history, not least because the great powers of the past did not possess weapons that could destroy the (human) world many times over. The crucial question for the future is whether the established and emergent powers of our time will be able to manage their rivalries even as their own populations become increasingly subject to the disruptive and destabilizing effects of climate change. If not, then human beings could bring about a catastrophe that would far exceed anything wrought by the warming of the planet.
This article originally appeared in the Winter 2019 issue of Foreign Policy magazine.



Amitav Ghosh is the author of The Great Derangement: Climate Change and the Unthinkable. Twitter: @GhoshAmitav

Amitav Ghosh is best known for his intricate works of historical fiction, often set in or around his native India. But his 2016 book, The Great Derangement, is a searing piece of nonfiction that questions why writers and artists consistently fail to use environmental disasters as centerpieces in their stories. Ghosh blames these omissions for the lack of public will to confront climate change—a point he tirelessly reiterates in speeches around the world.