Monday, January 06, 2020


CHANGING THINGS UP
Finland’s new prime minister wants her country on a four-day workweek

January 6, 2020

Finland has been at the forefront of flexible work schedules for years, starting with a 1996 law that gives most employees the right to adjust their hours up to three hours earlier or later than what their employer typically requires.

The country’s newly installed political leader, Sanna Marin, just upped the ante, though, proposing to put the entire country on a four-day workweek consisting of six-hour workdays.

Marin, the world’s youngest sitting prime minister and the leader of a five-party center-left coalition, said the policy would allow people to spend more time with their families and that this could be “the next step” in working life.

Marin is not the first politician to recently float the idea of scaling back work hours. Neighboring Sweden tested out six-hour work days a couple of years ago. And the UK’s Labour Party said in September that if elected, it would bring a 32-hour working week to the UK within 10 years. (It wasn’t elected, however, and details on how the hours would be structured were in any case vague.) In France, the standard work week is 35 hours, reduced from 39 hours in 2000.

A slew of companies around the world have been running their own experiments lately. Perpetual Guardian, a small New Zealand firm that helps clients manage financial estates, trialed a four-day work week before formally adopting the policy in November 2018. Its CEO, Andrew Barnes, is now an evangelist for the idea. In Ireland, a recruiting firm called ICE Group shifted to a four-day workweek and found that people’s habits changed, with staffers taking fewer breaks and checking social media less often.

Both firms are small—Perpetual Guardian trialed the schedule with 240 employees; ICE Group has a staff of about 50 people in Ireland. But larger companies have been experimenting, too. Microsoft Japan, for example, implemented a four-day workweek this past summer. The company said employees reported being 40% more productive, and that the policy was particularly popular among younger workers.

While shorter work weeks can bring clear benefits to employees’ well-being, they also can be difficult to implement. The Wellcome Trust, a science research foundation in London, dropped plans for a four-day workweek last year, saying it would be “too operationally complex to implement” for its staff of 800.

But for those that have latched onto the idea, there is the prospect of baking even more flexibility into the system. At Perpetual Guardian, for example, a four-day workweek isn’t the only model; after measuring the productivity of its staff during a typical, five-day workweek, the firm set a standard benchmark and then allowed its employees to work out how to get there in 80% of the time, which could mean fewer workdays per week, or shortened hours spread across five days.

Finland’s new prime minister backs four-day working week
Jon Stone
The Independent, January 6, 2020



Finland's Prime Minister Sanna Marin took office in December at the head of a broad left-of-centre coalition: AFP


Finland’s new prime minister is a supporter of cutting the working week to four days, and has argued that the change would let people spend more time with their families.

Sanna Marin, a social democrat who took office in December, leads a broad coalition that also includes greens, leftists and centrists.

“I believe people deserve to spend more time with their families, loved ones, hobbies and other aspects of life, such as culture,” she had previously said at her party's conference in the autumn of 2019.

“This could be the next step for us in working life.”

While the idea is not government policy under her coalition administration, her recent support for the radical move raises the prospect that Finland could eventually become the latest country to experiment with cutting working hours.

Ms Marin, who is the world’s youngest serving national leader, also suggested that as an alternative the standard working day could be reduced to six hours, down from the current eight.

The working week in Europe was progressively shortened around the turn of the 20th century, largely under pressure from the labour movement – with the gradual introduction of the modern two-day weekend and the eight-hour day.

But change has been slower in recent decades, with the five-day week and eight-hour day becoming the standard benchmark across the developed world.

An attempt by former French prime minister Lionel Jospin to bring in a 35-hour workweek at the beginning of the 21st century produced only limited success, with many loopholes and low uptake.

Ahead of last year’s general election, the UK’s Labour Party said it wanted to work towards a four-day week as a long-term aim within a decade, though the party remains in opposition.

Critics say reducing the working week while paying people the same amount would impose a cost on business, but proponents say the difference would be made up because of increased productivity.

Some local councils in Finland’s neighbour Sweden have been experimenting with six-hour days in recent years, with early results suggesting the move increased productivity.

The political backdrop to the Finnish prime minister’s call is months of industrial unrest, which brought down the previous government. The strikes were brought to an end by a deal between unions and employers, which delivered pay rises and improved working conditions.

Finland has one of the highest levels of trade union coverage in Europe, with 91 per cent of employees covered by collective agreements guaranteeing working time, pay and conditions.

This figure compares with an EU average of 60 per cent. The corresponding coverage for the UK is 29 per cent of workers, one of the lowest in the bloc – while the highest are found in France, Belgium and Austria, where collective bargaining coverage is near-universal.

This article has been updated to clarify that Sanna Marin's comments were made in 2019 before she became prime minister.


Finnish prime minister wants 4-day workweek, 6-hour workday


Brittany De Lea
Fox Business, January 6, 2020

Finland’s new prime minister, Sanna Marin, wants to encourage Finnish workers to have a better work-life balance.

The 34-year-old, who has been serving as prime minister since December, has detailed plans to introduce an abridged workweek in the country as a means to allow people to spend more time at home.

Not only is Marin aiming for a four-day workweek, she is also weighing a six-hour working day, according to New Europe.

“I believe people deserve to spend more time with their families, loved ones, hobbies and other aspects of life, such as culture. This could be the next step for us in working life,” Marin said, as reported by multiple news outlets.


Other countries and businesses have also considered similar ideas.

As previously reported by FOX Business, a New Zealand company tested and later officially implemented a four-day workweek after deeming it was beneficial for business and staff.

Perpetual Guardian – an estate planning business – conducted a study over the course of two months whereby employees were still paid for five days of work. An independent study of the shortened workweek concluded that staff stress levels decreased, engagement increased – as did measures of leadership, commitment, stimulation and empowerment.

The company’s CEO is even encouraging other businesses to take up the model.

Microsoft tested a four-day week in Japan, finding productivity levels increased and business expenses declined.

In Sweden, a 23-month study of nurses at a care center for seniors found that the nurses took fewer sick days and absences and had more energy when they left work each day.


Microsoft Japan’s four-day week is new evidence that working less is good for productivity

November 4, 2019


The theory behind introducing a four-day work week—without cutting pay—is that employees will be so delighted to have time gifted back to them that they’ll work harder in the hours remaining. The latest trial to emerge, from the large workforce at Microsoft Japan, suggests it might be applicable at scale, and even in one of the world’s most notoriously “workaholic” cultures.

Microsoft Japan ran a trial in August 2019, when every Friday it closed the office and gave roughly 2,300 full-time employees a paid holiday, according to Sora News 24, which first reported the story in English. The result was an enormous jump in productivity. Based on sales per employee, workers were almost 40% more productive in the compressed hours of August 2019 than they were in the same month a year earlier.

Other productivity hacks were also encouraged, including limiting meetings to 30 minutes and suggesting that instead of calling meetings at all, employees could more fully utilize software available for online collaboration (in this case, of course, that software was Microsoft Teams, though other systems are available). On their day off, workers were encouraged to make use of the time by volunteering, learning, and taking rest “to further improve productivity and creativity,” according to a company blog (link in Japanese).

In the coming months, another trial will run with slightly different parameters, the blog adds. This trial won’t cut hours in the same way, but rather suggests that employees focus on resting well and coming together to share ideas about how to work, rest, and learn.

Other companies that have trialed and implemented four-day weeks have found, similarly, that their productivity is boosted. Perpetual Guardian, the New Zealand estate management firm that was one of the first to go public with a research-backed assessment of its trial, and then adopted the policy in November 2018, found that productivity was unharmed by the shortened work week, while staff stress levels were dramatically improved. More recently, recruitment firm ICE Group this year became the first company in Ireland to adopt a four-day week for all its staff.

Microsoft Japan’s trial is significant because it’s the biggest yet in terms of both staff numbers and the apparent effect on productivity. It’s caught the global imagination, perhaps, because Japan’s work culture is seen as particularly punishing. If a big Japanese tech company can change its ways and achieve startlingly better results, perhaps there’s hope for combatting other long-hours work cultures, like the US.

With translation assistance from Tatsuya Oiwa.




Iran can't hit back over Soleimani's killing because America has only fictional heroes like SpongeBob SquarePants, a prominent cleric said
Qassem Soleimani and SpongeBob SquarePants. Press Office of Iranian Supreme Leader/Anadolu Agency/Getty Images; YouTube/SpongeBobSquarePantsofficial


An Iranian cleric, Shahab Moradi, said Iran would struggle to hit back against the US by striking a parallel figure to Maj. Gen. Qassem Soleimani because the US has only "fictional" heroes.
"Think about it. Are we supposed to take out Spiderman and SpongeBob?" he said in a live interview on Iran's IRIB Ofogh TV channel.
The US assassinated Soleimani in Iraq on Thursday.
The strike, orchestrated by President Donald Trump, was criticized by US politicians and European leaders who urged de-escalation.

An Iranian cleric mocked the US by saying Iran would not be able to strike back in kind after the assassination of Maj. Gen. Qassem Soleimani because the US has only fictional heroes such as Spider-Man and SpongeBob SquarePants.


The cleric, Shahab Moradi, made the comment in a live TV interview on Iran's IRIB Ofogh channel. A clip of the segment was posted on Saturday evening on Twitter, but it is unclear when the program aired.

In the clip, Moradi says:

"In your opinion, if anyone around the world wants to take their revenge on the assassination of Soleimani and intends to do it proportionately in the way they suggest — that we take one of theirs now that they've got one of ours — who should we consider to take out in the context of America?

"Think about it. Are we supposed to take out Spider-Man and SpongeBob? They don't have any heroes. We have a country in front of us with a large population and a large landmass, but it doesn't have any heroes. All of their heroes are cartoon characters — they're all fictional."
Shahab Moradi called in to Iran's IRIB Ofogh TV channel. 
Screenshot/Asranetv/Twitter

A US airstrike late Thursday killed Soleimani and an Iraqi militia commander, Abu Mahdi al-Muhandis, at the airport in Baghdad.

Soleimani's death has sparked fresh tensions between Iran and the US as Iran promised to take revenge for the death of the revered general.

Iran's supreme leader, Ayatollah Ali Khamenei, announced three days of mourning in Iran as well as "severe revenge," though it remains unclear how the country would carry that out.






How the Modern Workplace Fails Women

The world of work needs a wholesale redesign — led by data on female bodies and female lives


Caroline Criado Perez
Dec 24, 2019 · 11 min read




Illustration: solarseven/Getty Images


Invisible Women by Caroline Criado Perez (Abrams Press) was the winner of the 2019 Financial Times and McKinsey Business Book of the Year Award. In this excerpt from the book, the author describes some of the many ways in which women are harmed by workplaces designed without their needs in mind.

It was in 2008 that the big bisphenol A (BPA) scare got serious. Since the 1950s, this synthetic chemical had been used in the production of clear, durable plastics, and it was to be found in millions of consumer items from baby bottles to food cans to main water pipes. By 2008, 2.7 million tons of BPA was being produced globally every year, and it was so ubiquitous that it had been detected in the urine of 93% of Americans over the age of six. And then a U.S. federal health agency came out and said that this compound that we were all interacting with on a daily basis may cause cancer, chromosomal abnormalities, brain and behavioral abnormalities, and metabolic disorders. Crucially, it could cause all these medical problems at levels below the regulatory standard for exposure. Naturally, all hell broke loose.

Fearing a major consumer boycott, most baby-bottle manufacturers voluntarily removed BPA from their products, and while the official U.S. line on BPA is that it is not toxic, the EU and Canada are on their way to banning its use altogether. But the legislation that we have exclusively concerns consumers: no regulatory standard has ever been set for workplace exposure.

The story of BPA is about gender and class, and a cautionary tale about what happens when we ignore female medical health data. We have known that BPA can mimic the female hormone estrogen since the mid-1930s. And since at least the 1970s we have known that synthetic estrogen can be carcinogenic in women. But BPA carried on being used in hundreds of thousands of tons of consumer plastics, in CDs, DVDs, water and baby bottles, and laboratory and hospital equipment.

“It was ironic to me,” says occupational health researcher Jim Brophy, “that all this talk about the danger for pregnant women and women who had just given birth never extended to the women who were producing these bottles. Those women whose exposures far exceeded anything that you would have in the general environment. There was no talk about the pregnant worker who is on the machine that’s producing this thing.”

This is a mistake, says Brophy. Worker health should be a public health priority if only because “workers are acting as a canary for society as a whole. If we cared enough to look at what’s going on in the health of workers that use these substances every day,” it would have a “tremendous effect on these substances being allowed to enter into the mainstream commerce.”

But we don’t care enough. In Canada, where women’s health researcher Anne Rochon Ford is based, five women’s health research centers that had been operating since the 1990s, including Ford’s own, had their funding cut in 2013. It’s a similar story in the U.K., where “public research budgets have been decimated,” says Rory O’Neill. And so the “far better resourced” chemicals industry and its offshoots have successfully resisted regulation for years, dismissing studies and other evidence of the negative health impacts of their products.

The result is that workplaces remain unsafe. Brophy tells me that the ventilation he found in most auto-plastics factories was limited to “fans in the ceiling. So the fumes literally pass the breathing zone and head to the roof and in the summertime when it’s really hot in there and the fumes become visible, they will open the doors.” It’s the same story in Canadian nail salons, says Rochon Ford. There are no ventilation requirements, there are no training requirements. There is no legislation around wearing gloves and masks. And there is nobody following up on the requirements that do exist — unless someone makes a complaint.



But who will make a complaint? Certainly not the women themselves. Women working in nail salons, in auto-plastics factories, in a vast range of hazardous workplaces, are poor, working class, often immigrants who can’t afford to put their immigration status at risk. And this makes them ripe for exploitation.

Auto-plastics factories tend not to be part of the big car companies like Ford. They are usually arms-length suppliers, “who tend to be nonunionized and tend to be able to get away with more employment-standard violations,” Rochon Ford tells me. Workers know that if they demand better protections the response will be “Fine, you’re out of here. There’s 10 women outside the door who want your job.” “We’ve heard factory workers tell us this in the exact same words,” says Rochon Ford.

If this sounds illegal, well, it may be. Employee rights vary from country to country, but they tend to include a right to paid sick and maternity leave, a right to a set number of hours, and protection from unfair and/or sudden dismissal. But these rights only apply if you are an employee. And, increasingly, many workers are not.

In many nail salons, technicians are technically independent contractors. This makes life much easier for the employers: the inherent risk of running a company based on consumer demand is passed on to workers, who have no guaranteed hours and no job security. Not enough customers today? Don’t come in and don’t get paid. Minor accident? You’re out of here, and forget about redundancy pay.

Nail salons are the tip of an extremely poorly regulated iceberg when it comes to employers exploiting loopholes in employment law. Zero-hours contracts, short-term contracts, and employment through an agency are all part of the Silicon Valley-built “gig economy.” But the gig economy is in fact often no more than a way for employers to get around basic employee rights. Casual contracts create a vicious cycle: the rights are weaker to begin with, which makes workers reluctant to fight for the ones they do still have. And so those get bent too.

Naturally, the impact of what the International Trade Union Confederation (ITUC) has termed the “startling growth” of precarious work has barely been gender-analyzed. The ITUC reports that its feminized impact is “poorly reflected in official statistics and government policies,” because the “standard indicators and data used to measure developments on labor markets” are not gender-sensitive, and, as ever, data is often not sex-disaggregated, “making it sometimes difficult to measure the overall numbers of women.” There are, as a result, “no global figures related to the number of women in precarious work.”

But the regional and sector-specific studies that do exist suggest “an overrepresentation of women” in precarious jobs. A Harvard study on the rise of “alternative work” in America between 2005 and 2015 found that the percentage of women in such work “more than doubled,” meaning that “women are now more likely than men to be employed in an alternative work arrangement.”

This is a problem because while precarious work isn’t ideal for any worker, it can have a particularly severe impact on women. For a start, it is possible that it is exacerbating the gender pay gap: in the U.K. there is a 34% hourly pay penalty for workers on zero-hours contracts, a 39% hourly pay penalty for workers on casual contracts, and a 20% pay penalty for agency workers — which are on the increase as public services continue to be outsourced. But no one seems interested in finding out how this might be affecting women.

There is, to begin with, “limited scope for collective bargaining” in agency jobs. This is a problem for all workers, but especially for women, because evidence suggests that collective bargaining (as opposed to individual salary negotiation) could be particularly important for them. As a result, an increase in jobs like agency work that don’t allow for collective bargaining might be detrimental to attempts to close the gender pay gap.

But the negative impact of precarious work on women isn’t just about unintended side effects. It’s also about the weaker rights that are intrinsic to the gig economy. In the U.K., a female employee is only entitled to maternity leave if she is actually an employee. If she’s a “worker,” that is, someone on a short-term or zero-hours contract, she isn’t entitled to any leave at all, meaning she would have to quit her job and reapply after she’s given birth.

Another major problem with precarious work that disproportionately impacts female workers is unpredictable, last-minute scheduling. Women still do the vast majority of the world’s unpaid care work and, particularly when it comes to childcare, this makes irregular hours extremely difficult. The scheduling issue is being made worse by gender-insensitive algorithms. A growing number of companies use “just-in-time” scheduling software, which uses sales patterns and other data to predict how many workers will be needed at any one time. These programs also respond to real-time sales analyses, telling managers to send workers home when consumer demand is slow.
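
To make that mechanism concrete, here is a minimal, hypothetical sketch of the kind of logic such scheduling tools apply; the productivity target, forecast figures, and function names are all assumptions for illustration, not details of any vendor’s product.

```python
# Hypothetical sketch: turn an hourly sales forecast into a staffing plan,
# scheduling fewer workers whenever forecast demand dips (the pattern
# described above). All numbers are illustrative.
import math

SALES_PER_WORKER_HOUR = 400  # assumed revenue one worker can handle in an hour

def workers_needed(forecast_sales: float) -> int:
    """Schedule just enough staff to cover the forecast for one hour."""
    return max(1, math.ceil(forecast_sales / SALES_PER_WORKER_HOUR))

hourly_forecast = {"09:00": 350, "12:00": 1900, "15:00": 600, "18:00": 1400}
for hour, sales in hourly_forecast.items():
    print(f"{hour}: schedule {workers_needed(sales)} workers")
```

Note how the plan changes from hour to hour; it is exactly that volatility, pushed onto workers’ schedules, that the passage above criticizes.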

It’s a system that works great for the companies that use the software to boost profits by shifting the risks of doing business onto their workers, and the increasing number of managers who are compensated on the efficiency of their staffing. It feels less great, however, for the workers themselves, particularly those with caring responsibilities. The introduction of Big Data into a world full of gender data gaps can magnify and accelerate already-existing discriminations: whether its designers didn’t know or didn’t care about the data on women’s unpaid caring responsibilities, the software has clearly been designed without reference to them.

The work that (mainly) women do (mainly) unpaid, alongside their paid employment, is not an optional extra. This is work that society needs to get done. And getting it done is entirely incompatible with just-in-time scheduling designed entirely without reference to it. Which leaves us with two options: either states provide free, publicly funded alternatives to women’s unpaid work, or they put an end to just-in-time scheduling.


A woman doesn’t need to be in precarious employment to have her rights violated. Women on irregular or precarious employment contracts have been found to be more at risk of sexual harassment (perhaps because they are less likely to take action against a colleague or employer who is harassing them), but as the #MeToo movement washes over social media, it is becoming increasingly hard to escape the reality that it is a rare industry in which sexual harassment isn’t a problem.

As ever, there is a data gap, with official statistics extremely hard to come by. The UN estimates (estimates are all we have) that up to 50% of women in EU countries have been sexually harassed at work. The figure in China is thought to be as high as 80%. In Australia a study found that 60% of female nurses had been sexually harassed.

The extent of the problem varies from industry to industry. Workplaces that are either male-dominated or have a male-dominated leadership are often the worst for sexual harassment. A 2016 study by the TUC found that 69% of women in manufacturing and 67% of women in hospitality and leisure “reported experiencing some form of sexual harassment” compared to an average of 52%. A 2011 U.S. study similarly found that the construction industry had the highest rates of sexual harassment, followed by transportation and utilities. One survey of senior level women working in Silicon Valley found that 90% of women had witnessed sexist behavior; 87% had been on the receiving end of demeaning comments by male colleagues; and 60% had received unwanted sexual advances. Of that 60%, more than half had been propositioned more than once, and 65% had been propositioned by a superior. One in three women surveyed had felt afraid for her personal safety.



Some of the worst experiences of harassment come from women whose work brings them into close contact with the general public. In these instances, harassment all too often seems to spill over into violence.

If incidents of physical violence aren’t a regular concern at your place of work, be grateful that you’re not a health worker. Research has found that nurses are subjected to “more acts of violence than police officers or prison guards.” A recent U.S. study similarly found that “health care workers required time off work due to violence four times more often than other types of injury.”

Following the research he conducted with fellow occupational health researcher Margaret Brophy, Jim Brophy concluded that the Canadian health sector was “one of the most toxic work environments that we had ever seen.” One worker recalled the time a patient “got a chair above his head,” noting that “the nursing station has been smashed two or three times.” Other patients used bed pans, dishes, even loose building materials as weapons against nurses.

But despite its prevalence, workplace violence in health care is “an underreported, ubiquitous, and persistent problem that has been tolerated and largely ignored.” This is partly because the studies simply haven’t been done. According to the Brophys’ research, prior to 2000, violence against health care workers was barely on the agenda: when in February 2017 they searched Medline for “workplace violence against nurses” they found “155 international articles, 149 of which were published from 2000 to the time of the search.”

But the global data gap when it comes to the sexual harassment and violence women face in the workplace is not just down to a failure to research the issue. It’s also down to the vast majority of women not reporting. And this in turn is partly down to organizations not putting in place adequate procedures for dealing with the issue. Women don’t report because they fear reprisals and because they fear nothing will be done — both of which are reasonable expectations in many industries. “We scream,” one nurse told the Brophys. “The best we can do is scream.”

The inadequacy of procedures to deal with the kind of harassment that female workers face is itself likely also a result of a data gap. Leadership in all sectors is male-dominated and the reality is that men do not face this kind of aggression in the way women do. And so, many organizations don’t think to put in procedures to deal adequately with sexual harassment and violence. It’s another example of how much a diversity of experience at the top matters for everyone — and how much it matters if we are serious about closing the data gap.

Women have always worked. They have worked unpaid, underpaid, underappreciated, and invisibly, but they have always worked. But the modern workplace does not work for women. From its location, to its hours, to its regulatory standards, it has been designed around the lives of men and it is no longer fit for purpose. The world of work needs a wholesale redesign — of its regulations, of its equipment, of its culture — and this redesign must be led by data on female bodies and female lives. We have to start recognizing that the work women do is not an added extra, a bonus that we could do without: women’s work, paid and unpaid, is the backbone of our society and our economy. As we enter a new decade, it’s about time we started valuing it.

Text adapted from Caroline Criado Perez’s Invisible Women, published by Abrams Press.

How the Finance Industry Fueled Four Decades of Inequality in America

Starting in the ’80s, the rise of finance set forces in motion that have reshaped the economy
Photo: The Washington Post/Getty Images

Coauthored with Megan Tobias Neely, Postdoctoral Fellow in Sociology at The Clayman Institute for Gender Research at Stanford University.

These days, finance is so fundamental to our everyday lives that it is difficult to imagine a world without it. But until the 1970s, the financial sector accounted for a mere 15 percent of all corporate profits in the US economy. Back then, most of what the financial sector did was simple credit intermediation and risk management: banks took deposits from households and corporations and loaned those funds to homebuyers and businesses. They issued and collected checks to facilitate payment. For important or paying customers, they provided space in their vaults to safeguard valuable items. Insurance companies received premiums from their customers and paid out when a costly incident occurred.

By 2002, the financial sector had tripled, coming to account for 43% of all the corporate profits generated in the U.S. economy. These profits grew alongside increasingly complex intermediations such as securitization, derivatives trading, and fund management, most of which take place not between individuals or companies, but between financial institutions. What the financial sector does has become opaque to the public, even as its functions have become crucial to every level of the economy.

And as American finance expanded, inequality soared. Capital’s share of national income rose alongside compensation for corporate executives and those working on Wall Street. Meanwhile, among full-time workers, the Gini index (a measure of earnings inequality) increased 26%, and mass layoffs became a common business practice instead of a last resort. All these developments amplified wealth inequality, with the top 0.1% of U.S. households coming to own more than 20% of the entire nation’s wealth — a distribution that rivals the dominance of the robber barons of the Gilded Age. When the financial crisis of 2008 temporarily narrowed the wealth divide, monetary policies adopted to address it quickly resuscitated banks’ and affluent households’ assets but left employment tenuous and wages stagnant.
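
For readers unfamiliar with the metric, here is a minimal, hypothetical sketch of how a Gini coefficient can be computed from individual earnings; the formula is the standard one, but the income figures below are invented for illustration and are not the data behind the 26% figure cited above.

```python
# Minimal sketch: the Gini coefficient runs from 0 (everyone earns the same)
# to 1 (one person earns everything). All earnings below are illustrative.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Closed form for sorted data: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([40_000, 42_000, 45_000, 48_000, 50_000]))   # fairly equal  -> ~0.05
print(gini([20_000, 25_000, 30_000, 60_000, 250_000]))  # more unequal -> ~0.51
```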

The past four decades of American history have therefore been marked by two interconnected, transformative developments: the financialization of the U.S. economy and the surge in inequality across U.S. society.

The rise of finance has expanded inequality since 1980 through three interrelated processes. First, it generated new intermediations that extract resources from the productive sector and households and channel them to the financial sector without providing commensurate economic benefits. Second, it undermined the postwar accord between capital and labor by reorienting corporations toward financial markets and weakening their direct dependence on labor. And third, it created a new risk regime that transfers economic uncertainties from firms to individuals, which in turn increases household demand for financial services.


Economic rents

Where do all the financial sector’s profits come from? Most of the revenue for banks used to be generated by interest. By paying depositors lower interest rates than they charged borrowers, banks made profits in the “spread” between the rates. This business model began to change in the 1980s as banks expanded into trading and a host of fee-based services such as securitization, wealth management, mortgage and loan processing, service charges on deposit accounts (e.g., overdraft fees), card services, underwriting, mergers and acquisitions, financial advising, and market-making (e.g., IPOs [initial public offerings]). Altogether, these comprise the services that generate non-interest revenue.
 
Figure 1. Note: The sample includes all FDIC-insured commercial banks. Source: Federal Deposit Insurance Corporation Historical Statistics on Banking Table CB04

Figure 1 presents non-interest revenue as a percentage of commercial banks’ total revenue. Non-interest income constituted less than 10% of all revenue in the early 1980s, but its importance escalated and its share of income rose to more than 35% in the early 2000s. In other words, more than a third of all the bank revenue — particularly large banks’ revenue — is generated today by non-traditional banking activities. For example, JPMorgan Chase earned $52 billion in interest income but almost $94 billion in non-interest income right before the 2008 financial crisis. Half was generated from activities such as investment banking and venture capital, and a quarter from trading. In 2007, Bank of America earned about 47% of its total income from non-interest sources, including deposit fees and credit card services.
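
As a rough illustration of the shift the figure describes, here is a minimal sketch contrasting the traditional “spread” model with fee-based, non-interest revenue; every balance, rate, and fee below is an invented placeholder, not an FDIC or JPMorgan number.

```python
# Illustrative sketch of the two revenue models discussed above.
# All balances, rates, and fees are made-up placeholders, not bank data.

# Traditional model: profit from the "spread" between loan and deposit rates.
deposits = 1_000_000_000      # funds taken in from depositors
deposit_rate = 0.01           # interest paid to depositors
loan_rate = 0.05              # interest charged to borrowers
spread_income = deposits * (loan_rate - deposit_rate)

# Fee-based model: non-interest revenue from services like those listed above.
fee_income = sum({
    "deposit_account_and_overdraft_fees": 5_000_000,
    "card_services": 6_000_000,
    "securitization_and_underwriting": 7_000_000,
    "wealth_management": 4_000_000,
}.values())

total_revenue = spread_income + fee_income
print(f"Non-interest share of revenue: {fee_income / total_revenue:.0%}")
# With these placeholder numbers the fee share lands near the ~35% level
# that Figure 1 reports for the early 2000s.
```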

The ascendance of the new banking model led to a significant transfer of national resources into the financial sector, in terms of not only corporate profits but also its elite employees’ compensation. Related industries, such as legal services and accounting, also benefited from the boom. However, whether these non-interest activities actually created value commensurate with their costs has been questioned, particularly when the sector has been dominated by only a handful of banks. In a way, these earnings could be considered economic rents — excessive returns without corresponding benefits.
The capital-labor accord

Besides extracting resources from the economy into the financial sector, financialization undermined the capital-labor accord by orienting non-financial firms toward financial markets. The capital-labor accord refers to an agreement and a set of production relations institutionalized in the late 1930s. The accord assigned managers full control over enterprise decision-making, and, in exchange, workers were promised real compensation growth linked to productivity, improved working conditions, and a high degree of job security. This agreement was reinforced by New Deal labor reforms such as unemployment insurance, the formal right to collective bargaining, maximum work hours, and minimum wages. As a result, for most of the 20th century, labor was considered a crucial driver for American prosperity. Its role, however, has been marginalized as corporations increasingly attend to the demands of the stock market.

To maximize the returns to their shareholders, American firms have adopted wide-ranging cost-cutting strategies, from automation to offshoring and outsourcing. Downsizing and benefit reductions are common ways that companies trim the cost of their domestic workforce. Many of these strategies are advocated by financial institutions, which earn handsome fees from mergers and acquisitions, spinoffs, and other corporate restructuring.

As non-financial firms expanded their operations to become lenders and traders, they came to earn a growing share of their profits from interest and dividends. The intensified foreign competition in the 1970s, combined with deregulated interest rates in the 1980s, drove this diversion, with large U.S. non-finance firms shifting investments from production to financial assets. Instead of targeting the consumers of their manufacturing or retail products to raise profits and reward workers, these firms extended their financial arms into leasing, lending, and mortgage markets to raise profits and reward shareholders.
 
Figure 2. Note: Financial assets include investments in governmental obligations, tax-exempt securities, loans to shareholders, mortgage and real estate loans, and other investments, but do not include cash and cash equivalents. Financial corporations include credit intermediation, securities, commodities, and other financial investments, insurance carriers, other financial vehicles and investment companies, and holding companies. Source: Internal Revenue Service Corporation Complete Report Table 6: Returns of Active Corporations

Figure 2 shows the amount of financial assets owned by U.S. corporations as a percentage of their total assets. Financial assets here consist of treasury, state, and municipal bonds, mortgages, business loans, and other financial securities. In theory, financial holding is countercyclical, meaning that firms hold more financial assets during economic contractions and then invest these savings in productive assets during economic booms. However, there has been a secular upsurge in financial holding since the 1980s, from about 35% of their total assets to more than half. Even when we remove financial corporations from the picture, we see a rise in financial holding from under 15% to more than 30% in the aftermath of the recession. Again, as American corporations shift their focus from productive to financial activities, purchasing financial instruments instead of stores, plants, and machinery, labor no longer represents a crucial component in the generation of profits, and the workers who perform productive tasks are devalued.

In addition to marginalizing labor, the rise of finance pushed economic uncertainties traditionally pooled at the firm level downward to individuals. Prior to the 1980s, large American corporations often operated in multiple product markets, hedging the risk of an unforeseen downturn in any particular market. Lasting employment contracts afforded workers promotion opportunities, health, pension, and other benefits, unaffected by the risks the company absorbed. Since the 1980s, fund managers have instead pressured conglomerates to specialize only in their most profitable activities, pooling risk at the fund level, not at the firm level. Consequently, American firms have become far more vulnerable to sudden economic downturns. To cope with that increased risk, financial professionals advised corporations to reconfigure their employment relationships from permanent arrangements to ones that emphasize flexibility — the firm’s flexibility, not the employees’. Workers began to be viewed as independent agents rather than members or stakeholders of the firm. As more and more firms adopt contingent employment arrangements, workers are promised low minimum hours but are required to be available whenever they are summoned.

The compensation principle shifted, too, from a fair-wage model that sustains long-term employment relationships to a contingent model that ties wages and employment to profits (meaning more workers are involved in productivity pay schemes than they realize; should their portion of the company lag in profits, their job, not just their compensation, is on the line). Retirement benefits also transformed from guarantees of financial security to ones dependent on the performance of financial markets. Of course, this principle mostly benefits high-wage workers who can afford the fluctuations. Many low-wage workers, not knowing how many hours they will work and how much pay they will receive, are forced to borrow to meet their short-term needs.
Atomized risk regime

The dispersion of economic risks and the widening labor market divide are reflected in the growing consumption of financial products at the household level. As defined-contribution plans gradually replaced defined-benefit pensions as the default benefit in the private sector, mutual funds and retirement accounts flourished. This new retirement system allows workers to carry benefits over as they move across different employers (helpful when jobs are evermore precarious), but it ties their economic prospects to the fluctuation of financial markets. Families became responsible for making investment decisions and securing retirement funds for themselves.

Retirement in the United States, thus, is no longer an age but a financial status. Many middle-class families have had to cash out their retirement accounts to cover emergency expenses. Many others fear that they cannot afford to exit the workforce when the time comes. And these are the lucky ones.



About half of American workers have neither defined-benefit nor defined-contribution plans; that rate declines to a third among millennials. Affluent families, who allocate an increasing proportion of their wealth to financial assets, benefit, since they have sufficient resources to buffer downturns and can gain substantially from financialization. Still, the only sure winners are financial advisors and fund managers, who charge a percentage of these savings annually, without having to pay out when there are great losses.

The expansion of credit is supposed to narrow consumption inequality across households and smooth volatility across the life course. Instead, it, too, adds to economic uncertainty. The debate about whether Americans borrow too much obscures the reality that the consequences of debt vary dramatically across the economic spectrum (as well as by race and gender). The abundance of credit provides affluent families the opportunity to invest or meet short-term financial needs at low cost. At the same time, middle-income households carry increasingly heavy debt burdens, curtailing their ability to invest and save, and low-income households are either denied credit or face enormously high borrowing rates that go beyond preventing savings to imprison the impoverished in a cycle of debt payments.

More and more Americans are in the last category: unable to service their obligations (that is, to pay the bills on their debts), an increasing number of families have become insolvent, owning less than they owe. The credit market has been revealed as a regressive system of redistribution benefiting the rich and devastating the poor.

In this atomized risk regime, financial failure is attributed to individuals’ lack of morality or sophistication. Outside academic and leftist political circles, few question the overwhelming demand for toxic financial products such as payday loans, let alone the creation of those products. Instead, everyday workers are urged to educate themselves about the market, enhancing their financial literacy. “Financial inclusion” has become the buzzword of the day. Financial self-help books like the perennial best-sellers Rich Dad, Poor Dad and Secrets of the Millionaire Mind fly off the shelves, while entire governmental agencies and public outreach programs are established to promote the “savvy” use of financial products.

Taken together, these developments show that rising inequality in the United States is not a “natural” result of apolitical technological advancement and globalization. Economic inequality is not a necessary price we need to pay for economic growth. Instead, the widening economic divide reflects a deeper transformation of how the economy is organized and how resources are distributed.
From Divested by Ken-Hou Lin and Megan Tobias Neely. Copyright © 2020 by the authors and reprinted by permission of Oxford University Press. A full set of citations for the facts and figures in this excerpt can be found in the full text.

Walrus shortage may have caused collapse of Norse Greenland
Communities vanished in 15th century after walrus hunted to near extinction, study finds

Agence France-Presse
Mon 6 Jan 2020
 
Norse communities hunted walruses for their tusks, a valuable medieval commodity. Photograph: Joel Garlich-Miller/AP

The mysterious disappearance of Greenland’s medieval Norse society in the 15th century came after walruses were hunted almost to extinction, researchers have said.

Norse communities thrived for more than 400 years in the Arctic, hunting walruses for their tusks, a valuable medieval commodity.

But a mixture of overexploitation and economic pressure from a flood of elephant ivory into European markets in the 13th century contributed to their downfall, according to a study.

A team of researchers from the universities of Cambridge, Oslo and Trondheim examined pre-1400s walrus tusk artefacts from across Europe and found almost all of them came from walruses hunted in seas only accessible to Greenland Norse communities.

They also found that later items came from smaller animals – likely females and infants – signalling that stocks were rapidly dwindling.

James Barrett from Cambridge University’s archaeology department said: “Norse Greenlanders needed to trade with Europe for iron and timber, and mainly had walrus products to export in exchange.



“Norse hunters were forced to venture deeper into the Arctic Circle for increasingly meagre ivory harvests.” As walrus populations declined, so did the Norse communities.

The authors of the study, published in the journal Quaternary Science Reviews, said there were likely to have been other factors that contributed to the eventual disappearance of Norse Greenlanders.

These include climate change as the northern hemisphere underwent a “little ice age”, and unsustainable farming techniques.

Bastiaan Star of Oslo University said: “If both the population and price of walrus started to tumble, it must have badly undermined the resilience of the settlements. Our study suggests the writing was on the wall.”


The Coming Climate Crisis

The Little Ice Age could offer a glimpse of our tumultuous future.




Firefighters try to control a blaze as it spreads toward the towns of Douglas City and Lewiston in California on July 31, 2018. (Mark Ralston/AFP/Getty Images)

Over the last couple of decades, as the impact of global warming has intensified, the discussion of climate change has spilled out of the scientific and technocratic circles within which it was long confined. Today, the subject has also become an important concern in the humanities and arts.
Discussions of climate tend to focus on the future. Yet even scientific projections depend crucially on the study of the past: Proxy data, such as tree rings, pollen deposits, and ice cores, have proved indispensable for the modeling of the future impact of climate change. Based on evidence of this kind, scientists can tell us a great deal about how trees, glaciers, and sea levels will respond to rising temperatures.
But what about the political and social impact of global warming? What effects might a major shift in climate have on governments, public institutions, warfare, and belief systems? For answers to these questions, we have to turn to history (keeping in mind that historical inferences are necessarily impressionistic).
Of course, there has never been anything directly comparable to the current cycle of human-induced global warming. But there have been several periods, now intensely studied by historians, during which climate has drastically shifted, either locally or globally.
Perhaps the most intensively researched of these periods is the Little Ice Age, which reached its peak between the late 15th and early 18th centuries. This early modern era is of particular interest because some of the most important geopolitical processes of our own time trace back to it. This was the period, for example, when the first stages of globalization were inaugurated. It was also in this period that great-power conflicts began to be conducted on a global scale. The struggles for supremacy among the Spanish, Dutch, and British that unfolded during the Little Ice Age were thus the precursors of the strategic rivalries of the 20th and 21st centuries.
During part of the Little Ice Age, decreased solar irradiance and increased seismic activity resulted in temperatures that, as Geoffrey Parker writes in Global Crisis, a groundbreaking global history of the period, were “more than 1 [degree Celsius] cooler than those of the later twentieth century.”
The current cycle of human-induced global warming is likely to lead to a much greater climatic shift than that of the Little Ice Age. What is striking, then, is the sheer magnitude of the ecological, social, and political upheavals of the era.
Droughts struck many parts of the world—including Mexico, Chile, the Mediterranean Sea basin, west and central Africa, India, China, and Indonesia—frequently bringing famine in their wake. These disasters were often accompanied by mass uprisings, rebellions, and war. England endured the greatest internal upheaval in its history, Europe was convulsed by the Thirty Years’ War, and China was torn by decades of strife following the overthrow of the Ming dynasty. Ottoman Turkey, Mughal India, and the Russian and Spanish empires were all shaken by rebellions. And from England to China, millenarian sects sprang up, seized by visions of apocalypse.
Parker estimates that in the 17th century “more wars took place around the world than in any other era.” So terrible was the devastation that contemporary observers around the world produced similar records of famine, plague, and death. One French abbess, for example, believed that the global population declined by a third.
But some states still thrived, most notably the Dutch Republic, which became the world’s preeminent naval and financial power. According to Dagomar Degroot, the author of The Frigid Golden Age, the Dutch owed their success in no small part to their flexibility in adapting to the changed environmental conditions of the period. Moreover, the Dutch status as an emergent power gave them an advantage in relation to the Spanish empire, which was weighed down by its size and historical legacy.
What lessons can be drawn from this history for our own time?
The first is that the sensitivity of human societies to climatic factors may exceed all expectations. Climate-related conflicts and displacements are already changing the political complexion of many of the world’s most important countries, most notably in Europe. Ten years ago, few would have predicted the extent to which immigration would become the spark for political upheavals across Europe and the Americas.
Second, the history of the Little Ice Age suggests that, apart from catalyzing all manner of political and economic crises, a major climatic shift would also affect the global order, favoring those who are best able to adapt to changing conditions. Whether these conditions favor emergent powers will depend on the degree to which the status quo powers of our time are impeded by their historical legacy, as the Spanish empire was.
In this way, the legacies of the carbon economy may themselves prove to be major impediments. Fossil fuels are much more than mere sources of energy; they have also engendered a wide array of cultural and social practices. Fossil fuel use has shaped the physical, cultural, and imaginative landscapes of the United States, Canada, and Australia to such a degree that significant sections of their populations remain psychologically and politically resistant to recognizing changing environmental realities.
Similarly, fossil fuels—oil and natural gas in particular—have shaped the United States’ strategic commitments in ways that may also hinder its ability to adapt. One example of this is the long-standing U.S. alliance with Saudi Arabia, which has proved as much a constraint as an asset, especially regarding a transition to renewable energy.
To the same degree that these legacy commitments serve to impede the adaptive abilities of the United States (and the West in general), they also serve as incentives for emergent powers to adapt as quickly as possible. For Beijing, a transition from fossil fuels to renewable energy is desirable not only for ecological and economic reasons but also because it could effectively set China free from an energy regime in which the rules were largely set by Western powers and their allies.
There are, of course, very significant limits to what can be extrapolated from history, not least because the great powers of the past did not possess weapons that could destroy the (human) world many times over. The crucial question for the future is whether the established and emergent powers of our time will be able to manage their rivalries even as their own populations become increasingly subject to the disruptive and destabilizing effects of climate change. If not, then human beings could bring about a catastrophe that would far exceed anything wrought by the warming of the planet.
This article originally appeared in the Winter 2019 issue of Foreign Policy magazine.



Amitav Ghosh is the author of The Great Derangement: Climate Change and the Unthinkable. Twitter: @GhoshAmitav

Amitav Ghosh is best known for his intricate works of historical fiction, often set in or around his native India. But his 2016 book, The Great Derangement, is a searing piece of nonfiction that questions why writers and artists consistently fail to use environmental disasters as centerpieces in their stories. Ghosh blames these omissions for the lack of public will to confront climate change—a point he tirelessly reiterates in speeches around the world.
WORLD CHANGING IDEAS
The U.S. just set a new record for the longest time without a federal minimum wage increase




[Source Photo: Neonbrand/Unsplash]

BY EILLIE ANZILOTTI 
6-17-19

Nine years, 10 months, three weeks, and three days. That’s exactly how long, as of June 16, it’s been since the federal minimum wage last budged. It’s a new record for the amount of time the minimum wage has been stagnant, edging out the last dry spell, which lasted from September 1997 to July 2007.


When the federal minimum wage was last raised in 2009, it went up to its current threshold of $7.25 an hour (adjusted for inflation, it’s now worth less than it was in 1950). In contrast, 29 states have now enacted minimum wages higher than the federal mandate: These moves range from New Mexico, whose $7.50 minimum wage is a mere 25 cents higher than the federal rate, to California, which is phasing in a $15 wage floor.

Many companies, most notably Amazon, have also enacted a $15 minimum wage for their workers. More than 700 businesses across the country, from small locally owned shops to regional and national chains, have signed onto Business for a Fair Minimum Wage’s statement supporting a federal wage hike to $15 per hour by 2024, which is what the proposed congressional Raise the Wage Act would do.

Even though Holly Sklar, CEO of the organization Business for a Fair Minimum Wage, applauds businesses that have enacted a higher wage floor independently of a change in the federal minimum wage, she says it’s not enough: “You really need a decent federal floor, so the floor is something higher to build on.” It goes back, she says, to the whole reason the minimum wage was enacted in the first place, in 1938. In response to the Great Depression, the government recognized that it had an obligation not only to help people struggling with poverty for their own sake, but to support purchasing power in the country. “There was not enough purchasing power to buy the products that would create good jobs,” Sklar says.

Today, wage stagnation is creating a similar problem. Even enormously successful and wealthy business owners in the U.S. are now recognizing that wage inequality and low earnings are leading to a hollowing out of the economy, which could threaten their own livelihoods. As this situation persists, the economy runs the risk of grinding to a halt. It’s no wonder that hundreds of business owners feel a personal stake in this issue. “The minimum wage has always been tied into this notion of good for workers and good for business,” Sklar says.


ABOUT THE AUTHOR

Eillie Anzilotti is an assistant editor for Fast Company's Ideas section, covering sustainability, social good, and alternative economies. Previously, she wrote for CityLab.

AMELIORATING CAPITALISM

Making Stakeholder Capitalism a Reality
The recent push by big business in favor of a more socially and environmentally conscious corporate-governance model is not just empty rhetoric. With the public losing trust in business and markets, it is now in everyone’s interest to reform the system so that it delivers prosperity for the many, rather than the few.

Laura Tyson, Lenny Mendonca
Published on January 6, 2020
Photo by Shutterstock

BERKELEY – For a half-century, American corporations (and many others around the world) have embraced shareholder primacy, which holds that the only responsibility of business is to maximize profits. But this principle is now being challenged by corporate leaders themselves, with the United States Business Roundtable announcing last year that it would adopt a stakeholder approach that focuses not just on shareholders but also on customers, employees, suppliers and communities, all of which are deemed essential for business performance.


When U.S. business leaders join their global counterparts this month at the 50th-anniversary meeting of the World Economic Forum, they will discuss how to give concrete meaning to stakeholder capitalism, a concept first articulated by the WEF’s founder, Klaus Schwab, in the 1970s. In anticipation of this year’s gathering, Schwab has proposed a new “Davos Manifesto” that identifies potential tradeoffs between the interests of various stakeholders and looks for ways to reduce or eliminate them through the common goal of long-term value creation.

Critics have pounced on both the Business Roundtable and WEF statements, dismissing them as “empty rhetoric” or as something out of The Wizard of Oz. Others have decried the travesty of elites talking among themselves rather than with those they have harmed. Yet while some skepticism is justified, there are already promising signs that a change in corporate behavior is coming. Both the WEF and the Business Roundtable have begun to develop blueprints for implementing stakeholder capitalism.

Ultimately, long-term self-interest will drive corporate commitments. Customers are the engine of top-line revenue growth, and have always been recognized as essential business stakeholders. As more consumers have begun to seek out “green” goods and services, for example, companies have started investing in new growth opportunities geared toward sustainability.


Skilled employees are also essential corporate stakeholders, and with labor markets so tight, they are demanding fair compensation and benefits as well as upskilling and reskilling opportunities. They are also demanding transparency in pay and promotion practices, inclusive and diverse workplaces, and respect for human rights and the environment throughout company supply chains. A growing number of younger workers are seeking employers with both “purpose” and profits.

Meanwhile, investment motivated by environmental, social and governance concerns has risen to $30 trillion in recent years, and now accounts for one-third of funds under professional management. Pension funds, asset managers’ biggest clients, are increasingly using ESG rankings as a guide to portfolio decisions. There is mounting evidence that strong ESG performance correlates with higher equity returns from both a tilt and momentum perspective.


But perhaps the strongest motives behind the growing corporate support for stakeholder capitalism are the erosion of trust in business and the accompanying rise of populism. Citizens are lashing out at what they perceive to be a “rigged” economic system. Income and wealth inequality have surged in recent decades as middle-class incomes have stagnated and as labor markets have become more polarized. The 2008 financial crisis and its aftermath, along with the growing costs and urgency of climate change, have undermined public confidence in globalization and market capitalism.

These conditions are highly reminiscent of the period leading up to the Progressive Era of reform at the turn of the 20th century, when policymakers broke up monopolies, introduced new protections for natural resources and strengthened participatory democracy. Today, business leaders are understandably concerned that citizens will press for a new era of progressive policies that will curtail their freedom to operate. To head off or influence such efforts, companies are seeking ways to demonstrate their commitment to the countries, regions, and communities where they do business.

Hence, even if the only reason is self-preservation, the business community’s multi-stakeholder rhetoric is likely to be accompanied by real changes in corporate behavior. But self-imposed changes will not be enough. Government action will be necessary to ensure that democratic market capitalism remains politically and environmentally sustainable over the long run. Of particular importance are policies to encourage competition, combat climate change, contain inequality and bolster democratic institutions.


Policy action may appear impossible at the federal level in the U.S., where partisan and ideological divisions are paralyzing policymaking. But change is coming at the state level, not least in California. A new report from the UCLA Anderson School of Management shows that even with U.S. economic growth set to slow in 2020, California will continue to outpace the country as a whole.

At the same time, California is exceeding its aggressive climate-change goals and introducing new measures to leverage the state’s pension fund in the interest of climate mitigation. Building climate resilience is now an urgent priority. Recent wildfires and electricity shutdowns point to the need for significant investments in the state’s power grid and other critical infrastructure.

California is also leading the way with new legislation to support gig workers (a measure that is ambitious enough to have provoked a challenge in federal court). And, along with 23 other states, California has pushed through a major increase in the minimum wage. Now, much more work needs to be done to address rising homelessness and soaring housing costs, which are driven largely by the lack of new housing construction. Early steps by major corporations to address this problem are encouraging, but not yet scaled to the challenge.


Again, government policy will be essential. As the investor Warren Buffett recently pointed out, “The government has to play the part of modifying a market system.” History suggests that he is right. Earlier periods of populist insurgency have led to significant policy reforms that restored stability and trust in capitalism both in the U.S. and Europe.

As we head into 2020, it is clear that it will take the death of “shareholder-only capitalism” to renew capitalism for today’s political economy, which currently resembles a dangerous mix of 1920s capitalism and 1930s politics. Let’s hope we’ve learned the right lessons from the past. Now is the time to pursue a sustainable democratic capitalism that works for all stakeholders.

Laura Tyson, a former chair of the U.S. President’s Council of Economic Advisers in the Obama administration, is a professor at the University of California, Berkeley’s Haas School of Business, a senior adviser at the Rock Creek Group and a senior external adviser to the McKinsey Global Institute.

Lenny Mendonca is Chief Economic and Business Adviser and director of the California Office of Business and Economic Development.

© Project Syndicate, 2020.