Wednesday, August 21, 2024

Speculative Algorithms are the New Invisible Cage for Workers


 

It was barely a decade ago that many of us became enamored with the “gig” economy. Booking a room, ride, or restaurant took seconds and could be done at virtually any time or place.

A major factor enabling the gig economy’s growth was organizations’ use of algorithms to create these services. Organizations such as Airbnb, Uber, Upwork, and TaskRabbit used algorithms to build digital platforms that search, match, monitor, rate, and rank products, people, and places on a global scale.

For many organizations and users, algorithms provide breathtaking speed, scale, and efficiency. Algorithms, however, have always been an imperfect foundation for the gig economy’s growth.

Algorithms have been used for centuries, primarily in mathematical and engineering domains, to establish fixed, reproducible relationships between different variables. Think of the algorithm for calculating the area of a square or the volume of a sphere.
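
Those relationships are deterministic: the same inputs always produce the same outputs.

    area of a square:    A = s^2
    volume of a sphere:  V = (4/3) * pi * r^3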

On the other hand, in the gig economy, organizations are using what I call “speculative algorithms”: they use algorithms to establish fixed relationships for phenomena that are inherently subjective and changing. Think about a worker’s reputation or the quality of a restaurant. These are inherently subjective; people have different opinions based on many different factors.
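
As a purely illustrative sketch, a speculative algorithm might compress a freelancer’s “reputation” into a single number by weighting a few proxies. The inputs, weights, and scale below are invented for illustration and are not drawn from any real platform:

    # Hypothetical "reputation" score; the proxies and weights are invented
    # for illustration, not taken from any real platform.
    def reputation_score(avg_rating, completion_rate, response_hours):
        rating_part = avg_rating / 5.0                       # 1-5 star ratings, normalized
        completion_part = completion_rate                    # already between 0 and 1
        speed_part = max(0.0, 1.0 - response_hours / 24.0)   # faster replies score higher
        # Fixed weights turn subjective, shifting signals into one "definitive" number.
        return 0.5 * rating_part + 0.3 * completion_part + 0.2 * speed_part

However the weights are chosen, the output looks precise even though the judgments feeding it are not.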

Yet, in the gig economy, platforms and people often treat the output of speculative algorithms as definitive. A product with a three-“star” rating on Amazon will show up lower in search results, and few will stop to interrogate whether the rating is accurate or how the product actually differs from one rated higher or lower.

Compounding this, organizations have over time made speculative algorithms increasingly opaque to prevent people from gaming them. Almost every algorithmic system, from YouTube and TripAdvisor to eBay and Alibaba, has seen users unduly manipulate its algorithms. In response, many organizations have introduced more opacity into their algorithms: the inputs, processing, and outputs are increasingly obscured from users. While opacity may make algorithms harder to game, it also creates new problems.
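
To see why opacity compounds the problem, consider a hypothetical search-ranking sketch in which the weights are hidden from workers and can be changed by the platform at any time. The feature names and weights here are assumptions made up for illustration:

    # Hypothetical ranking of workers in search results; HIDDEN_WEIGHTS is
    # never shown to workers and can be changed without notice.
    HIDDEN_WEIGHTS = {"reputation": 0.7, "price_fit": 0.2, "recency": 0.1}

    def rank_workers(workers):
        # Each worker is a dict with the features named above; users only ever
        # see the final ordering, not the inputs or the weights.
        def score(worker):
            return sum(HIDDEN_WEIGHTS[key] * worker[key] for key in HIDDEN_WEIGHTS)
        return sorted(workers, key=score, reverse=True)

If the platform quietly updates HIDDEN_WEIGHTS, the same workers can be reordered overnight with no notice, explanation, or recourse.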

Inside the Invisible Cage

In my book Inside the Invisible Cage, I argue that organizations’ use of speculative, opaque algorithms on digital platforms, particularly online labor markets for high-skilled workers, is creating an invisible cage: an environment in which organizations embed the rules and guidelines for how workers should behave in opaque algorithms that shift without providing notice, explanation, or recourse for workers.

It is ‘invisible’ because organizations can use AI and algorithms to change the rules and criteria for success at an unprecedented speed and scale without notice or explanation. It is a ‘cage’ because these algorithms increasingly control our opportunities without our say.

For workers, the invisible cage is like playing a game where the rules keep shifting without warning. Except it is not a game. The platform’s algorithms predominantly control their ability to get jobs and how they appear in search results. As a result, many workers find themselves stuck, navigating a system that controls their professional fate in ways they can’t fully understand or predict.

The Invisible Cage Beyond the Gig Economy

I am increasingly observing the dynamics I documented in my book in areas beyond the gig economy. OpenAI, Google, and the like use speculative, opaque algorithms in ways that provide no explanation, recourse, or compensation to the people whose data they use to train generative artificial intelligence (AI) systems. These systems also largely fail to provide attribution or to convey the speculative nature of their outputs. Reports are already emerging about how easily people can use these systems to produce misinformation.

It is not just generative AI. Organizations are increasingly using opaque AI systems for hiring, evaluating, and choosing which workers to fire.

In 2018, reporters revealed that Amazon’s AI hiring system unfairly discriminated against women. Who knows how many women’s careers were and continue to be altered by such algorithms?

To be clear, it is an organizational choice to use these opaque systems. Organizations are not compelled to use algorithms and AI systems, nor are they required to make them opaque. Ultimately, the way algorithms are designed and implemented reveals what organizations choose to (de)value and (de)prioritize.

So where do we go from here? While there are many research-based insights into this question, there are also new initiatives attempting to redress the power and information asymmetries that favor platform companies and organizations. As one example, Fair.Work provides a research-based rating system that evaluates platform organizations on their policies and practices related to fair pay, fair conditions, fair contracts, fair management, and fair representation.

Suffice it to say, we need all hands on deck and multiple, simultaneous approaches to make sustainable changes to the way organizations design, develop, and implement algorithms in the gig economy and beyond. Policymakers, academics, workers, and consumers all have a role in securing more equitable outcomes so that the invisible cage can be dismantled.

This originally appeared on the UCPress blog and is reprinted here with permission.
