Monday, January 24, 2022

Is Corporate Personhood White?





The modern concept of “race” seems to rise above the cultural structures by which society organizes itself. Though that privilege was called into question by the suggestion that “race” is a “social construct,” it still remains to articulate the structure of that construct. For instance, the issue of whether “race” is a noun or a verb has yet to be debated. As a verb, it is an avatar for “to racialize,” which refers to what one group of people does to others. And that suggests that “race” cannot be understood apart from its history.

The issue we raise here is that the history of race cannot itself be divorced from the parallel development of the corporate structure. Is it possible that the corporate structure was actually midwife to the birth of race, whiteness, and white supremacy? Would that not suggest an intimate relation between corporate personhood and white racialized identity? How could that be? A relation between an artificial social structure and a form of human identity? What would the unwitting revelation be in such a relation?

The complete history interlinking “race” with the corporate structure is too complex for a small article like this. We can however get a taste of it by examining two points in that double trajectory: the birth of white racialized identity in the 17th century Virginia colony, and the 19th century invention of corporate personhood.

The discussion here will not be an anthropological one. Nothing that occurs with respect to the corporate structure has been unintentional, while anthropology pretends to study what evolves impersonally from a people. Conversely, to think that racism is simply an effect of prejudice or false consciousness is coherent only by ignoring the cultural effects of contextualizing social dynamics, such as colonialism, and the rise of the multinational corporation.

***

Return with me, please, to the Virginia colony, founded in 1607 by English nobles and a passel of English bond-laborers under contract. Administratively, the colony was organized as a corporation (“the Virginia Company”), directed by a Board of Directors in London, with an on-site subsidiary Board called the Colonial Council. The overall purpose of the enterprise was 1) to be profitable, and 2) to provide dividends for investors (that is, to be profitable). The responsibilities of the Virginia Board of Directors (Va BoD) were to parcel out the land, to find markets for what the colony produced, and to ensure the existence of sufficient labor for colony productivity.

The English laborers, held under indentured contracts, were essentially “chattel,” or “objects” owned for the term of their contracts. The typical contract held a person in thrall for 14 years, granting a small parcel of land and some money upon release. Whether the money and land were provided by the contractor (owner) or by the colony was a condition stated in the contract. Insofar as “chattel” signified that persons were “things,” they entered into bartering exchanges as economic value, and could be used also for payment, to cover the interest on debt, or to be given to someone as a gift, etc. The “sale” of a bond-laborer was accomplished by exchanging the contract for money. The buyer of the contract would thus become the new “owner” of the person.

At first, the colony came close to perishing, unable to figure out how to make this un-English land produce like English land. Though the Algonquian societies nearby were accomplished agrarians, the corporate BoD scorned them and their advice. Indigenous life did not correspond to BoD’s preconceptions of what the colony needed to do. As English people got sick and died, the colony raided the indigenous and stole food from them. In the corporate mind, there was an inherent right to seize the indigenous, their land, and their skill as corporate assets. Even today, that same arrogance infects US corporate relations with the rest of the world. Many Latin Americans migrate to the US because US corporations have impoverished their home countries, seizing their “assets” (resources, labor, productivity, financial reserves, etc.) through imposed debt or forced sale.

The first successful product of the colony was tobacco, a drug. (Cotton didn’t become a primary export crop until the invention of the cotton gin.) Tobacco became immediately profitable in Europe, and it came to serve as a form of colonial currency used for payment. It was the cultivation of tobacco that turned the colonial landholdings into plantations.

Tobacco production is labor-intensive. When a Dutch slave ship showed up in Jamestown in 1619 with 20 Africans unpurchased in the Caribbean Islands, the Va BoD paid for them and distributed them to the plantations.

At first, these and subsequent African arrivals were held under typical contract conditions, 14 years and out (with land and money). The Va BoD however decided that the Africans were not eligible for the “protections” of English law, and so they were not given contracts. Some were actually released after 14 years of labor. But the colony gradually ignored its own principles, and started to hold Africans for longer and longer periods. The reason for this was the exchange market for laborers. When bond-laborers under contract were sold, the proximity of their release date was a factor depressing their price. That decline in laborer value, which was accounted as estate wealth, could be increasingly ignored with respect to the Africans, who had no official release date. This neglect empowered the colonial tendency to impose perpetual servitude on the Africans, giving them a different form of asset value – though that tendency remained a contested issue well into the 1660s.

What solidified African bond-laborer existence as assets (tradable elements of structural wealth) was the auction market system. Transference of an English laborer was accomplished by transferring the contract. Since Africans didn’t have contracts, they could only be sold in person. As a marketplace developed, so auctions were institutionalized. They created a sense of “standard” value for different body types, in the same way that securities markets today establish asset values for the securities of different corporations. The existence of such standards meant that plantation owners could calculate their total estate value, and claim political advantages based on that (in colonial society, Virginia and Maryland by that time, political power accrued directly to wealth). Thus, under the force of economic and political power, the drive to hold all Africans in perpetual servitude was irresistible.

This historical process was quite distinct from that evolving in the Caribbean and South America with respect to enslavement. The Spanish and Portuguese plantation systems were based on royal military conquest, and worked laborers under an attrition system. It was cheaper to subject them to starvation conditions and replace them with new arrivals from the slave ships. It was a continuation of the initial colonization, which totally depleted indigenous populations on islands and areas of South America. It was the English labor contract system, originally using English bond-laborers, that set the English colonies on a different trajectory, and eventually produced the notion of person as a corporate asset.

The denial of legal rights to Africans was the first step toward the invention of “race.” The second step was the colony-wide shift of the labor force to Africans (after the 1650s), which augmented the distinction between colonial subject and colonial asset. It was that distinction from which a white identity first appeared in Virginia, in the 1690s. When the English first came to North America, they didn’t see themselves as white (racially). Their first official reference to themselves as white occurs in 1691, and then only as a form of social identity. It became a cultural (racialized) identity in the 1720s with the organization of the slave patrols, which raised violence against the black bond-laborers to the level of a norm. [This history is recounted in greater detail in my book, “The Rule of Racialization.”]

In short, it was the confluence of economic and political structures in the colony that produced the white racialized identity that then fostered its own culture of racialization. As a footnote to this process, and to the fact that it took the English almost a century to racially identify themselves as white: from the time the first Africans were introduced to the colony in 1619, and with the subsequent arrival of others, intermarriage was common. Apparently the Africans and English found each other fairly attractive. The landowners’ desire to set the Africans aside for perpetual servitude led to the passage of a string of anti-miscegenation laws, all of which failed miserably in their goal. In the process, the Va BoD went so far as to overturn the most basic European patriarchal norm, namely that children take the class status of the father. In 1662, totally in service to the augmentation of corporate asset values and plantation wealth, the Va BoD ruled that a child of racially mixed parentage would take the servitude status of the mother rather than the father. The child of a black woman would be black, and enslaved. This in turn led to severe penalties for a white woman who bore the child of a black man. When the anti-abortion movement seeks to legislate women’s relation to their child-producing capacity, it is following in the footsteps of the colonial development of enslavement.

As another footnote, the emergence of a white identity from a social distinction imposed on the Africans reveals that whiteness came first as a racial identity. It formed the basis on which certain European narcissistic taxonomists, such as Linnaeus or Buffon, thought to divide humans as a whole into races as subgroups. Their theorizations created a mindset of white race primacy and hegemony that then served to rationalize European conquest of others.

Corporate Personhood

Thus we see that, at the foundation of what eventually became the US, the corporate structure was already acting to change the concept of a person through its role in the emergence of “racial” difference. Let us move to a later moment when US jurisprudence decided that “personhood” itself could be changed to fit the corporate structure. It was a moment in proximity to that in which black people were denied any juridical claim to personhood itself.

In popular belief, the source of corporate personhood lies in an 1886 Santa Clara railroad case. That is a mistake. Its real source is to be found in another railroad case that came before the Supreme Court in 1844, called the Letson case. The chief justice on the court at that time was Roger Taney, whom we remember as the author of the Dred Scott decision in 1857. The latter decision, barring black people from US citizenship and personhood, and the former, granting those same designations to corporations, both emerged from the same mind, using parallel reasoning.

The Letson case arose because railroads were involved in interstate commerce, over which Congress was Constitutionally given jurisdiction. Problems arose because corporations were chartered by states, so court action over malfeasance in a state other than the home state of the enterprise could only be heard in federal court. For a corporation to be able to respond to such suits, it would have to have standing in the federal court.

Constitutionally, only “persons” had such recognition. For Taney, the needs of interstate commerce implied that, if a corporation was to have federal court standing, it had to have personhood. And that, in turn, implied federal citizenship. It was Chief Justice Taney’s concept of corporate personhood that more recently was the basis for the Supreme Court decision in the “Citizens United” case. That decision held that corporations had the right to participate in elections as persons. Though they could not vote as entities, they could express themselves (under the 1st Amendment) by other means, which included donating large sums of money.

When, in 1856, Mr. Dred Scott appeared before Taney’s Supreme Court, he was suing for his freedom. He had been taken to a free state from his home state of Missouri by his “owner.” While there, he was considered a free man, and had gotten married. While still under tutelage to his erstwhile owner, he returned to his “home state” and sought to live as a free man, having been recognized as one during his sojourn in the other state. When his erstwhile “owner” denied him that right, he sued. And the case went to the Supreme Court.

Once again, that structural linkage between the corporate structure and the structure of racialization raised its ugly head. Taney denied Scott his freedom using the same arguments he had used in the Letson case, only in reverse, or rather, turned upside down. Black people had never had standing or citizenship at the federal level, he claimed, and so could not claim personhood or freedom at the state level. Taney referred back to English colonialism, and to its reduction of black people to their status as assets, wealth, things (three-fifths, etc.) to be controlled by “persons” (white or artificial). Thus, while the difference between federal and state power was used to give corporations personhood, the same difference was used to deny black people their existence as persons.

As a Supreme Court decision, the case guaranteed the sovereignty of the slaveholding class, and put a huge crimp in the abolitionist movement of that time.

This quirk of dual power in the US enabled Taney to complete a transformation in jurisprudence that the corporate structure had initiated in 17th century Virginia. Today, at the core of US jurisprudence, and still at odds with the ostensible philosophical principles of the US, there is an acceptance of rule, control, and cultural hegemony by artificial (non-human) entities (the corporate structure) that takes its initial colonial establishment as its precedent. In a strong sense, the system later established by Jim Crow legislation both rendered black people non-persons as corporate assets and deputized all white people as a “patrol” or managerial class over them. And today, not as political representatives of a constituency, but as corporate entities acting in the name of political party competition, half the states of the US are working hard to obstruct black people’s voting or to deprive them of full voting rights.

As we watch oceans die, tornadoes arrive in teams, glaciers melt, and entire communities get destroyed in the interest of resource extraction, we can recognize that there is a destructive core to the corporate structure to which mere capitalism has never been able to lay claim. The primary difference between the two has been capitalism’s dependence on labor for its earnings, whereas, as we see more and more clearly every day, the corporate structure profits most from the “earnings” in securities trading, and the ever-rising prices of stocks and derivatives (as a financial reflection of corporate amassing of asset value). Capitalism needed to guarantee a certain level of survival for its labor forces, whereas the corporate structure can dispense with any community that gets in its way (witness that fracking is still going on in the US, destroying both geologies and ecologies).

Where real estate corporations today enrich themselves by buying houses (rented or not) as an increasing mass of assets (whose primary effect is to increase housing prices and rents), so colonial corporations enriched themselves by turning people into assets (which is still the primary effect of racialization).

Corporate personhood evolved in tandem with the structures of racialization, and the production of racialized groups. If “race” is the verb “to racialize,” as something that one group of people does to others, it means that black people were not born black but were made black by white supremacy, in the same sense that white people were not born white but were made white by white supremacy. And if the corporate structure is intimately involved in this process, it not only forms part of the structure that makes black people black in the process of making white people white; it also implies that corporate personhood is itself actually white.

As we seek to grasp the true meaning of our climate crisis, we must not neglect to recognize the cultural contribution of white colonialism and white supremacy to the pillaging of the planet and its people, through its corporate structure.

Steve Martinot is Instructor Emeritus at the Center for Interdisciplinary Programs at San Francisco State University. He is the author of The Rule of Racialization: Class, Identity, Governance, Forms in the Abyss: a Philosophical Bridge between Sartre and Derrida (both Temple) and The Machinery of Whiteness. He is also the editor of two previous books, and translator of Racism by Albert Memmi. He has written extensively on the structures of racism and white supremacy in the United States, as well as on corporate culture and economics, and leads seminars on these subjects in the Bay Area.

The Cold War Reborn and Resurgent


In the early 1960s, at the height of America’s original Cold War with the Soviet Union, my old service branch, the Air Force, sought to build 10,000 land-based nuclear missiles. These were intended to augment the hundreds of nuclear bombers it already had, like the B-52s featured so memorably in the movie Dr. Strangelove. Predictably, massive future overkill was justified in the name of “deterrence,” though the nuclear war plan in force back then was more about obliteration. It featured a devastating attack on the Soviet Union and communist China that would kill an estimated 600 million people in six months (the equivalent of 100 Holocausts, notes Daniel Ellsberg in his book, The Doomsday Machine). Slightly saner heads finally prevailed — in the sense that the Air Force eventually got “only” 1,000 of those Minuteman nuclear missiles.

Despite the strategic arms limitation talks between the U.S. and the Soviet Union, the dire threat of nuclear Armageddon persisted, reaching a fresh peak in the 1980s during Ronald Reagan’s presidency. At the time, he memorably declared the Soviet Union to be an “evil empire,” while nuclear-capable Pershing II and ground-launched cruise missiles were rushed to Europe. At that same moment, more than a few Europeans, joined by some Americans, took to the streets, calling for a nuclear freeze — an end to new nuclear weapons and the destabilizing deployment of the ones that already existed. If only…

It was in this heady environment that, in uniform, I found myself working in the ultimate nuclear redoubt of the Cold War. I was under 2,000 feet of solid granite in a North American Aerospace Defense Command (NORAD) post built into Cheyenne Mountain at the southern end of the Colorado Front Range that includes Pikes Peak. When off-duty, I used to hike up a trail that put me roughly level with the top of Cheyenne Mountain. There, I saw it from a fresh perspective, with all its antennas blinking, ready to receive and relay warnings and commands that could have ended in my annihilation in a Soviet first strike or retaliatory counterstrike.

Yet, to be honest, I didn’t give much thought to the possibility of Armageddon. As a young Air Force lieutenant, I was caught up in the minuscule role I was playing in an unimaginably powerful military machine. And as a hiker out of uniform, I would always do my best to enjoy the bracing air, the bright sunshine, and the deep blue skies as I climbed near the timberline in those Colorado mountains. Surrounded by such natural grandeur, I chose not to give more than a moment’s thought to the nightmarish idea that I might be standing at ground zero of the opening act of World War III.  Because there was one thing I knew with certainty: if the next war went nuclear, whether I was on-duty under the mountain or off-duty hiking nearby, I was certainly going to be dead.

Then came 1991 and the collapse of the Soviet Union. The Cold War was over! America had won! Rather than nightmares of the Red Storm Rising sort that novelist Tom Clancy had imagined or Hollywood’s Red Dawn in which there was an actual communist invasion of this country, we could now dream of “peace dividends,” of America becoming a normal country in normal times.

It was, as the phrase went, “morning again in America” — or, at least, it could have been. Yet here I sit, 30 years later, at sea level rather than near the timberline, stunned by the resurgence of a twenty-first-century version of anticommunist hysteria and by the idea of a new cold war with Russia, the rump version of the Soviet Union of my younger days, joined by an emerging China, both still ostensibly conspiring to endanger our national security, or so experts in and out of the Pentagon tell us.

Excuse me while my youthful 28-year-old self asks my cranky 58-year-old self a few questions: What the hell happened? Dammit, we won the Cold War three decades ago. Decisively so! How, then, could we have allowed a new one to emerge? Why would any sane nation want to refight a war that it had already won at enormous cost? Who in their right mind would want to hit the “replay” button on such a costly, potentially cataclysmic strategic paradigm as deterrence through MAD, or mutually assured destruction?

Meet the New Cold War – Same as the Old One

Quite honestly, the who, the how, and the why depress me. The “who” is simple enough: the military-industrial-congressional complex, which finds genocidal nuclear weapons to be profitable, even laudable. Leading the charge of the latest death brigade is my old service, the Air Force. Its leaders want new ICBMs, several hundred of them in fact, with a potential price tag of $264 billion, to replace the Minutemen that still sit on alert, waiting to inaugurate death on an unimaginable scale, not to speak of a global nuclear winter, if they’re ever launched en masse. Not content with such new missiles, the Air Force also desires new strategic bombers, B-21 Raiders to be precise (the “21” for our century, the “Raider” in honor of General Jimmy Doolittle’s morale-boosting World War II attack on Tokyo a few months after Pearl Harbor). The potential price tag: somewhere to the north of $200 billion through the year 2050.

New nuclear missiles and strategic bombers obviously don’t come cheap. Those modernized holocaust-producers are already estimated to cost the American taxpayer half-a-trillion dollars over the next three decades. Honestly, though, I doubt anyone knows the true price, given the wild cost overruns that seem to occur whenever the Air Force builds anything these days. Just look at the $1.7 trillion F-35 fighter, for example, where the “F” apparently stands for Ferrari or, if you prefer brutal honesty, failure.

The “how” is also simple enough. The vast military machine I was once part of justifies such new weaponry via the tried-and-true (even if manifestly false) tactics of the Cold War. Start with threat inflation. In the old days, politicians and generals touted false bomber and missile “gaps.” Nowadays, we hear about China building missile silos, as if these would pose a new sort of dire threat to us. (They wouldn’t, assuming that China is dumb enough to build them.) A recent New Yorker article on Iran’s ballistic missile program is typical of the breed. Citing a Pentagon estimate, the author suggests “that China could have at least a thousand [nuclear] bombs by 2030.” Egad! Be afraid!

Yet the article neglects to mention America’s overwhelmingly superior nuclear weapons and the actual number of nuclear warheads and bombs our leaders have at their disposal. (The current numbers: roughly 5,600 nuclear warheads for the U.S., 350 for China.) At the same time, Iran, which has no nuclear weapons, is nonetheless defined as a serious threat, “an increasingly shrewd rival,” in the same article. A “rival” – how absurd! A nation with no nukes isn’t a rival to the superpower that nuked Hiroshima and Nagasaki in 1945, killing 250,000 Japanese, and planned to utterly destroy the Soviet Union and China in the 1960s. Believe me, nobody, but nobody, rivals this country’s military when it comes to apocalyptic scenarios — and the mindset as well as the ability to achieve them.

On a nuclear spectrum, Iran poses no threat and China is readily deterred, indeed completely overmatched, just with the U.S. Navy’s fleet of Trident-missile-firing submarines. To treat Iran as a “rival” and China as a nuclear “near-peer” is the worst kind of threat inflation (and imagining nuclear war of any sort is a horror beyond all measure).

The “why” is also simple enough, and it disgusts me. Weapons makers, though driven by profit, pose as job-creators. They talk about “investing” in new nukes; they mention the need to “modernize” the arsenal, as if nuclear weapons have an admirable return on investment as well as an expiration date. What they don’t talk about (and never will) is how destabilizing, redundant, unnecessary, immoral, and unimaginably ghastly such weapons are.

Nuclear weapons treat human beings as matter to be irradiated and obliterated. One of the better cinematic depictions of this nightmare came in the 1991 movie Terminator II when Sarah Connor, who knows what’s coming, is helpless to save herself, much less the children on a playground, when the nukes start exploding. It’s a scene that should be seared into all our minds as we think about the hellish implications of the weapons the U.S. military is clamoring for.

In the late 1980s, when I was still in Cheyenne Mountain, I watched the tracks of Soviet nuclear missiles as they terminated at American cities. Sure, it only happened on screen in the missile warning center, driven by a scenario tape simulating an attack, but that was more than enough for me. Yet, today, my government is moving in a direction — both in funding the “modernization” of the American arsenal and in creating a new version of the Cold War of my Air Force days — that could once again make that old scenario tape I saw plausible in what remains of my lifetime.

Excuse me, but where has the idea of nuclear disarmament gone? A scant 15 years ago, old Cold War hands like Henry Kissinger, George Shultz, and Sam Nunn, joined by our “hope and change” president Barack Obama, promoted the end of nuclear terror through the actual elimination of nuclear weapons. But in 2010 Obama threw that possibility away in an attempt to secure Senate support for new strategic arms reduction talks with the Russians. Unsurprisingly, senators and representatives in western states like Wyoming and North Dakota, which thrive off Air Force bases that bristle with nuclear bombers and missiles, quickly abandoned the spirit of Obama’s grand bargain and to this day remain determined to field new nuclear weapons.

Not More, But No More

This country narrowly averted disaster in the old Cold War and back then we had leaders of some ability and probity like Dwight D. Eisenhower and John F. Kennedy. All this new cold war rhetoric and brinksmanship may not end nearly as well in a plausible future administration led, if not by Donald Trump himself, then by some self-styled Trumpist warrior like former Secretary of State Mike Pompeo or Senator Tom Cotton. They would, I suspect, be embraced by an increasing number of evangelicals and Christian nationalists in the military who might, in prophetic terms, find nuclear Armageddon to be a form of fulfillment.

Ironically, I read much of Red Storm Rising, Tom Clancy’s World War III thriller, in 1987 while working a midnight shift in Cheyenne Mountain. Thankfully, that red storm never rose, despite a climate that all too often seemed conducive to it. But why now recreate the conditions for a new red storm, once again largely driven by our own fears as well as the profit- and power-driven fantasies of the military-industrial-congressional complex? Such a storm could well end in nuclear war, despite pledges to the contrary. If a war of that sort is truly unwinnable, which it is, our military shouldn’t be posturing about fighting and “winning” one.

I can tell you one thing with certainty: our generals know one word and it’s not “win,” it’s “more.” More nuclear missiles. More nuclear bombers. They’ll never get enough. The same is true of certain members of Congress and the president. So, the American people need to learn two words, “no more,” and say them repeatedly to those same generals and their enablers when they come asking for almost $2 trillion for that nuclear modernization program of theirs.

In that spirit, I ask you to join a young Air Force lieutenant as he walks past Cheyenne Mountain’s massive blast door and down the long tunnel. Join him in taking a deep breath as you exit that darkness into clear crystalline skies and survey the city lights beneath you and the pulse of humanity before you. Another night’s duty done; another night that nuclear war didn’t come; another day to enjoy the blessings of this wonder-filled planet of ours.

America’s new cold war puts those very blessings, that wonder, in deep peril. It’s why we must walk ever so boldly out of tunnels built by fear and greed and never return to them. We need to say “no more” to new nuclear weapons and recommit to the elimination of all such weaponry everywhere. We had a chance to embark on such a journey 30 years ago in the aftermath of the first Cold War. We had another chance when Barack Obama was elected. Both times we failed.

It’s finally time for this country to succeed in something again — something noble, something other than the perpetuation of murderous war and the horrific production of genocidal weaponry.  After all, only fools replay scenarios that end in doomsday.

This column is distributed by TomDispatch.

William Astore is a retired lieutenant colonel (USAF) and professor of history. His personal blog is Bracing Views.

Take This Job and Shove It!: The Growing Revolt Against Work


 
January 21, 2022

 “Experience demonstrates that there may be a slavery of wages only a little less galling and crushing in its effects than chattel slavery, and that this slavery of wages must go down with the other”

-Frederick Douglass

“A worker is a part time slave”

-Bob Black

There is something very strange going on with the US economy, dearest motherfuckers, and you don’t have to look at the Dow Jones to see it. All you have to do is take a stroll down main street. Everywhere you look, on every storefront and shop window from your local Wendy’s to the bank, there are signs screaming ‘Help Wanted!’, ‘Part-time and Full-time Positions!’, ‘Jobs Available!’, ‘Seriously Dude, Fucking Work Here!’ Businesses of nearly every variety are practically begging for employees, but there are no employees to be found. The bean counters in the Federal Government have taken notice too. The latest job reports from the US Bureau of Labor Statistics reveal the sheer Grand Canyon magnitude of this thing in flashing red numbers. Over 20 million Americans quit their jobs in the second half of 2021, a record 4.5 million in November alone. That’s fucking holiday season! This is literally unlike anything we’ve ever seen before, and there are no signs of it slowing down in the foreseeable future. Some clever motherfucker has coined this dumbfounding phenomenon the Great Resignation. We have an economy drowning in job offerings, but no one wants to work, and all the experts seem beside themselves trying to explain why.

Far be it from me to call myself an expert. I don’t have any Ivy League degrees hanging over the credenza or a position at some smarmy Randian think tank, but I am one of these unemployed people these experts seem to be so mystified by and I do have a theory that might shine some light on their conundrum. Are you ready? Listen very carefully so as not to miss the subtle nuances of my argument. Work fucking blows! It sucks and plebeian scum like me don’t wanna live like that anymore and why the fuck would we? It’s not natural and it’s not fucking healthy, spending 80% of your life stewing in traffic jams, slaving behind deep fryers, and punching numbers into computers. We’re not descended from ants. People are monkeys. God designed us to eat, fuck, fight, shit, repeat, and we’re done with civilization’s fucking capitalist zoo.

We’ve all been duped into accepting wage slavery as the natural order of existence but even a cursory glance at history tells us that this is total bullshit. The modern concept of work as we know it is only a few centuries old. It’s a byproduct of the 16th Century Protestant Reformation and it wasn’t designed for productivity; it was designed as a means of social control. Those assholes taught us that working ourselves to death would bring us closer to God so they could keep us from jacking off. When the bank took the church’s place during the Industrial Revolution, the bourgeoisie simply replaced God with a dollar bill and used the death cult of the Protestant Work Ethic to squeeze every last drop of man-hours out of the proletariat like a dirty dishrag. They work us so goddamn hard that we don’t even have enough free time to revolt, let alone masturbate, and that’s not just a happy coincidence.

It wasn’t supposed to be this way. Human beings spent 90% of our history in hunter-gatherer societies and we were a hell of a lot happier as savages. We spent no more than 3 to 5 hours a day hunting, foraging, fishing, and preparing food and spent the rest of our time fucking off, smoking dope, and inventing wheels. The great anthropologist Marshall Sahlins called it the original affluent society and it existed for thousands of years without managers, bathroom breaks, Muzak, and office Christmas parties. In the centuries since most of mankind abandoned that way of life, we’ve developed such smashing advances in modern technology as prisons, genocide, factories, compulsory schooling, nuclear bombs, shrinking icecaps, and islands of garbage the size of continents. This lifestyle choice known as work is an anomaly, an offramp to human devolution like monotheism, Celine Dion, and the gender binary. If the gods wanted us to live this way, we wouldn’t be choking to death on our own exhaust fumes. Climate change is a warning and so was Covid.

Whether it was the fruit of gain of function or just too many condos too close to the bat caves, the Pandemic served as a violent alarm clock to many hardworking Americans. In a matter of months, 25 million people either lost their jobs or had their hours drastically cut. People also lost their homes, their cars, and their healthcare insurance. In the blink of an eye, all the lies of our statist consumer culture had been stripped naked and people didn’t like what they saw. It turns out that our advanced western society offered no real security when the shit got real, and Americans started to seriously rethink this whole work thing.

The Antiwork Movement isn’t exactly new. It’s as old as Proudhon and Marx. Anarchists and socialists like William Morris, Paul Lafargue, Ivan Illich, Bob Black, and David Graeber have been telling any working stiff who’ll listen for generations that we’ve been hoodwinked, but it took the Pandemic to make a modest movement into a monster. Since the lockdowns, a once humble antiwork page on Reddit has rapidly ballooned into one of the site’s fastest-growing subreddits with over 1.4 million members and growing. They call themselves Idlers and they’ve made their voices heard well beyond the echo chambers of social media by organizing Black Friday boycotts and by shutting down Kellogg’s job portal with a deluge of spam in solidarity with striking factory workers.

It’s a big fucking movement and like any big fucking movement there is a diverse array of opinions and not everyone is on the same page. Speaking as a post-left panarchist, I’m a strong believer in creating an endless variety of stateless options provided that they’re all 100% voluntary. I see no reason why primitivists and hunter-gatherers can’t coexist with small autonomous syndicalist factories and family farms as long as we all agree to hang the boss man by his tie. Hierarchy needs to be recognized for the modern social pollution that it is but there are all kinds of funky dance moves that can be used to stomp out that fire before it engulfs us all. I’m inspired by these cyber idlers getting together and sharing information on everything from how to get paid while slacking off to eking out a sustainable living making soap. I believe that this is a movement that plausibly has more potential than Occupy because it calls on its members to take back their own God-given autonomy and totally rethink the way we organize society. But for disabled people like me this has never really been a choice to begin with.

I don’t work, dearest motherfuckers, because I can’t work. I was born with more mental illnesses than I can count, and I’ve only collected more over the years. They have all kinds of fancy intellectual labels (dysthymia, agoraphobia, social anxiety disorder, gender dysphoria, OCD, PTSD, ADD), but they all essentially add up to one thing. Like many other Americans, I am simply pathologically unemployable. I couldn’t make it in the straight world if I wanted to and Kali knows I’ve tried. Just the idea of a 9-to-5 existence, with its fast-paced monotony, erratic hours, swollen crowds of irate customers, and role-crazy teenage despots, gives me a nervous breakdown. It has forced me to do the unthinkable as an anarchist and go on government disability just to make ends meet, and I used to kick my own ass for being this way, but the more I think about it, the more I’ve come to realize that I’m not the one with the fucking problem.

The definition of mental illness and disability is essentially suffering from an inability to conform to the confines of mainstream society. But considering how corrupt and downright evil this pandemic has proven mainstream society to be, I believe people like me aren’t so much crippled as allergic to the social toxins of a malignant civilization, and just like the ranks of the Antiwork Movement, our numbers are growing. I refuse to accept that it’s merely a coincidence that more Americans than ever are emotionally unstable and neurodivergent in times like these. We are the human equivalent of climate change. We have evolved into something incompatible with the evils of perpetual growth.

I’m done beating myself up for being a freak. Just like my decidedly unconventional gender identity, I’m proud of being biologically driven to break with modern society. But I’m also done with welfare. It’s a form of social control just like employment and I’m committed to escaping its cage as soon as I can pick the lock. This is why I oppose Universal Basic Income, an idea very popular with many in the antiwork community. Programs like these do nothing to upend the power imbalance of the workplace. They merely replace the boss man with a bureaucrat and offer you a steady trickle of income as long as you obey the state that doles it out.

They’re essentially paying us not to revolt while they rape the planet blind with their toxic sludge-belching genocidal war machine. I refuse to be complicit. The Antiwork Movement has inspired me to plot my escape. Instead of working or preparing myself mentally to assimilate into the workforce, I’m devoting my time to strengthening my people in my local Queer and disabled communities. I’m preparing to learn how to shoot, make scented candles from scratch, freelance as a dominatrix, and get certified to give blood tests at local shelters, all while I finish writing the great Queer American novel. And every day, I come a little closer to chewing through the leash.

But this thing is bigger than me. It’s bigger than any of us individually. The Antiwork Movement has the awesome capability of being nothing short of revolutionary. If we can get enough people to drop out of mainstream society and go off the economic grid, we can slay this beast called empire. By building a new economy based totally on freelance subsistence labor and voluntary grey market exchanges we can simultaneously starve both the Federal Government of our tax revenue and their friends in the Fortune 500 of our wage-slave labor, killing the incestuous tag-team of big business and big government with one stone without so much as firing a single bullet. We can destroy the old system by merely rendering it obsolete with a new one. And it all starts with a pink slip that reads ‘take this job and shove it.’

Nicky Reid is an agoraphobic anarcho-genderqueer gonzo blogger from Central Pennsylvania and assistant editor for Attack the System. You can find her online at Exile in Happy Valley.

Would Dogs Be Better Off Without Us?



Would dogs be better off without us? This may be a difficult question to consider if you live with a dog, love dogs, and find beauty in the enduring loyalty of the human-dog partnership. If you are reading this book with a dog curled up next to you on the couch or on her fluffy dog bed, happily licking peanut butter out of a Kong, this question might even be too painful to contemplate: How would my dog survive, naked and afraid, set adrift in a frightening new reality, without me to keep her safe? Yet try to imagine for a few moments not only what your dog might lose, but also what she might gain. Better yet, think about the whole range of individual dogs who currently share the planet with humans and consider the potential losses and the potential gains of having the world to themselves. And think about dogs who might come after the transition, who have never known life with humans. Maybe dogs as a species would have a better go of things on a planet that they didn’t have to share with people, if the 20,000-year-long domestication experiment—which, arguably, has had its problems—were called off once and for all.

Dogs would be challenged by living on their own in a posthuman world. But a posthuman world is also full of what you might call “dog possibilities”—the various ways in which dogs would adapt, innovate, and expand their experiential worlds. We’ve seen that there is far more to the lives of dogs than being a house pet, spending the day chasing balls, barking at the postal delivery person, or waiting anxiously for their person to come home from work. A dog’s world is a bustling place, with dogs working on their own and with others to solve the puzzle of survival and to reap the rewards of life. Trying to catalog what dogs might stand to gain and lose if humans disappeared can help bring into focus some of the ways in which humans make life hard for dogs. More pertinent for those of us who live with companion dogs are the potential insights about how we might, without even realizing it, be asking our dogs to live in ways that constrain who they are and who they might become, the many ways in which we compromise the “dogness” of dogs. Having a sense of the whole experiential range of dog possibilities may help us become better companions to our dogs.

To explore whether the dog sitting next to us on the couch is fantasizing about a humanless world, we’ve tried to identify the potential gains and losses for dogs in a world without us. As you might expect, the question, “Would dogs be better off?” does not yield a simple “yes” or “no” answer, and the further you dive into the question the murkier the waters become.

Variables at Play in the Gains and Losses Game

In our book, we’ve constructed a comprehensive list of what dogs stand to gain or lose if humans go extinct. Here are a few thoughts on why judgments about gains and losses are complicated.

What dogs may gain or lose as a species is distinct from what an individual dog stands to gain or lose. The sudden disappearance of humans will result in broadscale losses at the individual level. Many dogs will be ill-equipped to survive, not having had any lived experiences of obtaining their own food, finding shelter, or forming a workable pair bond. Depending on how humans disappear, individual dogs in captivity—for example, inside homes with no way to get out, or locked inside shelters or laboratory cages—will perish. Overcrowding of dogs in some areas may lead to intense competition for scarce food resources. Moreover, large numbers of individual dogs will be unable to reproduce because they have been desexed, and so even if individuals manage to survive, they will be at a genetic dead end. Nevertheless, enough dogs may survive this first wave so that viable populations will be able to take hold in habitable ecosystems. Dogs as a species may very well go on to flourish.

The gains and losses for dogs in a world without humans will be unique to each dog and will depend a great deal on where a dog begins this unprecedented journey into a posthuman future. The unique characteristics of where and how each dog is living when humans disappear will greatly influence what challenges they face and what is experienced as a loss or gain. How well they cope will depend on an individual dog’s personality, past experiences, learning, social and emotional intelligence, and physical attributes.

Dogs currently live in wildly diverse relationships with humans, and while some dogs may keenly miss humans, others will be glad to see us go. A pet dog with a well-informed, motivated, and empathic human caregiver has more to lose than a dog caged at a research laboratory or in a puppy mill. Feral dogs will miss the enormous piles of garbage that humans produce but may not experience any loss of human companionship. Although the challenges for pet, free-ranging, and feral dogs will be different, the loss of humans and the transition from human selection to natural selection will be abrupt, and it won’t be pretty for many of the dogs on the planet.

There will be far fewer posthuman dogs inhabiting the planet. A reduction in total numbers should not necessarily be viewed as a complete loss because arguably there are too many dogs, their population having been bloated by intensive human breeding and careless pet-keeping practices. The size of dog populations, especially in dog-dense areas, will need to be much smaller to be sustainable, with sustainability depending on the carrying capacity—the maximum population size of a species that can be sustained within a given environment—of different habitats in which dogs are trying to survive.

Posthuman dogs may form short- or long-term groups. What might be a gain for a group isn’t necessarily a gain for all individuals within the group, and much will depend on who else is in the group and the ecological conditions with which the group must contend. Groups of animals tend to be most robust when they contain a broad range of behavioral phenotypes. It may be good for a group to have a combination of high-ranking and low-ranking individuals, but life might be difficult for those individuals who are of lower rank.

If humans disappeared, some gains and losses would be felt immediately, such as loss of human food subsidies and the gain of freedom from physical constraint, but the effects of human disappearance will reverberate and shift over generations.

Editor’s Note: For more about the book, see “Science and Speculation Say Dogs Would Do Well Without Us” (Psychology Today, October 21, 2021) by co-author Marc Bekoff. For a deeper dive into many of the topics addressed in the book, please visit Dr. Bekoff’s blog, Animal Emotions.

This excerpt is from A Dog’s World: Imagining the Lives of Dogs in a World Without Humans, by Jessica Pierce and Marc Bekoff (Princeton University Press, 2021). This web adaptation was produced by Earth | Food | Life, a project of the Independent Media Institute.

Jessica Pierce is a faculty affiliate at the Center for Bioethics and Humanities at the University of Colorado Anschutz Medical School. Her books include Run, Spot, Run: The Ethics of Keeping Pets. Find her at jessicapierce.net. Marc Bekoff is professor emeritus of ecology and evolutionary biology at the University of Colorado Boulder. His books include Canine Confidential: Why Dogs Do What They Do. Find him at marcbekoff.com and on Twitter @MarcBekoff.