Monday, February 10, 2020

The Impossible Future of the Futures Market



New Centennial Review: Speculative Finance/Speculative Fiction

Vol. 19, No. 1



Published Spring 2019
MSU Press
301 Pages
 

By John Rieder


NOVEMBER 1, 2019

THE FUTURE HAS invaded the present. Or, rather, it has merged into the present, constituting a new “hyper now” that, far from intensifying our material presence in the world, more than ever before abstracts and alienates the contingencies of human labor, time, and agency, through the systematic imperatives of contemporary capitalism. The force driving this invasion is speculative finance, embodied most tellingly in the financial instruments called derivatives. Derivatives are essentially bets on the future prices of things, and the traffic in derivatives, enabled by digital technology and algorithmically driven trading programs that compress and volatilize time (while reducing place to a mere anachronism in the financial market), has now overleaped material reality to such an extent that, as one commentator estimates, it would take the entire resources of at least four Earth-sized planets to actually pay out the fictional capital currently invested in them. The effect of this hypertrophied financial speculation is to collapse the circuit of capital production that runs from investment of money to production of commodities to realization of profit — schematically represented, in Marx’s rendering, as M-C-M’ — into the seemingly magical birth of profit from money itself (M-M’), where capital seems to increase through mere circulation.

Thus runs the organizing narrative of this hefty special issue of New Centennial Review, co-edited by David Higgins and Hugh C. O’Connell. “Speculative Finance/Speculative Fiction” is made up of 11 substantial critical essays and one piece of “speculative financial fiction” addressing the topic, as announced by the editors, of “the contemporary era’s characteristic shift from production to financialization and the consequences of this shift for the social, political, and aesthetic spheres.” To the economic premise that 21st-century financial speculation marks a decisive change in the overall character of capital accumulation, the editors add a cultural corollary: that the apparent obsolescence of actual commodities in the finance sector’s generation of profits “creates new demands on representation, resulting in a crisis for contemporary narrative.” The special issue’s project is to explore that cultural crisis by “mapping the global systemic effects [speculative finance] creates as well as the ways in which it colonizes the individual at the level of everyday life.”

Following the editors’ introduction, the collection leads off with an essay on “Promissory Futures: Reality and Imagination in Finance and Fiction” by well-established and highly respected scholar Sherryl Vint. In a kind of keynote to the rest of the volume, Vint advances a thesis about the opportunities speculative fiction affords in relation to the fictions promulgated by speculative finance. Vint’s essay more than any other in the volume delves into the particular devices of what might be called the storytelling practices of contemporary financial speculation. These include mark-to-market accounting, which “allows a corporation to count projected or future profit (after purchasing an asset; based on ongoing research) as ‘real’ profit when calculating the present worth of its stock,” and the inclusion of “forward-looking statements,” which are projections of expected income based on the “plans and objectives of management for future operations” in the prospectuses that corporations furnish investors. Such practices, Vint proposes, can be thought of as “fictions, speculations of the kind we find in speculative fiction, extrapolations of possible futures based on elements in the present.” Indeed, she points out, in some industries, such as genomics research, corporate value is based far more heavily on the promise of future information and its possible applications than on the present state of affairs.

But, Vint notes, the extrapolated futures of speculative finance suffer from a certain narrowness of vision. Primary among the limiting assumptions underlying these practices is “the inviolable importance of shareholder value,” such that the duty of maximizing it allows traders to “continue to believe in their narrative” in spite of the widespread human immiseration and environmental degradation that neoliberal restructuring of the global economy has patently exacerbated. Thus the chronic indebtedness of much of the world and the growing inequality of the distribution of wealth are casually accepted as inevitable or brushed aside as outside the realm of corporate responsibility.

In contrast to such speculative finance, speculative fiction offers the possibility of imagining the future in different ways, “making visible what is expelled from this narrow pursuit of profit, working against the normalizing effects of the discourses of speculative finance.” Speculative fiction can envision futures shaped by meaningful social change rather than being limited to the assessment of financial risk and probability within the narrow purview of corporate forecasting. “The speculative finance iconography,” says Vint, “is all setting, images of what a future might look like, and graphs that show possibilities for increased profit. […] In contrast, speculative fiction remains propelled by characters,” so that we are made to think through how “patterns of daily life, from ways of earning a living to kinds of family structures,” might be performed and experienced differently. It is not, she is quick to point out, that all speculative fiction achieves or even attempts a critical vision of the future, but rather that speculative fiction has “a greater potential to be oriented toward the genuinely transformative rather than the predictably profitable.” One possible function of criticism at the present time is to encourage writers to realize that potential.

Each of the other 10 critical essays in “Speculative Finance/Speculative Fiction” focuses on one or a small set of fictional narratives through the lenses of financial and fictional speculation. The major 21st-century speculative fiction under inspection includes the novels Pattern Recognition (2003) by William Gibson, Jerusalem (2016) by Alan Moore, New York 2140 (2017) by Kim Stanley Robinson, and The Dervish House (2010) by Ian McDonald, along with Frank Herbert’s 1965 classic, Dune. Film and TV texts include the Twilight (2008–2012) and The Hunger Games (2012–2015) series, Duncan Jones’s Moon (2009) and Source Code (2011), Andrew Niccol’s In Time (2011), an episode of the anthology series Black Mirror (2011–), and Christopher Nolan’s Interstellar (2014). The basic evaluative options are that a piece of fiction successfully offers critical and potentially transformative insight into late capitalist society or that it falls prey to the logic underpinning speculative finance even as, in many cases, it appears to criticize its practices. Most of the readings discern both of these possibilities in the same texts: in other words, sometimes the fictions are held to successfully resist neoliberalism’s insistence that “there is no alternative” to the current status quo, while at others they are said to act as its unintentionally symptomatic exemplars. All of the essays are well written, intelligently conceived, and coherently argued, and all of them offer striking insights that will help readers better appreciate the particular texts they examine as well as the cultural moment we are living through.

One set of texts left out of the previous summary is addressed in Steve Asselin’s essay on three late-19th-century science-fictional short stories — by Robert Barr, George C. Griffith, and John Mills — about ecological catastrophe. While Asselin notes in these stories the same pattern of critical acumen and ideological limitation that the other readings discern, his essay adds to the collection a welcome reminder that the boom-and-bust cycles of capitalism, the unsustainable fantasy of eternal economic growth, and the environmental irresponsibility that fantasy entails were indeed objects of anxiety and criticism long before the late capitalist developments discussed elsewhere in the volume.

Asselin recounts a moment delightfully, if grotesquely, resonant with contemporary climate change denial in Robert Barr’s “Within an Ace of the End of the World” (1900). A new industrial technology extracts nitrogen from the air to be used as agricultural fertilizer, to the vast enrichment of its inventor and the monopolistic corporation he founds. But it also produces an excessively oxygen-rich atmosphere that causes everyone breathing it to be seized by an irrational, baseless euphoria that renders them incapable of dealing with or even recognizing the ongoing environmental disaster — which culminates in the over-oxygenated air catching fire and killing almost everyone on earth. Asselin concludes:

Just as the narratives I discussed illustrated a period of prosperity upon the exploitation of a new resource followed by a crash reified in the natural world, so too would I suggest that, from a historical perspective, our exploitation of fossil fuels has been one long bubble, and the onrushing threat of climate change is the consequent crash.

The least homogeneous ingredient in Higgins and O’Connell’s thoughtful collection is a piece that the editors describe as a “speculative theory-fiction interlude.” “The Great Dividuation” by Joel E. Mason (with Michael Hornblow and anique yael vered) is difficult to categorize or even to describe. It is a loosely moored narrative punctuated by verse, some of which turns out to have been sent back by the narrator from where he arrives at the end of the story “into the middle of the text / without explanation.” Indeed, a deficit of explanation is one of the piece’s core strategies, as it immerses the reader in a surreal landscape that can only gradually be comprehended as a kind of post-cyberpunk future structured by virtuality, corporate power, contracts, and credit exchanges. The story climaxes with the narrator and his companion stepping through a portal — a device neither foreshadowed, described, nor explained — and undergoing a transformation that strikes me as a kind of ode to accelerationism, the idea that the only way out of the current impasses of our society is to go through them:

We step through the portal, arm in arm, and I drift away in 10,000 different directions, my vision, my sense, my sight all dividualize. all of a sudden, I distend. I am broken, loosed, freed, bound to movement, all at the same time. i am north, south, east, west. i am bird wing and bottom feeder. bacon shoulder. i am the hedge, the short, the spread, the bet — all in not one, not all in a many-fingered one. we all capsize and spread. [sic]

The fantasy, I take it, is that of attaining a subject position structured by the unmappable space and temporal dissolution of global finance but free of its economic grasp: “[T]he portal was power and we built it so no one and everyone owned it.” I do not think this is necessarily what Vint has in mind when she praises speculative fiction’s ability to help us imagine different “patterns of daily life,” but providing what’s asked for in quite unexpected ways has always been thought a sign of artistic success, hasn’t it?

Nonetheless, the distended, “dividuated” subject imagined in the theory/fiction interlude can be contrasted with a more common-sense insistence on the material embodiment of value in several of the critical essays. The missing component in speculative finance’s transformation of the circuit of capital from M-C-M’ to M-M’ — that is, the addition of value that takes place in the production of commodities — is crucially dependent on the contribution of labor power, a quantity measurable in terms of the socially necessary time spent in a task. This is the true basis of value as such, according to Marx and the political economists whose work he built upon.

Joe Conway explores the film In Time precisely because it imagines a future in which that equation of labor power with value has been literalized by turning time itself into the society’s currency — a currency immediately embodied by determining the lifespan of the individual holding it. Similarly, David P. Pierson analyzes the role of the clones and avatars in the films Moon and Source Code as ways to extend into speculative infinity the expropriation of the workers’ labor power. Hugh C. O’Connell’s reading of McDonald’s Dervish House emphasizes the way the corporate logic depicted there attempts to turn humans into machines while making machines perform the tasks of humans, thus eliding the difference between what Marx called constant and variable capital and evacuating labor power as such from the circuit of capital reproduction. And, in the final essay in the volume, Marcia Klotz analyzes the recent popularity of time-loop narratives, where an intervention from the future solves a problem (in the case of Interstellar, a scientific problem) in the present. According to Klotz,

What gets elided in the process […] is the moment of knowledge production, the actual labor of scientific discovery […] The elision of that moment of production captures what is at stake in the increasing reliance on futures trading — the broad shift in wealth production from M-C-M’ to finance’s foreshortened formula of M-M’.

She concludes that “the contemporary economic emphasis on finance is unsustainable because its growth no longer fosters but ultimately serves to undermine the value-producing economic activity on which it is based.”

As fictional capital accumulates its vast but unrealizable profits, the material world spirals into irredeemable debt — yet another version of the self-destructive dilemma that has produced global warming. “Speculative Finance/Speculative Fiction” does not issue any explicit call to political contestation of the contemporary political-economic order, but it clearly outlines why the survival of our civilization may depend upon it.

¤

John Rieder is Professor Emeritus of English at the University of Hawaiʻi at Mānoa.







LA REVIEW OF BOOKS WILLIAM GIBSON INTERVIEW

                                                         


WILLIAM GIBSON NOTICES THINGS others miss. While his science fiction novels are often described as prescient, what defines Gibson’s body of work is the extraordinary refinement of his focus on the present.

When everyone is talking about the features of the latest Silicon Valley gadget, he might peer at the physical thing itself — from what materials is it constructed? How does it feel cupped in the palm of a hand? What do the awkward lines of its design call to mind? Who might have access to the data it collects? When Wired asked him to write an essay about Tokyo in 2001, he spent a sleepless night wandering the interzone of Roppongi, noting how a particular sex worker looked like she might have stepped straight out of a neon-saturated, hustler-filled, pre-Bubble version of the city. By exploring the ragged edges of things, Gibson consistently manages to shed new light on the strange world we inhabit, coining terms like “cyberspace” and making oft-repeated observations like, “The future is already here — it’s just not evenly distributed.”

In his new novel, Agency, the future is very unevenly distributed. The book weaves together three interacting story lines: the first is set in an alternative 2017 San Francisco where — among other things — Trump didn’t become president. The second story line takes place in an apocalyptic American South that may or may not be in the process of renewal; the third occurs in a high-tech, post-apocalyptic 22nd-century London. All three story lines evolve along their own independent timelines as they indirectly influence each other. The characters, plot, and world are kinetic — spinning off ideas as they hurtle into something new. The story grapples with literally revisionist histories, the branching, unpredictable nature of all the possible futures that splay out from the fulcrum of our present, and just how difficult it is to achieve “agency” in a culture spiraling out of control. Agency reflects how aggressively weird life has become as we embark on the century’s third decade. Reading it feels like gazing into our collective Instagram feed, sans filter.

In the following conversation, we discuss how Gibson tracks reality’s “fuckedness quotient,” how to avoid terminal shortsightedness, and the creative process behind Agency.

¤

ELIOT PEPER: What is Agency’s origin story? How did it evolve from the first glimmer of an idea to the book I’m holding in my hands right now? How did events change the shape and direction of the novel?

WILLIAM GIBSON: The original working title was Tulpagotchi, from tulpa, an occultly projected humanoid thought-form, and Tamagotchi, the handheld digital pet circa 1996. My publisher didn’t seem fond of it at all! The story seemed to me to be a romp of sorts, a sort of upbeat digital Thelma & Louise.

As ever, I wanted to avoid writing any sort of second part of whatever most recent book was in danger of becoming a trilogy, something I’m yet to succeed in escaping. Verity, Eunice, and Joe-Eddy’s apartment presented themselves, more or less as they are at the start of Agency, and I imagined some snarky parodic transit through the sleazier depths of Silicon Valley. A number of characters were developed for that, never to be used. Then Donald Trump descended that escalator, to announce his candidacy, and I experienced a disturbance in the world’s fuckedness quotient (as Milgrim thinks of it in Spook Country). The FQ went up yet again with the Brexit Referendum vote, then entirely off the chart with the outcome of the presidential election. My proposed romp looked merely silly, and the zeitgeist I’d started it in gone, the new one profoundly unfamiliar.

Eventually, I began to feel as though we were in something like a stub in The Peripheral, a disrupted timeline, and finally I saw Verity and Eunice as inhabitants of a different stub, the creation of tinkering from the 22nd-century London of The Peripheral.

Which you’ll know took a while, if you advance-ordered Agency when it was first announced.

“Agency” is what many of the characters, human and AI alike, are seeking over the course of the story: agency in a strange, confusing, rapidly changing world. What did you learn about what it means to find agency in our own lives and world from inventing theirs?

I think I’ve learned that we need, individually, to find those areas in our lives where we do possess agency, and attempt to use it appropriately. And it seems to me that’s evidenced most attractively in maintaining an operative sense of humor.

You’ve said that while science fiction might appear to be about the future, it is actually about the present. What has tracking the “fuckedness quotient” of the present taught you about the future? How has it changed your understanding of the past, and the historical counterfactuals that might have been — or might be our stubs?

I’ve long assumed that historical fiction is fundamentally speculative. We revise factual history as we learn more about the past, and we alter our sense of how the past was in accordance. Our sense of what the Victorians were about bears little resemblance to our parents’ sense of that. If the Victorians were able to see what we think of them now, they’d consider us mad. Given that, the creation of an imagined past is like the creation of an imagined future, but even more demanding. The most demanding form of science fiction, it seems to me, is alternate history, of which I’d offer Kingsley Amis’s The Alteration as a singularly successful example.

What have you noticed in the interstitial spaces of our aggressively weird world since putting the final touches on Agency? Now that the book is working its way out of your system, what is catching your attention?

It hasn’t really worked its way out, alas. It’s dormant still. I’ll start to have a different sense of it in January, when I tour it. I find that reading it aloud to live audiences has a very profound effect on me. It reveals itself to me for the first time. In the meantime, I’ve had the very uneasy experience of watching an actual crisis affect Qamishli, the Syrian city whose name becomes familiar to readers of Agency, and the bittersweet minor luxury of not having to dream up Lowbeer’s eventual explanation of why the United Kingdom’s failure to actually Brexit still resulted in the terrible history she’s lived through.

“Terminal shortsightedness” seems to be at the heart of the multifaceted, slow-motion catastrophe of the jackpot, and of the authoritarian klept. How can taking the long view be more effectively encouraged? Might doing so on an institutional scale require some form of “Adjustor,” whether clandestine like Lowbeer or transparent like Eunice? What’s an example of a difficult, important choice you struggled with in your own life where you ultimately sacrificed the short-term in favor of the long-term?

I have a nagging suspicion that evolution (a wholly random process, though too few of us understand that) has left most of us unable to grasp the idea of an actual apocalypse being possibly of several centuries’ duration. The jackpot began one or two hundred years ago, it seems to me. I myself can dimly recall a world before utterly ubiquitous injection-molded plastics. Toys were of metal, wood, rubber. Styrene was as exotic as Gore-tex, briefly. I’m yet to discover any record of a culture whose imagined apocalypse was a matter of centuries. I doubt anyone has ever stood out on a street corner wearing a sandwich board reading, “THE WORLD IS COMING TO AN END IN A FEW HUNDRED YEARS.” Even before we became as aware as some of us now are of climate change, and of the fact that our species has inadvertently caused it, we seemed to be losing our sense of a capital-F Future. Few phrases were as common throughout the 20th century as “the 21st century,” yet how often do we see “the 22nd century”? Effectively, never.

What did writing Agency teach you about the craft of writing fiction? How are you different for having written it?

Not to give up. To keep writing. I have to relearn that each time, but this time was really something else.

¤

Eliot Peper is the author of Breach, Borderless, Bandwidth, Cumulus, True Blue, Neon Fever Dream, and the Uncommon Series.


Who Runs the Technology? That’s the Problem




For the Provocations series, in conjunction with UCI’s “The Future of the Future: The Ethics and Implications of AI” conference.

In the endless Spy vs. Spy battle between the techno-optimists and the techno-pessimists, the pessimists have lately taken the upper hand.

In part, that’s because we’ve become immune to the optimists’ charms, taking for granted global connectivity and immediate access to the world’s books, music, and video. Meanwhile, the problems are all around us to see, and many argue that they are inherent to the technology itself. Just look at the essays that surround this one on the Provocations blog for this conference: bots are born liars; algorithms and AI stifle creativity and tolerance by reflexively (and stupidly) giving us more of what they think we like; AI technologies have an innate will to power and domination, just like the living organisms in whose image they were made.

But to understand our current predicament, and begin to solve it, we need to look beyond qualities supposedly inherent in digital platforms and technologies and question the people who are deploying those platforms and technologies. Do they seek power? If so, how do they use that power when they get it? What kind of society do they say they want to see? How do they define justice, equality, the good life?

Conversations centered on the technology itself have offered a free pass to Silicon Valley — a decade or more without serious scrutiny from the public or government, as we evaluated problems with how people used social networks rather than problems with the people — and corporations — who designed and deployed those platforms. After all, we’d hear, isn’t disruption inevitable whenever a new technology arrives? There’s damage, things get broken, but not with intent — technology sees inefficiencies and removes them indiscriminately. Arrogant tech leaders were happy to pretend they had no agency.

Lately, however, the spotlight has shone bright on these leaders, and everyone is seeing the battle over how we use technology as a political one, not a technological one. In a recent conference call announcing fourth-quarter profits of $7.35 billion on $21 billion in revenues, Mark Zuckerberg explained his new thinking. “Because we wanted to be liked, we didn’t always communicate our views as clearly, because we were worried about offending people,” he said. “This led to some positive but shallow sentiments towards us and the company. And my goal for this next decade isn’t to be liked, but to be understood.”

This is a classic example of what’s called a “pivot” in the world of Silicon Valley venture capital — trying to be liked had been a worthwhile goal while amassing power, but now that the power has been consolidated, the better play is to make your point clearly and be understood. Under this new approach, Facebook’s bizarre decision to allow politicians to lie in ads on the site makes perfect sense.

If you read commentary about this decision, you’ll see two different and contradictory takes. Some note how much money the Trump campaign alone has spent on Facebook ads: $30 million since May 2018. Others take the same data point and draw the opposite conclusion: would Facebook really do something it thought was wrong to retain such an insignificant revenue stream?

Zuckerberg alluded to those relatively minor financial stakes in his speech at Georgetown University in October, in which he defended the company’s position on false political ads. “In a democracy,” he said by way of explanation, “I believe people should decide what is credible, not tech companies.” He added that he had considered eliminating political ads entirely as a way to stem the criticism.

“From a business perspective,” Zuckerberg said, “the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice — especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise.”

Let’s be real. Facebook gains incalculable influence from being the tool that helps a president get elected — whether honestly or through deceit. In order to get his message out on the platform, a transactional president like Trump needs to ensure that Facebook is thriving and happy — and that has a value to Facebook well beyond whatever checks his campaign is cutting. The transaction is this: Facebook permits a candidate’s lies, and that candidate, become president, protects Facebook. In other words, this policy is directly relevant to Facebook’s business strategy; it just happens to endanger democracy as well.

Not a very likable position, but one that is understood by everyone involved — understood by Trump as well as by his potential opponents, most notably Elizabeth Warren, who challenged the rule by placing a provocatively false ad on the site, one claiming that Zuckerberg had endorsed Trump. Warren has promised, should she win, to break up and rein in these companies — not because the technology is dangerous, but because the people who are in charge of it are.

This will not be an easy fight to win. The tech companies are well funded and no longer need to play nice or fair. They have the resources to buy up potential rivals, or to copy their best ideas and use their own monopoly strength to make those rivals succumb. Arguably, the biggest problem with unfettered social networks isn’t the services themselves but the income inequality they have produced in the field and beyond — destroying smaller organizations and concentrating unprecedented wealth among a few winners.

Fortunately, this is a problem tied not to the technologies themselves but to the people who have controlled them and the officials who have chosen to look away. We can change course as soon as this fall.

Noam Cohen is the author of The Know-It-Alls: The Rise of Silicon Valley as a Political Powerhouse and Social Wrecking Ball. He is an Ideas columnist for Wired.





Anniversary of the Birth of Jules Verne 

By Howard A. Rodman, 02/08/2020

Jules Verne, who can be said to have imagined, if not invented, the 20th century, was born on February 8, 1828 — 192 years ago today. He died at age 77 in Amiens, France, of complications from diabetes.

Or we might say: died for the first time. Because his work lives on, and has not yet succumbed.

Captain Nemo, perhaps Verne’s greatest and most enduring figure, first died on June 2, 1868, at the end of Twenty Thousand Leagues Under the Sea, when his submarine, the Nautilus, was dragged down by a catastrophic maelstrom and sank — perhaps forever — beneath the waves.

Yet Verne’s Nemo has had more than one demise. In a later novel, The Mysterious Island, Nemo is among us once more, and dies once more. On his deathbed we learn that after the sinking of the Nautilus both submarine and captain were somehow resurrected, making port on Lincoln Island. Nemo’s second death occurs on October 15, 1868.


We learn little of Nemo’s murderous motivations in Twenty Thousand Leagues; they’re rendered a bit more legible in Mysterious Island. But what we know depends on where we live.

In England and America — American editions of Verne followed the initial UK translations — Nemo’s pre-Nautilus life was cleansed, its details bowdlerized, its politics gutted. In the Anglophone version, the one you are likely to have read as a child, we learn that Nemo was born Prince Dakkar of Bundelkund, India — a land that the British had benevolently “brought… out of a state of anarchy and constant warfare and misery”; that a few ambitious and unscrupulous princes had in 1857 fomented a revolt, one in which our Dakkar was somehow swept up; that, when the revolt ended, Dakkar disappeared, to live out the rest of his days undersea.

What readers of French, or of the more recent unabridged translations, know is that Nemo was less a victim of that revolt than an enthusiastic participant in it, as successful a general as he had been a prince. And that the British, in savage retaliation, killed his wife and children. Hence: Nemo’s rebellion was far more conscious and purposeful, his flight far more motivated. The ships that, in Twenty Thousand Leagues, we learn Nemo attacked and sank are now contextualized: these are acts of unimaginable grief, legible revenge.

152 years after Nemo’s deaths, 115 years after the death of his creator, the story of Nemo still speaks to us. His unfathomable misery drove him to unfathomed depths. And his destructions of the Governor-Higginson of the Calcutta and Burnach Steam Navigation Company, the Cristobal-Colon of the West India and Pacific Steam Navigation Company, the Shannon of the Royal Mail, now seem less like random acts than strands in a skein of retaliation, transporting the merchant ships of empire to a depth equivalent to that of his own despair.

Even the name embodies these contradictions: Nemo is Latin for “no one,” and also Greek (νέμω) for “I give what is due.” Alienation or vengeance, depending on which classical language should be given precedence. And it’s precisely the ambiguity or, perhaps, complexity of Nemo’s agenda that causes Verne’s figure to echo, from the 19th century to our own, down the corridors of time.

In the current era Nemo appears as caricature in the Disney mash-up crossover series Once Upon a Time, his universe merged with that of Captain Hook. He’s an essential component of Alan Moore’s masterful League of Extraordinary Gentlemen, his elegance, science, violence rendered fully intact in a way one suspects that Verne would have applauded. And of course: he’s lent his name to the world’s most famous animated clownfish. These are the three contemporary faces of Nemo. Reduced to icon. Deeply understood. And as the one thing that Nemo never was: lovable.

The original Nemo was far more Old Testament than that. The murderous acts of Verne’s captain are simultaneously well-lit and obscure, political and personal: precisely calculated yet wildly arbitrary. Is this not the essence of our own age, in which the hideous protagonists who leap out at us from our news feeds are at one and the same time overdetermined and incomprehensible?

We separate parents from children at our border as a deliberate strategy of deterrence, but also, one senses, to savor the cruelty. Our most brutal leaders see themselves as victims. And of course: the barbarous efforts of the Honourable East India Company to keep those of darker skin snapped to the colonial grid — as chronicled by Verne, at least in the original French — find sickening parallel in today’s Christchurch, in today’s Yemen, in other outposts of our allegedly postcolonial era. In all of these ways, the darker side of Twenty Thousand Leagues never died at all.

There is a difference, though, and it’s a crucial one: Captain Nemo was a man of great intellect and, at times, great heart. He knows who he is. One can empathize (even as one acknowledges his crimes). It is hard if not impossible to muster a parallel empathy for today’s captains of empire, who lack intellect, heart, or indeed any humanizing shred of self-awareness. In history’s long arc, these men will be given what is due. Even as Nemo dies again, lives again, as called upon by each new era — even as his creator, Jules Verne, age 192, dreams on in his eternal slumber.

Howard A. Rodman, past president of the Writers Guild of America West, wrote The Great Eastern (Melville House Books), in which Nemo’s second death is assumed to be as provisional as his first.


http://blog.lareviewofbooks.org/essays/192nd-anniversary-birth-jules-verne/


Quantum Conversations, Entanglement, and the American Cold War “Physics Bubble”

By Michael D. Gordin

FEBRUARY 7, 2020

TRAINING TO BECOME a physicist is really hard work. I know because I’m not one. A long, long time ago, in a galaxy far, far away, I thought I might become a physicist, but early in college I became captivated by the history of science instead and have never looked back. Well, almost never. My advisor in graduate school, out of what I am sure felt to him like benevolence but to me more like sadism, insisted that I keep enrolling in advanced courses in physics. As he put it in 1998: “In two years the physics of today will be last century’s physics, and you will want to be its historian.”

One of the physics courses I took was an advanced laboratory. It contained two kinds of students: undergraduate whizzes in experimental physics, who were rendering helium superfluid and measuring sound waves, and those in the other half of the room, all of them standing agog. Required to be there in order to prove their bona fides as “real physicists,” this second half (excluding me, the historian) was composed of graduate students in theoretical physics who were only marginally more competent at manipulating voltmeters than I was.

My lab partner was one of these theorists, earning a joint PhD in physics and the history of science. Our lab reports contained the best “historical overviews” any of the instructors had ever seen, if not the best results. Perhaps the most comic of the three experiments we performed was an attempt to verify the irreducible weirdness of quantum mechanics by testing a result called “Bell’s inequality.” In brief, this involves measuring two quantities that are related by a Heisenberg uncertainty relation in order to show that you can’t get around the quantum. After enduring ritual humiliation from the 18-year-olds levitating positrons (or something like that), we managed to get a serviceable result. I made sure this was the last physics course I ever took, while the theorist embarked on a stellar and perhaps unique career as both a practicing theoretical physicist and a historian of science. His name was David Kaiser.

Kaiser is today ensconced at the Massachusetts Institute of Technology, where he is the Germeshausen Professor of the History of Science in the Program in Science, Technology, and Society; professor of Physics in the Department of Physics; and associate dean of Social and Ethical Responsibilities of Computing. Alongside all that, which surely keeps him busy, he publishes books, including two award-winning ones — Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics (University of Chicago Press, 2005) and How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival (Norton, 2011). And, as if this weren’t more than enough, he also produces a stream of physics papers, mostly in inflationary cosmology and quantum theory, and a host of popular essays and mainstream book reviews that bring both science and its history to a wider audience.

Hence Quantum Legacies: Dispatches from an Uncertain World. The book consists of an introduction and 19 chapters, ranging in length from short to shorter, all of them (except the introduction and chapter eight, on quantum mechanics textbooks) adapted or merged from essays he has published in venues like The New Yorker, The London Review of Books, and academic journals. The essays flow together locally with their neighbors, but sometimes, over longer stretches, one is surprised to find oneself migrating from Schrödinger’s cat to President Eisenhower’s Science Advisory Committee. It is best read piecemeal over an extended period, rather than in three sittings as I did.

The result is a hybrid of genres. If forced at gunpoint to classify it, you could do worse than label it “popular science,” but the science comes along with carefully researched historical context — complete with scores of footnotes to unpublished archival documents — that proves indispensable to the narrative. For the same reason, the book isn’t “popular history of science” either. At several points, Quantum Legacies reads like a memoir: we witness Kaiser’s fascinating experiments in Vienna and the Canary Islands, see his personal reactions to the launch of the Large Hadron Collider at CERN or the death of Stephen Hawking, and learn a little bit about his twin children and (more briefly) his mother.

Nonetheless, the book has a coherent story to tell — or, rather, two coherent stories that interact with each other across the four sections of the book, somewhat-but-not-entirely-helpfully named Quanta, Calculating, Matter, and Cosmos. For the sake of convenience, I will call the first story the “quantum narrative,” which begins in terror and metamorphoses into sublimity, and the other the “scale narrative,” which starts out optimistically but ends in tears.

¤

Beginning with a joke, the book turns dark fast. On the seventh line of the first page, we come across the familiar name of Albert Einstein, whose presence in books like this is something of a statutory requirement. He and his close friend, the Leiden physicist Paul Ehrenfest, are passing snarky notes at a scientific conference in Brussels in 1927. Three pages and six years later, the friends are separated by an ocean, hurled apart by Hitler’s takeover of Germany in early 1933, and there is no room for laughter. In September, in a physician’s waiting room with his son Wassik (who had Down syndrome), Ehrenfest took out a pistol, shot the boy, and then killed himself.

You might expect a rather somber book to unspool from here, but that is not the case (though a series of suicides does pepper the first 50 pages, disturbingly echoing Ehrenfest’s pistol). Instead, the Ehrenfest-Einstein peanut-gallery notes launch the quantum narrative, starting with the full-fledged quantum mechanics of Werner Heisenberg and Erwin Schrödinger in 1925–1926, and landing with the “cosmic Bell” experiments that Kaiser ran with the Austrian experimentalist Anton Zeilinger to generate some of the most precise and inventive tests ever conducted of that same Bell’s inequality.

At stake is the meaning of quantum mechanics. With very small scales of matter, strange things happen to the laws of nature that work in our everyday, “classical” world. Particles traverse space and time instantaneously, without seeming to travel the distance in between. We find ourselves unable to measure both the position and momentum of an electron to arbitrary accuracy. And, instead of the causal laws we know from the physics of Isaac Newton, the best we can do is to calculate probabilities: 30 percent chance the electron will be in SoHo, 70 percent it’s hanging out in Central Park — that kind of thing. (Kaiser is especially clear in outlining these issues, and I would recommend you turn to the book if what I just wrote has intrigued you.)

Einstein despised all of this. Even though he had been one of the earliest architects of quantum theory with his explanation for the photoelectric effect in 1905, he never accepted the probabilistic interpretation of nature as the final word. The upshot of the Kaiser-Zeilinger experiments, indeed of the entire quantum narrative, is that Einstein was wrong. The world is fuzzy when you look closely. Grappling with the scientific and philosophical implications of its fuzziness occupies roughly a third of Quantum Legacies, smeared like an electron’s orbital across the chapters.
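The Bell test at the heart of both that undergraduate lab and the cosmic Bell experiments can be made concrete with a few lines of arithmetic. In the standard CHSH form of Bell’s inequality, any locally causal “classical” theory must satisfy |S| ≤ 2, while quantum mechanics predicts |S| = 2√2 for an entangled singlet pair measured at suitably chosen angles. (The correlation formula E(a, b) = -cos(a - b) and the angle settings below are standard textbook physics, not drawn from Kaiser’s book; this sketch is purely illustrative.)

```python
import math

def E(a, b):
    # Quantum correlation between spin measurements on a singlet pair,
    # with the two analyzers set at angles a and b: E(a, b) = -cos(a - b).
    return -math.cos(a - b)

# Standard CHSH angle settings (radians).
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

# CHSH combination: locally causal "classical" theories obey |S| <= 2.
S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(S)  # 2 * sqrt(2), about 2.83: the quantum prediction exceeds the bound
```

The point is the gap between 2 and roughly 2.83: no assignment of pre-existing values to the particles can reproduce the quantum correlations, which is precisely what Bell-test experiments check.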

A few themes in particular resonate through this tour of quantum mechanics. The first is that people matter. That might sound obvious — you can’t have any physics if you don’t have physicists — but the point is subtler. The specific personalities matter, and they matter to specific cohorts, making them different from all other cohorts. Even when Kaiser tours the most rarefied and abstract corridors of theoretical physics, he never finds a solitary theorist mooning over a blackboard. All physics happens in conversation, whether it is Einstein and Ehrenfest joking at a physics meeting, or Schrödinger working out his famous thought experiment about a cat — killed (or not) probabilistically with a dose of poison gas — in correspondence with Einstein and other quantum dissidents. As Kaiser points out, it is no accident that, as Europe descended into fascist chaos and warmongering, “Schrödinger’s thoughts turned to poison, death, and destruction.” Even the main character of the first chapter, Paul Dirac, notorious as the weirdest weirdo to ever do theoretical physics, is presented as enmeshed in the communities that found him so inscrutable. Kaiser, in his memoir moments, always credits the graduate students and collaborators who helped make the science possible. The story of quantum entanglement is about scientists being entangled too.

A second more geopolitical theme emerges as well. The book begins in Europe in the 1920s, and it ends there as well: with the discovery of the Higgs boson at CERN in 2012 and the cosmic Bell experiment in Vienna in 2016. The intervening century mostly takes place in the United States, and it is a somewhat dispiriting story. The wonderful philosophical quandaries of quantum mechanics are scrapped for a “shut up and calculate” pragmatism, and it takes a catastrophic crash of the physics job market (and a few hippies) to bring those questions back. In the book, Cold War America is not the place for pure inquiry. Enter the “scale narrative.”

¤

Historians of science use a standard expression to characterize postwar physics, especially in the United States: “big science.” That sounds like a tragic failure of poetic imagination, but it is simply plain speaking. Science was big. It was big because it was centered on experimental physics: table-top experiments were mutating into gigantic bubble chambers, beakers were being replaced by nuclear reactors, the intimate laboratory was being transformed into a factory floor where assembly lines of postdocs extracted data about the very minute. Kaiser guides the reader through the main stops of this familiar story. His more central point, however, is to stress what this change of scale does to American theoretical physics.

The meat of this narrative occupies the second part, “Calculating,” which begins with an illuminating chapter entitled “From Blackboards to Bombs.” Kaiser’s quarry here is the oft-repeated chestnut that World War II was “the physicist’s war,” in contrast to the canonical “chemist’s war,” World War I, forever stained by gas warfare. With the advent of radar and the nuclear destruction of two Japanese cities at the second war’s end, the appellation obviously fits. The puzzle is that the term “physicist’s war” was tacked onto the conflict by Harvard President (and veteran of chemical weapons research in the Great War) James Bryant Conant before the Americans even commenced hostilities in December 1941, and long before either radar or the atomic bomb (classified topics, not suited for nicknames) were realities. What were these people referring to?

They meant classrooms and classical physics. In an age of submarine warfare, aerial bombing, and mortar fire, officers needed to know how to calculate trajectories and repair electronics. They needed the kind of physics that is now the staple of high school education, but it was a scarce resource in those days. Physics instructors were spared the draft so they could drill recruits in Newton’s laws, not so they could build nukes. (That came later.) The term’s currency spiked in 1943, before Hiroshima, and declined precipitously after 1945. It sowed the seeds for what followed.

Decision makers in the United States were convinced that they needed lots of physicists, oodles more than before. Some of the reasoning for this made sense: electronics and nuclear weapons meant this esoteric profession was in greater demand. Some of the reasoning, however, was fabricated, based on a spurious reading of scientific “manpower” in the Soviet Union. (Kaiser’s deconstruction of these statistics is engrossing.) American policy became to overproduce physicists. You never know when you are going to need an egghead to build a gizmo, so make lots of ’em (both eggheads and gizmos). The ensuing physics “bubble” — Kaiser consciously develops the analogy with economic bubbles like the Dutch tulip mania and subprime mortgages — burst in the early 1970s, producing a recession in the physics job market. All academic fields suffered, but physics, which had risen fastest, crashed hardest.

The central theme of this section is understanding the making of scientists as an act of training. We train schoolchildren to spell correctly; we also train vines to grow the way we want them to, and lop them off if they go awry. We train physicists in both senses. Training a physicist drawn from the pool of a dozen or so smart men (women barely figure in Kaiser’s text) discussing philosophical conundrums about causality over cigarettes and coffee into the wee hours is something quite different from UC Berkeley commandeering the largest lecture halls to deal with the postwar onrush of aspiring scientists. Somewhere in the middle, in Kaiser’s telling, American physics instruction transitioned from the schoolchildren to the vines.

The key to these central chapters is a graph of the annual production of American physics PhDs from 1900 to 2005 (a segment of it is reproduced a few pages away, in case you missed it). It’s a striking curve: a modest, steady rise into the low hundreds until World War II, then a dip as physicists are drafted into the conflict, then a quadrupling by the late 1950s, and more than a doubling of that by 1970. “In fact, more physicists were trained during the quarter century after the Second World War than had ever been trained, cumulatively, in human history,” writes Kaiser. To train them, the science itself changed. Quantum mechanics shed the philosophical puzzles so entrancing to Einstein, Zeilinger, and Kaiser and became a matter of manipulating formulas to get results. The system eventually collapsed under the strain of its scale.

The bursting of the physics bubble proved no less transformative than the boom had been. Quantum philosophy came back, midwifed by the groovy stylings of Fritjof Capra and his surprise best seller The Tao of Physics, a fusion of Copenhagen and Lhasa that was at first mocked by the mainstream and then embraced when it lured flower children into physics courses. The bust also changed the ancillary fields of science. No longer able to find a job at particle accelerators, theorists indulged previously marginal interests like cosmology. The synergies that happened in this rapprochement of subdisciplines still animate today’s physics world.

Kaiser is a product of this post-bubble era, trained during the collapse of the bubble’s Reagan-era reprise. Unusually for a popular science book, we are introduced not just to one area of contemporary physics — say, the irreducible quantum entanglement revealed by Bell’s inequality — but also to neutrino physics, high-energy theory, cosmology, the Higgs boson, gravitational waves, and more. Kaiser has worked in all of these areas, which is possible because he is a theorist at a particular nexus of the post-bubble landscape. The scale narrative ends with ruined careers, but a new physics rose from the ashes.

¤

To the extent there is a hero to Quantum Legacies, it is not Kaiser, nor Einstein, nor the Italian-physicist-turned-Soviet-spy Bruno Pontecorvo, nor the curve of the rise and fall of physics PhDs. It is … the textbook. That genre, maligned by students and teachers everywhere, repeatedly takes its star turn, not only as a source of historical and scientific information but as an actor in its own right. The foreword by Kaiser’s MIT colleague Alan Lightman foreshadows this stardom in what at first seems a strange aside: “My college textbook on heat, titled Thermal Physics, is full of equations describing the modern understanding of heat as the random motion of atoms and molecules.” (Oddly, given that we are of very different generations, I used the same textbook in another dive into the salt mines of graduate physics education encouraged by the well-meaning advisor.)

It is not the last textbook we will encounter. There are several chapters devoted to them: to Capra’s Tao of Physics, to Richard Feynman’s and others’ on quantum mechanics, to Charles Misner, Kip Thorne, and John Wheeler’s triumph Gravitation, and even to creationist textbooks that attempt to argue away the time-scales needed for Big Bang cosmology. Kaiser tells us up front that he is

particularly fascinated by textbooks as legacy-making engines: objects crafted expressly to try to smuggle forward, into the future, bundles of hard-won skills and insights. Chasing down these legacies has offered me an opportunity to reflect on my own training, as I wonder about what sorts of legacies my colleagues and I might pass along to our students.

By the end of the book, we see the point. Textbooks build communities within a generation of students who study from them, but they are also generated by the community of teachers from whom their authors are drawn. Those chapters that explore the relationships between teachers and students — including the sections about Kaiser’s relationships with his own teachers and students — leave us with a richer picture of physics as a lived activity than either the number-crunching about citations and papers or the nonmathematical explanations of astonishing physical phenomena. Kaiser was well trained.

¤

Michael D. Gordin is a professor in Princeton’s department of history. His latest book, Einstein in Bohemia, is just out. 


https://lareviewofbooks.org/article/quantum-conversations-entanglement-and-the-american-cold-war-physics-bubble/
Pluto’s Icy Heart Beats Daily, Pumping Nitrogen Winds Around the Dwarf Planet

WENDIGO PLANET, IT HAS A FROZEN HEART

By AMERICAN GEOPHYSICAL UNION FEBRUARY 9, 2020


This high-resolution image captured by NASA’s New Horizons spacecraft shows the bright expanse of the western lobe of Pluto’s “heart,” or Sputnik Planitia, which is rich in nitrogen, carbon monoxide, and methane ices. Credit: NASA
A “beating heart” of frozen nitrogen controls Pluto’s winds and may give rise to features on its surface, according to a new study.

Pluto’s heart-shaped structure, named Tombaugh Regio, quickly became famous after NASA’s New Horizons mission captured images of the dwarf planet in 2015 and revealed it isn’t the barren world scientists thought it was.

Now, new research shows Pluto’s renowned nitrogen heart rules its atmospheric circulation. Uncovering how Pluto’s atmosphere behaves provides scientists with another place to compare to our own planet. Such findings can pinpoint both similar and distinctive features between Earth and a dwarf planet billions of miles away.

Nitrogen gas — an element also found in air on Earth — comprises most of Pluto’s thin atmosphere, along with small amounts of carbon monoxide and the greenhouse gas methane. Frozen nitrogen also covers part of Pluto’s surface in the shape of a heart. During the day, a thin layer of this nitrogen ice warms and turns into vapor. At night, the vapor condenses and once again forms ice. Each sequence is like a heartbeat, pumping nitrogen winds around the dwarf planet.

New research in AGU’s Journal of Geophysical Research: Planets suggests this cycle pushes Pluto’s atmosphere to circulate in the opposite direction of its spin — a unique phenomenon called retro-rotation. As air whips close to the surface, it transports heat, grains of ice, and haze particles to create dark wind streaks and plains across the north and northwestern regions.

“This highlights the fact that Pluto’s atmosphere and winds — even if the density of the atmosphere is very low — can impact the surface,” said Tanguy Bertrand, an astrophysicist and planetary scientist at NASA’s Ames Research Center in California and the study’s lead author.

Most of Pluto’s nitrogen ice is confined to Tombaugh Regio. Its left “lobe” is a 1,000-kilometer (620-mile) ice sheet located in a 3-kilometer (1.9-mile) deep basin named Sputnik Planitia — an area that holds most of the dwarf planet’s nitrogen ice because of its low elevation. The heart’s right “lobe” is composed of highlands and nitrogen-rich glaciers that extend into the basin.

“Before New Horizons, everyone thought Pluto was going to be a netball — completely flat, almost no diversity,” Bertrand said. “But it’s completely different. It has a lot of different landscapes and we are trying to understand what’s going on there.”
Western winds

Bertrand and his colleagues set out to determine how circulating air — which is 100,000 times thinner than Earth’s — might shape features on the surface. The team pulled data from New Horizons’ 2015 flyby to depict Pluto’s topography and its blankets of nitrogen ice. They then simulated the nitrogen cycle with a weather forecast model and assessed how winds blew across the surface.

The group discovered Pluto’s winds above 4 kilometers (2.5 miles) blow to the west — the opposite direction from the dwarf planet’s eastern spin — in a retro-rotation during most of its year. As nitrogen within Tombaugh Regio vaporizes in the north and becomes ice in the south, its movement triggers westward winds, according to the new study. No other place in the solar system has such an atmosphere, except perhaps Neptune’s moon Triton.

The researchers also found a strong current of fast-moving, near-surface air along the western boundary of the Sputnik Planitia basin. The airflow is like boundary-current patterns on Earth, such as the Kuroshio current along the eastern edge of Asia. Atmospheric nitrogen condensing into ice drives this wind pattern, according to the new findings. Sputnik Planitia’s high cliffs trap the cold air inside the basin, where it circulates and becomes stronger as it passes through the western region.

The intense western boundary current’s existence excited Candice Hansen-Koharcheck, a planetary scientist with the Planetary Science Institute in Tucson, Arizona, who wasn’t involved with the new study.

“It’s very much the kind of thing that’s due to the topography or specifics of the setting,” she said. “I’m impressed that Pluto’s models have advanced to the point that you can talk about regional weather.”

On the broader scale, Hansen-Koharcheck thought the new study was intriguing. “This whole concept of Pluto’s beating heart is a wonderful way of thinking about it,” she added.

These wind patterns stemming from Pluto’s nitrogen heart may explain why it hosts dark plains and wind streaks to the west of Sputnik Planitia. Winds could transport heat, which would warm the surface, or could erode and darken the ice by transporting and depositing haze particles. If winds on the dwarf planet swirled in a different direction, its landscapes might look completely different.

“Sputnik Planitia may be as important for Pluto’s climate as the ocean is for Earth’s climate,” Bertrand said. “If you remove Sputnik Planitia — if you remove the heart of Pluto — you won’t have the same circulation,” he added.

The new findings allow researchers to explore an exotic world’s atmosphere and compare what they discover with what they know about Earth. The new study also shines light on an object 6 billion kilometers (3.7 billion miles) away from the sun, with a heart that captivated audiences around the globe.

“Pluto has some mystery for everybody,” Bertrand said.

Latest Breakthrough Brings World’s Most Powerful Particle Accelerator One Big Step Closer

EVERY TIME THEY TURN IT ON THEY CHANGE THE NATURE OF OUR QUANTUM UNIVERSE

EVERY TIME THEY TURN IT OFF THEY CHANGE THE NATURE OF OUR QUANTUM UNIVERSE 

EVERY TIME THEY USE IT THEY CHANGE THE NATURE OF OUR QUANTUM UNIVERSE 

THE UNIVERSE IS A QUANTUM EXISTENCE 


By HAYLEY DUNNING, IMPERIAL COLLEGE LONDON FEBRUARY 6, 2020


Muon Ionization Cooling Experiment (MICE). Credit: STFC
Scientists have demonstrated a key technology in making next-generation high-energy particle accelerators possible.

Particle accelerators are used to probe the make-up of matter in colliders like the Large Hadron Collider, and for measuring the chemical structure of drugs, treating cancers and manufacturing silicon microchips.

So far, the particles accelerated have been protons, electrons, and ions, in concentrated beams. However, an international team called the Muon Ionization Cooling Experiment (MICE) collaboration, which includes Imperial College London researchers, is trying to create a muon beam.

Muons are particles like electrons, but with much greater mass — roughly 200 times that of an electron. This means they could be used to create beams with ten times more energy than the Large Hadron Collider.

Muons can also be used to study the atomic structure of materials, as a catalyst for nuclear fusion and to see through really dense materials that X-rays can’t penetrate.
Success of a crucial step

MICE have today announced the success of a crucial step in creating a muon beam – corralling the muons into a small enough volume that collisions are more likely. The results were published in Nature yesterday, February 5, 2020.


Inside the MICE testing facility. Credit: STFC

The experiment was carried out using the MICE muon beam-line at the Science and Technology Facilities Council (STFC) ISIS Neutron and Muon Beam facility on the Harwell Campus in the UK.

Professor Ken Long, from the Department of Physics at Imperial, is the spokesperson for the experiment. He said: “The enthusiasm, dedication, and hard work of the international collaboration and the outstanding support of laboratory personnel at STFC and from institutes across the world have made this game-changing breakthrough possible.”


The experiment target. Credit: STFC

Muons are produced by smashing a beam of protons into a target. The muons can then be separated from the debris created at the target and directed through a series of magnetic lenses. The collected muons form a diffuse cloud, so when it comes to colliding them, the chances of them hitting each other and producing interesting physical phenomena are really low.

To make the cloud less diffuse, a process called beam cooling is used. This involves getting the muons closer together and moving in the same direction. Until now, however, magnetic lenses could only get the muons closer together, or get them moving in the same direction, but not both at the same time.
Cooling muons

The MICE Collaboration tested a completely new method to tackle this unique challenge, cooling the muons by putting them through specially designed energy-absorbing materials. This was done while the beam was very tightly focussed by powerful superconducting magnetic lenses.


Inside the MICE testing facility. Credit: STFC

After cooling the beam into a denser cloud, the muons can be accelerated by a normal particle accelerator in a precise direction, making it much more likely for the muons to collide. Alternatively, the cold muons can be slowed down so that their decay products can be studied.

Dr. Chris Rogers, based at STFC’s ISIS facility and the collaboration’s Physics Co-ordinator, explained: “MICE has demonstrated a completely new way of squeezing a particle beam into a smaller volume. This technique is necessary for making a successful muon collider, which could outperform even the Large Hadron Collider.”

Reference: “Demonstration of cooling by the Muon Ionization Cooling Experiment” by MICE collaboration, 5 February 2020, Nature.
DOI: 10.1038/s41586-020-1958-9

Gravity Mysteries – We May Have Had Fundamental Nature of the Universe Wrong This Whole Time


By KAVLI INSTITUTE FOR THE PHYSICS AND MATHEMATICS OF THE UNIVERSE FEBRUARY 7, 2020


Silly questions lead to surprising answers about the fundamental nature of the universe. We might have been getting it wrong this whole time. Credit: Kavli IPMU

Symmetry has been one of the guiding principles in physicists’ search for fundamental laws of nature. What does it mean for laws of nature to have symmetry? It means that the laws look the same before and after an operation, much as a mirror image is the same as the original, except that right and left are swapped.
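A toy version of that mirror test, purely my own illustration and not from the institute’s release: treat a “law” as a potential-energy function, and say it has mirror (parity) symmetry if swapping x for -x leaves it unchanged. A harmonic-oscillator potential passes; a cubic term, which singles out a direction in space, fails.

```python
# Toy check of mirror (parity) symmetry: a law is symmetric under
# reflection if it is unchanged when every position x becomes -x.

def symmetric_potential(x, k=1.0):
    # Harmonic oscillator: V(x) = 0.5 * k * x^2, an even function.
    return 0.5 * k * x**2

def asymmetric_potential(x, c=1.0):
    # A cubic term flips sign under reflection, breaking the symmetry.
    return c * x**3

xs = [-2.0, -0.5, 1.0, 3.0]
print(all(symmetric_potential(x) == symmetric_potential(-x) for x in xs))
print(any(asymmetric_potential(x) != asymmetric_potential(-x) for x in xs))
```

The Harlow-Ooguri result concerns far subtler global symmetries in quantum gravity, but the logic of the test is the same: apply the operation, and ask whether the law noticed.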

Physicists have been looking for laws that explain both the microscopic world of elementary particles and the macroscopic world of the universe and the Big Bang at its beginning, expecting that such fundamental laws should have symmetry in all circumstances. However, last year, two physicists found a theoretical proof that, at the most fundamental level, nature does not respect symmetry.

How did they do it? Gravity and holograms.

There are four fundamental forces in the physical world: electromagnetism, strong force, weak force, and gravity. Gravity is the only force still unexplainable at the quantum level. Its effects on big objects, such as planets or stars, are relatively easy to see, but things get complicated when one tries to understand gravity in the small world of elementary particles.


The researchers showed that symmetry only affects the shaded regions in the diagram, not around the spot in the middle, thus there cannot be global symmetry. Credit: Kavli IPMU

To try to understand gravity on the quantum level, Hirosi Ooguri, the director of the Kavli Institute for the Physics and Mathematics of the Universe in Tokyo, and Daniel Harlow, an assistant professor at the Massachusetts Institute of Technology, started with the holographic principle. This principle explains three-dimensional phenomena influenced by gravity on a two-dimensional flat space that is not influenced by gravity. This is not a real representation of our universe, but it is close enough to help researchers study its basic aspects.

The pair then showed that quantum error-correcting codes, which explain how three-dimensional gravitational phenomena pop out of two dimensions like holograms, are not compatible with any global symmetry, meaning such symmetry cannot be possible in quantum gravity.

They published their conclusion in 2019, garnering high praise from journal editors and significant media attention. But how did such an idea come to be?


It started well over four years ago, when Ooguri came across a paper about holography and its relation to quantum error-correcting codes by Harlow, who was then a postdoc at Harvard University. Soon after, the two met at the Institute for Advanced Study in Princeton when Ooguri was there on sabbatical and Harlow came to give a seminar.

“I went to his seminar prepared with questions,” Ooguri says. “We discussed a lot afterwards, and then we started thinking maybe this idea he had can be used to explain one of the fundamental properties of quantum gravity, about the lack of symmetry.”

New research collaborations and ideas are often born from such conversations, says Ooguri, who is also a professor at the California Institute of Technology in the U.S. Ooguri travels at least once a fortnight to give lectures, attend conferences, workshops, and other events. While some might wonder if all that travel detracts from concentrating on research, Ooguri believes quite the opposite.

“Scientific progress is serendipitous,” he says. “It often happens in a way that you don’t expect. That kind of development is still very hard to achieve by remote exchange.

“Yes, nowadays it’s easier with e-mails and video conferences,” he continues, “but when you write an e-mail you have to have something to write about. When someone is in the same building, I can walk across the hallway and ask silly questions.”

These silly questions are key to progress in the fundamental sciences. Unlike in applied fields, where researchers work toward a specific goal, the first question or idea a theoretical physicist comes up with is usually not the right one, Ooguri says. But through discussion, other researchers ask questions born of their own curiosity, taking the research in a new direction and sometimes landing on a far more interesting question, which has an even more interesting answer.

Reference: “Constraints on Symmetries from Holography” by Daniel Harlow and Hirosi Ooguri, 17 May 2019, Physical Review Letters.
DOI: 10.1103/PhysRevLett.122.191601

Details Revealed of Asteroid So Heavily Cratered It’s Been Dubbed the “Golf Ball Asteroid”


By JENNIFER CHU, MASSACHUSETTS INSTITUTE OF TECHNOLOGY FEBRUARY 10, 2020



Two views of the asteroid Pallas, which researchers have determined to be the most heavily cratered object in the asteroid belt. Credit: Image courtesy of the researchers
A tilted orbit may explain the asteroid Pallas’ highly cratered surface.

Asteroids come in all shapes and sizes, and now astronomers at MIT and elsewhere have observed an asteroid so heavily cratered that they are dubbing it the “golf ball asteroid.”

The asteroid is named Pallas, after the Greek goddess of wisdom, and was originally discovered in 1802. Pallas is the third largest object in the asteroid belt, and is about one-seventh the size of the moon. For centuries, astronomers have noticed that the asteroid orbits along a significantly tilted track compared with the majority of objects in the asteroid belt, though the reason for its incline remains a mystery.

In a paper published today (February 10, 2020) in Nature Astronomy, researchers reveal detailed images of Pallas, including its heavily cratered surface, for the first time.

The researchers suspect that Pallas’ pummeled surface is a result of the asteroid’s skewed orbit: While most objects in the asteroid belt travel roughly along the same elliptical track around the sun, much like cars on a race course, Pallas’ tilted orbit is such that the asteroid has to smash its way through the asteroid belt at an angle. Any collisions that Pallas experiences along its way would be around four times more damaging than collisions between two asteroids in the same orbit.
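The “four times more damaging” figure follows from simple kinematics: the kinetic energy an impactor delivers scales with the square of the relative velocity, so a tilted orbit that roughly doubles the typical encounter speed quadruples the energy per collision. A minimal sketch of that scaling, with assumed illustrative speeds (the actual encounter velocities are in the paper, not here):

```python
# Illustrative sketch, not the study's actual model: kinetic energy per
# unit impactor mass scales with the square of the relative velocity,
# so an encounter at roughly twice the typical in-plane speed delivers
# about four times the energy.
def specific_impact_energy(v_rel_km_s: float) -> float:
    """Kinetic energy per kilogram of impactor (joules) at a given
    relative velocity in km/s."""
    v_m_s = v_rel_km_s * 1000.0  # convert km/s to m/s
    return 0.5 * v_m_s ** 2

v_typical = 5.0   # assumed typical belt-on-belt encounter speed, km/s
v_pallas = 10.0   # assumed roughly doubled speed due to the tilted orbit

ratio = specific_impact_energy(v_pallas) / specific_impact_energy(v_typical)
print(f"energy ratio: {ratio:.0f}x")  # prints "energy ratio: 4x"
```

Whatever the exact speeds, the ratio depends only on the velocity ratio squared, which is why the inclination alone is enough to explain the extra damage.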

“Pallas’ orbit implies very high-velocity impacts,” says Michaël Marsset, the paper’s lead author and a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “From these images, we can now say that Pallas is the most cratered object that we know of in the asteroid belt. It’s like discovering a new world.”

Marsset’s co-authors include collaborators from 21 research institutions around the world.
“A violent history”

The team, led by principal investigator Pierre Vernazza from the Laboratoire d’Astrophysique de Marseille in France, obtained images of Pallas using the SPHERE instrument at the European Southern Observatory’s Very Large Telescope (VLT), an array of four telescopes, each with an 8-meter-wide mirror, situated in the mountains of Chile. In 2017, and then again in 2019, Marsset and his colleagues reserved one of the four telescopes for several days at a time to see if they could capture images of Pallas at the point in its orbit closest to Earth.

The team obtained 11 series of images over two observing runs, catching Pallas from different angles as it rotated. After compiling the images, the researchers generated a 3D reconstruction of the shape of the asteroid, along with a crater map of its poles and parts of its equatorial region.

In all, they identified 36 craters larger than 30 kilometers in diameter — about one-fifth the diameter of Earth’s Chicxulub crater, formed by the impact that likely killed off the dinosaurs 65 million years ago. Pallas’ craters appear to cover at least 10 percent of the asteroid’s surface, which is “suggestive of a violent collisional history,” as the researchers state in their paper.

To see how violent that history likely has been, the team ran a series of simulations of Pallas and its interactions with the rest of the asteroid belt over the last 4 billion years — about the age of the solar system. They did the same with Ceres and Vesta, taking into account each asteroid’s size, mass, and orbital properties, as well as the speed and size distributions of objects within the asteroid belt. They recorded each time a simulated collision produced a crater, on either Pallas, Ceres, or Vesta, that was at least 40 kilometers wide (the size of most of the craters that they observed on Pallas).

They found that a 40-kilometer crater on Pallas could be made by a collision with a much smaller object compared to the same size crater on either Ceres or Vesta. Because small asteroids are much more numerous in the asteroid belt than larger ones, this implies that Pallas has a higher likelihood of experiencing high-velocity cratering events than the other two asteroids.

“Pallas experiences two to three times more collisions than Ceres or Vesta, and its tilted orbit is a straightforward explanation for the very weird surface that we don’t see on either of the other two asteroids,” Marsset says.
A fragmented family

The researchers made two additional discoveries from their images: a curiously bright spot in the asteroid’s southern hemisphere and an extremely large impact basin along the asteroid’s equator.

For the latter discovery, the team looked for explanations for what may have caused such a large impact basin, estimated to be about 400 kilometers wide.

They simulated various impacts along the equator, and also tracked the fragments that likely were carved out of Pallas’ surface and spewed out into space as the result of each impact.

From their simulations, the team concludes that the large impact basin was likely the result of a collision about 1.7 billion years ago with an object 20 to 40 kilometers wide, which ejected fragments of the asteroid out into space in a pattern that, as it happens, matches the family of fragments observed trailing Pallas today.

“The equator excavation could very well relate to the current Pallas family of fragments,” says study co-author Miroslav Brož of the Astronomical Institute of Charles University in Prague.

As for the bright spot discovered in Pallas’ southern hemisphere, the researchers are still unclear as to what it might be. Their leading theory is that the region could be a very large salt deposit. From their three-dimensional reconstruction of the asteroid, the researchers estimated Pallas’ volume, and, combined with its known mass, they calculate that its density differs from that of either Ceres or Vesta, and that it likely originally formed from a mixture of water ice and silicates. Over time, as the ice in the asteroid’s interior melted, it likely hydrated the silicates, forming salt deposits that could have been exposed following an impact.

One supporting piece of evidence for this hypothesis may come from closer to Earth. Each December, stargazers can view a dazzling display known as the Geminids — a shower of meteors that are fragments of the asteroid Phaethon, which itself is thought to be an escaped fragment of Pallas that eventually made its way into Earth’s orbit. Astronomers have long noted a range of sodium content in the Geminid showers, which Marsset and his colleagues now posit may have originated from salt deposits within Pallas.

“People have proposed missions to Pallas with very small, cheap satellites,” Marsset says. “I don’t know if they would happen, but they could tell us more about the surface of Pallas and the origin of the bright spot.”

Reference: “The violent collisional history of aqueously evolved (2) Pallas” by Michaël Marsset, Miroslav Brož, Pierre Vernazza, Alexis Drouard, Julie Castillo-Rogez, Josef Hanuš, Matti Viikinkoski, Nicolas Rambaux, Benoît Carry, Laurent Jorda, Pavel Ševeček, Mirel Birlan, Franck Marchis, Edyta Podlewska-Gaca, Erik Asphaug, Przemyslaw Bartczak, Jérôme Berthier, Fabrice Cipriani, François Colas, Grzegorz Dudziński, Christophe Dumas, Josef Ďurech, Marin Ferrais, Romain Fétick, Thierry Fusco, Emmanuel Jehin, Mikko Kaasalainen, Agnieszka Kryszczynska, Philippe Lamy, Hervé Le Coroller, Anna Marciniak, Tadeusz Michalowski, Patrick Michel, Derek C. Richardson, Toni Santana-Ros, Paolo Tanga, Frédéric Vachier, Arthur Vigan, Olivier Witasse and Bin Yang, 10 February 2020, Nature Astronomy.
DOI: 10.1038/s41550-019-1007-5

This research was supported, in part, by NASA, the French Ministry of Defense, Aix-Marseille University, and the European Union’s Horizon 2020 research and innovation program.
Harnessing Sunlight to Efficiently Make Fresh Drinkable Water From Seawater


By DAVID L. CHANDLER, MASSACHUSETTS INSTITUTE OF TECHNOLOGY FEBRUARY 7, 2020


Tests on an MIT building rooftop showed that a simple proof-of-concept desalination device could produce clean, drinkable water at a rate equivalent to more than 1.5 gallons per hour for each square meter of solar collecting area. Credit: Images courtesy of the researchers

Simple, solar-powered water desalination system achieves new level of efficiency in harnessing sunlight to make fresh potable water from seawater.

A completely passive solar-powered desalination system developed by researchers at MIT and in China could provide more than 1.5 gallons of fresh drinking water per hour for every square meter of solar collecting area. Such systems could potentially serve off-grid arid coastal areas to provide an efficient, low-cost water source.

The system uses multiple layers of flat solar evaporators and condensers, lined up in a vertical array and topped with transparent aerogel insulation. It is described in a paper published yesterday (February 6, 2020) in the journal Energy and Environmental Science, authored by MIT doctoral students Lenan Zhang and Lin Zhao, postdoc Zhenyuan Xu, professor of mechanical engineering and department head Evelyn Wang, and eight others at MIT and at Shanghai Jiao Tong University in China.

The key to the system’s efficiency lies in the way it uses each of the multiple stages to desalinate the water. At each stage, heat released by the previous stage is harnessed instead of wasted. In this way, the team’s demonstration device can achieve an overall efficiency of 385 percent in converting the energy of sunlight into the energy of water evaporation.
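An efficiency above 100 percent sounds paradoxical, but it simply means the same solar heat drives evaporation more than once as it cascades down the stack. A toy model with assumed parameters (not the paper’s actual numbers) shows the shape of the effect — each stage re-uses a fraction of the latent heat released by condensation in the stage above:

```python
# Illustrative toy model with assumed parameters, not the paper's
# framework: stage 1 converts some fraction of sunlight into
# evaporation; each later stage re-uses a fraction of the latent heat
# released by condensation in the stage before it, so total evaporation
# energy can exceed the solar input.
def multistage_efficiency(n_stages: int, first_stage_eff: float,
                          heat_recovery: float) -> float:
    """Total evaporation energy divided by solar input for an n-stage still.

    first_stage_eff: fraction of sunlight driving evaporation in stage 1.
    heat_recovery: fraction of each stage's latent heat passed onward.
    """
    total = 0.0
    stage_energy = first_stage_eff
    for _ in range(n_stages):
        total += stage_energy          # evaporation done in this stage
        stage_energy *= heat_recovery  # heat handed down to the next layer
    return total

# Assumed numbers chosen only to show the effect for a 10-stage stack:
print(f"{multistage_efficiency(10, 0.7, 0.8):.0%}")
```

With these assumed inputs the model lands in the same few-hundred-percent range as the team’s reported 385 percent; in the real device, the stage-to-stage heat recovery is exactly what the researchers’ optimization framework is designed to maximize.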

The device is essentially a multilayer solar still, with a set of evaporating and condensing components like those used to distill liquor. It uses flat panels to absorb heat and then transfer that heat to a layer of water so that it begins to evaporate. The vapor then condenses on the next panel. That water gets collected, while the heat from the vapor condensation gets passed to the next layer.


Diagram illustrates the basic structure of the proposed desalination system. Sunlight passes through a transparent insulating layer at left to heat a black heat-absorbing material, which transfers the heat to a water-carrying wicking layer (shown in blue); the water evaporates, condenses on a surface (gray), and drips off to be collected as fresh, potable water. Credit: Images courtesy of the researchers

Whenever vapor condenses on a surface, it releases heat; in typical condenser systems, that heat is simply lost to the environment. But in this multilayer evaporator the released heat flows to the next evaporating layer, recycling the solar heat and boosting the overall efficiency.

“When you condense water, you release energy as heat,” Wang says. “If you have more than one stage, you can take advantage of that heat.”

Adding more layers increases the conversion efficiency for producing potable water, but each layer also adds cost and bulk to the system. The team settled on a 10-stage system for their proof-of-concept device, which was tested on an MIT building rooftop. The system delivered pure water that exceeded city drinking water standards, at a rate of 5.78 liters per hour per square meter (about 1.52 gallons per 11 square feet) of solar collecting area. This is more than two times as much as the record amount previously produced by any such passive solar-powered desalination system, Wang says.
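The metric-to-US conversion quoted above can be checked with nothing but standard conversion factors (the small difference from the article’s 1.52-gallon figure is rounding):

```python
# Quick unit check on the reported rate; uses only standard conversion
# factors, no assumptions about the device itself.
LITERS_PER_US_GALLON = 3.78541
SQ_FT_PER_SQ_METER = 10.7639

rate_l_per_m2 = 5.78                             # reported liters per hour per m^2
gallons = rate_l_per_m2 / LITERS_PER_US_GALLON   # ~1.53 US gallons
area_ft2 = 1.0 * SQ_FT_PER_SQ_METER              # ~10.8 square feet

print(f"{gallons:.2f} gal per {area_ft2:.0f} sq ft")
```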

Theoretically, with more desalination stages and further optimization, such systems could reach overall efficiency levels as high as 700 or 800 percent, Zhang says.

Unlike some desalination systems, there is no accumulation of salt or concentrated brines to be disposed of. In a free-floating configuration, any salt that accumulates during the day would simply be carried back out at night through the wicking material and back into the seawater, according to the researchers.

Their demonstration unit was built mostly from inexpensive, readily available materials such as a commercial black solar absorber and paper towels for a capillary wick to carry the water into contact with the solar absorber. In most other attempts to make passive solar desalination systems, the solar absorber material and the wicking material have been a single component, which requires specialized and expensive materials, Wang says. “We’ve been able to decouple these two.”

The most expensive component of the prototype is a layer of transparent aerogel used as an insulator at the top of the stack, but the team suggests other less expensive insulators could be used as an alternative. (The aerogel itself is made from dirt-cheap silica but requires specialized drying equipment for its manufacture.)

Wang emphasizes that the team’s key contribution is a framework for understanding how to optimize such multistage passive systems, which they call thermally localized multistage desalination. The formulas they developed could likely be applied to a variety of materials and device architectures, allowing for further optimization of systems based on different scales of operation or local conditions and materials.

One possible configuration would be floating panels on a body of saltwater such as an impoundment pond. These could constantly and passively deliver fresh water through pipes to the shore, as long as the sun shines each day. Other systems could be designed to serve a single household, perhaps using a flat panel on a large shallow tank of seawater that is pumped or carried in. The team estimates that a system with a roughly 1-square-meter solar collecting area could meet the daily drinking water needs of one person. In production, they think a system built to serve the needs of a family might be built for around $100.

The researchers plan further experiments to continue to optimize the choice of materials and configurations, and to test the durability of the system under realistic conditions. They also will work on translating the design of their lab-scale device into something that would be suitable for use by consumers. The hope is that it could ultimately play a role in alleviating water scarcity in parts of the developing world where reliable electricity is scarce but seawater and sunlight are abundant.

“This new approach is very significant,” says Ravi Prasher, an associate lab director at Lawrence Berkeley National Laboratory and adjunct professor of mechanical engineering at the University of California at Berkeley, who was not involved in this work. “One of the challenges in solar still-based desalination has been low efficiency due to the loss of significant energy in condensation. By efficiently harvesting the condensation energy, the overall solar to vapor efficiency is dramatically improved. … This increased efficiency will have an overall impact on reducing the cost of produced water.”

Reference: “Ultrahigh-efficiency desalination via a thermally-localized multistage solar still” by Zhenyuan Xu, Lenan Zhang, Lin Zhao, Bangjun Li, Bikram Bhatia, Chenxi Wang, Kyle L. Wilke, Youngsup Song, Omar Labban, John H. Lienhard, Ruzhu Wang and Evelyn N. Wang, 15 January 2020, Energy and Environmental Science.
DOI: 10.1039/C9EE04122B

The research team included Bangjun Li, Chenxi Wang and Ruzhu Wang at the Shanghai Jiao Tong University, and Bikram Bhatia, Kyle Wilke, Youngsup Song, Omar Labban, and John Lienhard, who is the Abdul Latif Jameel Professor of Water at MIT. The research was supported by the National Natural Science Foundation of China, the Singapore-MIT Alliance for Research and Technology, and the MIT Tata Center for Technology and Design.