Sunday, May 30, 2021

Naftali Bennett: The right-wing millionaire who may end Netanyahu era

FILE PHOTO: Israeli Education Minister Naftali Bennett speaks during a reception hosted by the Orthodox Union in Jerusalem ahead of the opening of the new U.S. embassy in Jerusalem

By Maayan Lubell

JERUSALEM (Reuters) - Naftali Bennett, Israel's likely next prime minister, is a self-made tech millionaire who dreams of annexing most of the occupied West Bank.

Bennett has said that creation of a Palestinian state would be suicide for Israel, citing security reasons.

But the standard-bearer of Israel's religious right and staunch supporter of Jewish settlements said on Sunday he was joining forces with his political opponents to save the country from political disaster.

The son of American immigrants, Bennett, 49, is a generation younger than 71-year-old Prime Minister Benjamin Netanyahu, Israel's longest-serving leader.

A former commando, Bennett named his eldest son after Netanyahu's brother, Yoni, who was killed in an Israeli raid to free hijacked passengers at Uganda's Entebbe airport in 1976.

Bennett has had a long and often rocky relationship with Netanyahu, working between 2006 and 2008 as a senior aide to the then-opposition leader before leaving on reported bad terms.

Bennett stormed into national politics in 2013, revamping a pro-settler party and serving as minister of defence as well as of education and the economy in various Netanyahu governments.

A former leader of Yesha, the main settler movement in the West Bank, Bennett made annexation of parts of the territory that Israel captured in a 1967 war a major feature of his political platform.

But as head of a so-called government of "change" that would include left-wing and centrist parties and rely on support in parliament from Arab legislators, Bennett would find following through on annexation politically unfeasible.

Bennett said on Sunday both the right and left would have to compromise on such ideological matters.

Born in the Israeli city of Haifa to immigrants from San Francisco, Bennett is a modern-Orthodox religious Jew. He lives with his wife, Gilat, a dessert chef, and their four children in the affluent Tel Aviv suburb of Raanana.

Like Netanyahu, Bennett speaks fluent American-accented English and spent some of his childhood in North America, where his parents were on sabbatical.

While working in the high-tech sector, Bennett studied law at Jerusalem's Hebrew University. In 1999, he formed a start-up and then moved to New York, eventually selling his anti-fraud software company, Cyota, to U.S. security firm RSA for $145 million in 2005.

POLICY

Last year, as Netanyahu’s government sought to press ahead with West Bank annexation and settlement building in the final months of the Trump administration, Bennett, then defence chief, said: "The building momentum in the country must not be stopped, even for a second."

The annexation plan was eventually scrapped when Israel formalised ties with the United Arab Emirates. Analysts see little chance, if any, of it being resurrected under Donald Trump's Democratic successor, President Joe Biden.

Nonetheless, Palestinians are likely to regard Bennett's elevation as a blow to hopes of a negotiated peace and an independent state, the long-standing diplomatic formula that Biden favours.

After Israel in March held its fourth election in two years, Bennett, who leads the far-right Yamina party, said a fifth vote would be a national calamity and entered talks with the centre-left bloc that forms the main opposition to Netanyahu.

An advocate of liberalising the economy, Bennett has voiced support for cutting government red tape and taxes.

Unlike some of his former allies on the religious right, Bennett is comparatively liberal on issues such as gay rights and the relationship between religion and state in a country where Orthodox rabbis wield strong influence.

(Reporting by Maayan Lubell; Editing by Jeffrey Heller and Giles Elgood)

Israel's ex-defence minister Bennett shows Pakistani hospital in video as 'Hamas headquarters'

MENA
The New Arab Staff
30 May, 2021
Naftali Bennett, the right-wing politician who is in the running to become Israel's next prime minister, showed an image of a Pakistani hospital while describing the building as the headquarters of the Palestinian militant group Hamas.


The hospital shown in the video is located in Islamabad, Pakistan [YouTube]

Israel's former defence minister was caught out this week by Pakistani social media users who highlighted the right-wing politician's false use of a photo of a Pakistani hospital to describe the "headquarters" of Hamas in the besieged Gaza Strip.

Naftali Bennett, who is currently in talks with other Israeli opposition leaders to form a new government, used a photo of the Shifa International Hospital in the Pakistani capital Islamabad in a video posted on May 20.

The Yamina leader unwittingly used the photo to represent the Al-Shifa Hospital in Gaza City, which he falsely alleged was used as the headquarters of the Hamas militant group.

Al-Shifa is the largest hospital in the Palestinian enclave and treated hundreds of civilians injured during 11 days of intensive Israeli air strikes and shelling this month.

In the video, addressed to celebrities who have criticised Israel over the brutal bombardment and occupation of Palestinian lands, Bennett also claimed a school was being used as a weapons store by Hamas.

He also repeated the official Israeli allegation that the Al-Jalaa Tower in Gaza, which contained homes and international media offices before being completely destroyed in an air strike, was a Hamas military intelligence office. Israel has not provided any evidence for those claims.

Bennett is not the only high-profile Israeli to have shared images from another country falsely purporting to be from Palestine.



Yair Netanyahu, son of the current Israeli Prime Minister Benjamin Netanyahu, shared a video clip of a "funeral" where bodies are lying on the ground covered in white shrouds. As the camera moves, one of the bodies can be seen moving.

Netanyahu, who is known for his controversial statements on social media, claimed that the video showed "Paliwood" actors faking deaths in the Gaza Strip. The video was in fact from a 2013 protest in Egypt.

Similarly, Ofir Gendelman, the spokesperson of Prime Minister Netanyahu, shared a video which he claimed showed Palestinian militants launching rockets in a densely populated area. The video was taken in Syria in 2018.

Gendelman also shared a video, which he said showed Palestinians staging bodies for a fake photo. The video actually showed people preparing for a bomb drill.

Another viral video claimed to show Palestinians faking injuries. The video was actually from 2017 and showed the application of film and special effects make-up as part of a medical training exercise organised by Doctors of the World.

The Age of Reason and the Restless Masses: Censoring Class Consciousness in the Nineteenth Century

Historians/History
tags: censorship, Thomas Paine, Free Press, blasphemy, Publication, Richard Carlile

Eric Berkowitz is a writer, lawyer, and journalist. For more than 20 years, he practiced intellectual property and business litigation law in Los Angeles. Berkowitz has published widely throughout his career, and his writing has appeared in periodicals such as the New York Times, the Washington Post, The Economist, the Los Angeles Times, and LA Weekly. His previous books include Sex and Punishment and The Boundaries of Desire. He lives in San Francisco. His new book, Dangerous Ideas: A Brief History of Censorship in the West, From the Ancients to Fake News, was released this month by Beacon Press.

 

 

Adapted from Dangerous Ideas: A Brief History of Censorship in the West, from the Ancients to Fake News

 

“The barbarians that threaten society,” declared a French legislative deputy in the early 1830s, are the “[working classes] of our manufacturing towns.” In France, and throughout nineteenth-century Europe, elites were intent on choking off information that might stir the working and lower classes to demand political and social rights, which the upper orders equated with rebellion. Censorship of the press, and of the theatre, caricature, and, soon after the century’s end, cinema, was consistently driven by the governing classes’ morbid conviction that an informed populace would become an inflamed populace, and that their own demise would follow.

“Four hostile newspapers are more to be feared than a thousand bayonets,” observed Napoleon Bonaparte. “If I allowed a free press, I would not be in power for another three months.”  England’s Lord Grenville warned in 1817 that the press’s “wicked and blasphemous productions” did not merely risk changes in government; they questioned “whether government should exist at all.” Two years later, Austria’s foreign minister Klemens von Metternich called the press a “scourge” and the era’s “greatest and … most urgent evil.”

Government survived, of course, and much of the censorship during this period was paranoid and ludicrous. Austria banned the word “liberté” on the sides of imported boxes of china, while Russian cookbooks could not refer to “free air” in ovens. But however overwrought were the fears of the elites, the faith among workers and their allies that a free press would solve society’s ills was equally intense. In 1842, Karl Marx extolled an unregulated press as “the ideal world, which constantly gushes from the real one and streams back to it ever richer and animated anew”—a numinous sentiment later hardened into a battle cry by the liberal German parliamentarian Georg von Bunsen: “The fight for the freedom of the press is a holy war, the holy war of the nineteenth century.” By 1849, as revolutions roiled Europe and Marx stood trial for publishing derogatory remarks about German officials, he viewed the press as a force to “undermine all the foundations of the existing political system.”

Authorities viewed church and state as codependent; an attack on one was regarded as an assault on the other. Since the French Revolution, radical politics in England and on the Continent had involved distrust of religious institutions and questioning the tenets of Christianity. In the view of a London judge in 1819, the Revolution had been a dark time when “the worship of Christ was neglected,” which resulted in “the bands of society torn asunder, and a dreadful scene of anarchy, blood, and confusion.” France had been defeated, but the specter of religion’s “neglect” in England remained wherever challenges to authority were raised. “Everything . . . in the nineteenth century,” observes the historian E. P. Thompson, “was turned into a battleground of class.”

That battleground was frequently located in courthouses. English dissenters were often accused of seditious libel—a doctrine criminalizing most criticism of government or church, whether true or not—but authorities also brought hundreds of blasphemy cases, which they viewed as an easier sell to middle-class juries. Apparently, verbal attacks on church or scripture were considered less problematic when consumed in sumptuous surroundings than when they were aimed at the lower classes. A worker hoping for comfort in the next world, as opposed to material gains in this one, was seen as a more docile worker, and that could not be allowed to change.

“The gospel is preached particularly for the poor,” the prosecutor said in an 1819 blasphemy trial, which he framed as being “for the purpose of protecting the lower and illiterate classes from having their faith sapped” and their “deference to the laws of God, and of their country” diminished. In that case, the defendant was the radical pressman Richard Carlile, and the blasphemous material was an inexpensive version of The Age of Reason, Thomas Paine’s broadside against Christianity and the Bible. Paine’s attacks on the Bible’s “lies,” “absurdities,” “atrocities,” and “contradictions” gave prosecutors much to work with. The book labeled Christians as infidels, framed Christianity as a heathen mythology, and characterized the Immaculate Conception as an “obscene” tale of a young woman “debauched by a ghost.” Religion itself was cast as a political weapon to crush the common people.

Carlile tried to demonstrate that Paine’s critiques of religion were in fact correct, but the judge stopped him: “I cannot let men be acquitted of . . . violating the law because they are unbelievers.” When Carlile tried to show inconsistencies in the Bible, the judge again refused: “You cannot go into the truth of the Christian religion. . . . You are not at liberty to do anything to question the divine origin of Christianity.”

Carlile lost the case and was sent to jail, but his goal was achieved, at least for the moment. Court rules permitted him to read the entirety of The Age of Reason while on the stand, and the law allowed the publication of court proceedings. Carlile’s wife, Jane, went into action, selling ten thousand copies of Paine’s work in the form of a trial report in a matter of weeks. More importantly, Carlile’s travails sparked widespread discussion on the meaning of freedom of the press. The question was not whether Paine’s opinions were correct, but whether Carlile should be jailed for publishing them.

A small army of supporters took up his cause, and Jane took over his Fleet Street printing shop and was jailed herself at least four times. Other supporters continued publishing the writings of Paine and Carlile; they were almost all summarily tried and convicted. So many of Carlile’s shopmen and supporters ended up in Newgate Prison with him that they launched a magazine, Newgate Monthly, which managed to remain in publication out of Carlile’s shop for two years.

“My whole and sole object, from first to last,” Carlile wrote in characteristically grandiose terms, “has been a Free Press and Free Discussion.” In the end, Carlile paid a big price for this, spending a total of about nine years in jail, impoverishing himself and his family, and sparking dozens of blasphemy prosecutions against himself and his supporters. But he obtained a good measure of satisfaction through attrition: after nearly a decade of blasphemy prosecutions against Carlile and his followers, The Age of Reason remained in circulation.

Further issues arose when states tried to accommodate the demands of the middle and commercial classes for materials that were off-limits to the poor. At various times, France, Germany, and Russia all exempted expensive works from prepublication censorship while imposing it on cheaper books, newspapers, and pamphlets. In Austria and Russia, a book could be banned when published individually at a low price, and permitted when sold as part of an expensive set, which happened with Tolstoy’s The Kreutzer Sonata. The author’s wife, Sonya, pleaded with the tsar to allow it to be published as part of Tolstoy’s collected works, since, as the tsar himself noted, “not everyone could afford to buy the whole set.”

At the same time, books with troublesome subject matter could be allowed if the lower classes were deemed too ignorant to grasp them. This occurred, with delicious irony, in 1867, with the book that became the seminal text of communism: Karl Marx’s Das Kapital. Russian authorities allowed it in both the original German and in translation, because it was “difficult” and “inaccessible,” its socialist message deemed buried in a “colossal mass of abstruse, somewhat obscure” arguments.

Authorities were especially alarmed by media that did not require reading—particularly theatre, drawings, and caricature—which communicated to a broader, poorer audience than printed text did and carried a more powerful, visceral impact. In several countries, advance censorship of theatre and graphic art continued long after such controls had been dropped for press publications. Printed text came to be regarded as a less threatening communications medium, because so many poor people remained illiterate or semiliterate.

Whether in theatre, opera, film, or even songs, the perceived threat of instant, unmediated communication was compounded by the fact that these media were consumed collectively. They were thus, according to the scholar Robert Goldstein, considered “far more likely to provoke immediate action than printed matter typically read in the privacy of (often middle-class) homes.” As elaborated by an Austrian censor in 1795:

Censorship of the theatre must be much stricter than the normal censorship of printed reading matter. . . . The impression made [by a dramatic work] is infinitely more powerful . . . because [it] engages the eyes and ears and is intended to penetrate the will of the spectator in order to attain the emotional effects intended; this is something that reading alone does not achieve. Censorship of books can . . . make them accessible only to a certain kind of reader, whereas the playhouse by contrast is open to the entire public, which consists of every class, every walk of life, and every age.

And when this impressive experience is shared in darkened rooms by simple people, according to a French theatre censor in 1862, the result is a risk of chaos:

An electric current runs through the playhouse, passing from actor to spectator, inflaming them both with a sudden ardor and giving them an unexpected audacity. The public is like a group of children. Each of them by themselves is sweet, innocuous, sometimes fearful, but bring them together and you are faced with a group that is bold and noisy, often wicked. The courage or rather the cowardice of anonymity is such a powerful force!

Preventing such electric currents kept theatre censors busy throughout the century. While special scrutiny was paid to inexpensive venues, plays performed before all strata of society were examined. Regardless of where they were performed, any play that impugned ruling authority would likely be censored. Austrian censors demanded that even fictional kings be depicted with delicacy. The producers of Shakespeare’s tragedy King Lear were told in 1826 to rewrite the play so that Lear did not die at the end, even though the story was rendered incomprehensible as a result. The censor believed it was wrong to show a king dying in a state of abject insanity.

Class-based censorship continued into the twentieth century, particularly of the new and first truly mass medium of cinema. The degree of concern over how impecunious audiences might respond to political, sexual, or criminal messaging in movies would be laughable, had it not been so harmful. But restrictions on cinema were soon folded into a more complex global matrix of censorship, lies, and selective truth telling that took shape amid the propaganda-soaked cataclysms of two world wars and the rise of broadcast communications. As political censorship became associated with the regimes of industrialized murder and dissent came to be viewed as a positive attribute of a free society rather than the seed of its downfall, and as the West remade itself after World War II, the commitment to a truly free press and unconstrained self-expression—with significant hiccups, backtracking, and interruptions—expanded as never before.


Review: Lesley Blume's “Fallout: The Hiroshima Cover-up and the Reporter Who Revealed It to the World”

Books
tags: nuclear weapons, Hiroshima, censorship, journalism, atomic bomb, Nagasaki, World War 2, John Hersey


Dr. Lawrence Wittner is Professor of History Emeritus at SUNY/Albany and the author of Confronting the Bomb (Stanford University Press).

In this crisply written, well-researched book, Lesley Blume, a journalist and biographer, tells the fascinating story of the background to John Hersey’s pathbreaking article “Hiroshima,” and of its extraordinary impact upon the world.

In 1945, although only 30 years of age, Hersey was a very prominent war correspondent for Time magazine—a key part of publisher Henry Luce’s magazine empire—and living in the fast lane.  That year, he won the Pulitzer Prize for his novel, A Bell for Adano, which had already been adapted into a movie and a Broadway play.  Born the son of missionaries in China, Hersey had been educated at upper class, elite institutions, including the Hotchkiss School, Yale, and Cambridge.  During the war, Hersey’s wife, Frances Ann, a former lover of young Lieutenant John F. Kennedy, arranged for the three of them to get together over dinner.  Kennedy impressed Hersey with the story of how he saved his surviving crew members after a Japanese destroyer rammed his boat, PT-109.  This led to a dramatic article by Hersey on the subject—one rejected by the Luce publications but published by the New Yorker.  The article launched Kennedy on his political career and, as it turned out, provided Hersey with the bridge to a new employer – the one that sent him on his historic mission to Japan.

Blume reveals that, at the time of the U.S. atomic bombing of Hiroshima, Hersey felt a sense of despair—not for the bombing’s victims, but for the future of the world.  He was even more disturbed by the atomic bombing of Nagasaki only three days later, which he considered a “totally criminal” action that led to tens of thousands of unnecessary deaths.

Most Americans at the time did not share Hersey’s misgivings about the atomic bombings.  A Gallup poll taken on August 8, 1945 found that 85 percent of American respondents expressed their support for “using the new atomic bomb on Japanese cities.”

Blume shows very well how this approval of the atomic bombing was enhanced by U.S. government officials and the very compliant mass communications media.  Working together, they celebrated the power of the new American weapon that, supposedly, had brought the war to an end, producing articles lauding the bombing mission and pictures of destroyed buildings.  What was omitted was the human devastation, the horror of what the atomic bombing had done physically and psychologically to an almost entirely civilian population—the flesh roasted off bodies, the eyeballs melting, the terrible desperation of mothers digging with their hands through the charred rubble for their dying children.

The strange new radiation sickness produced by the bombing was either denied or explained away as of no consequence.  “Japanese reports of death from radioactive effects of atomic bombing are pure propaganda,” General Leslie Groves, the head of the Manhattan Project, told the New York Times.  Later, when it was no longer possible to deny the existence of radiation sickness, Groves told a Congressional committee that it was actually “a very pleasant way to die.”

When it came to handling the communications media, U.S. government officials had some powerful tools at their disposal.  In Japan, General Douglas MacArthur, the supreme commander of the U.S. occupation regime, saw to it that strict U.S. military censorship was imposed on the Japanese press and other forms of publication, which were banned from discussing the atomic bombing.  As for foreign newspaper correspondents (including Americans), they needed permission from the occupation authorities to enter Japan, to travel within Japan, to remain in Japan, and even to obtain food in Japan.  American journalists were taken on carefully controlled junkets to Hiroshima, after which they were told to downplay any unpleasant details of what they had seen there.

In September 1945, U.S. newspaper and magazine editors received a letter from the U.S. War Department, on behalf of President Harry Truman, asking them to restrict information in their publications about the atomic bomb.  If they planned to do any publishing in this area of concern, they were to submit the articles to the War Department for review.

Among the recipients of this warning were Harold Ross, the founder and editor of the New Yorker, and William Shawn, the deputy editor of that publication.  The New Yorker, originally founded as a humor magazine, was designed by Ross to cater to urban sophisticates and covered the world of nightclubs and chorus girls.  But, with the advent of the Second World War, Ross decided to scrap the hijinks flavor of the magazine and begin to publish some serious journalism.

As a result, Hersey began to gravitate into the New Yorker’s orbit.  Hersey was frustrated with his job at Time magazine, which either declined to print his articles or rewrote them atrociously.  At one point, he angrily told publisher Henry Luce that there was as much truthful reporting in Time magazine as in Pravda.  In July 1945, Hersey finally quit his job with Time.  Then, late that fall, he sat down with William Shawn of the New Yorker to discuss some ideas he had for articles, one of them about Hiroshima.

Hersey had concluded that the mass media had missed the real story of the Hiroshima bombing.  And the result was that the American people were becoming accustomed to the idea of a nuclear future, with the atomic bomb as an acceptable weapon of war.  Appalled by what he had seen in the Second World War—from the firebombing of cities to the Nazi concentration camps—Hersey was horrified by what he called “the depravity of man,” which, he felt, rested upon the dehumanization of others.  Against this backdrop, Hersey and Shawn concluded that he should try to enter Japan and report on what had really happened there.

Getting into Japan would not be easy.  The U.S. Occupation authorities exercised near-total control over who could enter the stricken nation, keeping close tabs on all journalists who applied to do so, including records on their whereabouts, their political views, and their attitudes toward the occupation.  Nearly every day, General MacArthur received briefings about the current press corps, with summaries of their articles.  Furthermore, once admitted, journalists needed permission to travel anywhere within the country, and were allotted only limited time for these forays.

Even so, Hersey had a number of things going for him.  During the war, he was a very patriotic reporter.  He had written glowing profiles about rank-and-file U.S. soldiers, as well as a book (Men on Bataan) that provided a flattering portrait of General MacArthur.  This fact certainly served Hersey well, for the general was a consummate egotist.  Apparently as a consequence, Hersey received authorization to visit Japan.

En route there in the spring of 1946, Hersey spent some time in China, where, on board a U.S. warship, he came down with the flu.  While convalescing, he read Thornton Wilder’s Pulitzer Prize-winning novel, The Bridge of San Luis Rey, which tracked the different lives of five people in Peru who were killed when a bridge upon which they stood collapsed.  Hersey and Shawn had already decided that he should tell the story of the Hiroshima bombing from the victims’ point of view.  But Hersey now realized that Wilder’s book had given him a particularly poignant, engrossing way of telling a complicated story.  Practically everyone could identify with a group of regular people going about their daily routines as catastrophe suddenly struck them.

Hersey arrived in Tokyo on May 24, 1946, and two days later, received permission to travel to Hiroshima, with his time in that city limited to 14 days.

Entering Hiroshima, Hersey was stunned by the damage he saw.  In Blume’s words, there were “miles of jagged misery and three-dimensional evidence that humans—after centuries of contriving increasingly efficient ways to exterminate masses of other humans—had finally invented the means with which to decimate their entire civilization.”  Now there existed what one reporter called “teeming jungles of dwelling places . . . in a welter of ashes and rubble.”  As residents attempted to clear the ground to build new homes, they uncovered masses of bodies and severed limbs.  A cleanup campaign in one district of the city alone at about that time unearthed a thousand corpses.  Meanwhile, the city’s surviving population was starving, with constant new deaths from burns, other dreadful wounds, and radiation poisoning.

Given the time limitations of his permit, Hersey had to work fast.  And he did, interviewing dozens of survivors, although he eventually narrowed down his cast of characters to six of them.

Departing from Hiroshima’s nightmare of destruction, Hersey returned to the United States to prepare the story that was to run in the New Yorker to commemorate the atomic bombing.  He decided that the article would have to read like a novel.  “Journalism allows its readers to witness history,” he later remarked.  “Fiction gives readers the opportunity to live it.”  His goal was “to have the reader enter into the characters, become the characters, and suffer with them.”

When Hersey produced a sprawling 30,000 word draft, the New Yorker’s editors at first planned to publish it in serialized form.  But Shawn decided that running it this way wouldn’t do, for the story would lose its pace and impact.  Rather than have Hersey reduce the article to a short report, Shawn had a daring idea.  Why not run the entire article in one issue of the magazine, with everything else—the “Talk of the Town” pieces, the fiction, the other articles and profiles, and the urbane cartoons—banished from the issue?

Ross, Shawn, and Hersey now sequestered themselves in a small room at the New Yorker’s headquarters, furiously editing Hersey’s massive article.  Ross and Shawn decided to keep the explosive forthcoming issue a top secret from the magazine’s staff.  Indeed, the staff were kept busy working on a “dummy” issue that they thought would be going to press.  Contributors to that issue were baffled when they didn’t receive proofs for their articles and accompanying artwork.  Nor were the New Yorker’s advertisers told what was about to happen.  As Blume remarks:  “The makers of Chesterfield cigarettes, Perma-Lift brassieres, Lux toilet soap, and Old Overholt rye whiskey would just have to find out along with everyone else in the world that their ads would be run alongside Hersey’s grisly story of nuclear apocalypse.”

However, things don’t always proceed as smoothly as planned.  On August 1, 1946, President Truman signed into law the Atomic Energy Act, which established a “restricted” standard for “all data concerning the manufacture or utilization of atomic weapons.”  Anyone who disseminated that data “with any reason to believe that such data” could be used to harm the United States could face substantial fines and imprisonment.  Furthermore, if it could be proved that the individual was attempting to “injure the United States,” he or she could “be punished by death or imprisonment for life.”

In these new circumstances, what should Ross, Shawn, and Hersey do?  They could kill the story, water it down, or run it and risk severe legal action against them.  After agonizing over their options, they decided to submit Hersey’s article to the War Department – and, specifically, to General Groves – for clearance.

Why did they take that approach?  Blume speculates that the New Yorker team thought that Groves might insist upon removing any technical information from the article while leaving the account of the sufferings of the Japanese intact.  After all, Groves believed that the Japanese deserved what had happened to them, and could not imagine that other Americans might disagree.  Furthermore, the article, by underscoring the effectiveness of the atomic bombing of Japan, bolstered his case that the war had come to an end because of his weapon.  Finally, Groves was keenly committed to maintaining U.S. nuclear supremacy in the world, and he believed that an article that led Americans to fear nuclear attacks by other nations would foster support for a U.S. nuclear buildup.

The gamble paid off.  Although Groves did demand changes, these were minor and did not affect the accounts by the survivors.

On August 29, 1946, copies of the “Hiroshima” edition of the New Yorker arrived on newsstands and in mailboxes across the United States, and it quickly created an enormous sensation, particularly in the mass media.  Editors from more than thirty states applied to excerpt portions of the article, and newspapers from across the nation ran front-page banner stories and urgent editorials about its revelations.  Correspondence from every region of the United States poured into the New Yorker’s office.  A large number of readers expressed pity for the victims of the bombing.  But an even greater number expressed deep fear about what the advent of nuclear war meant for the survival of the human race.

Of course, not all readers approved of Hersey’s report on the atomic bombing.  Some reacted by canceling their subscriptions to the New Yorker.  Others assailed the article as antipatriotic, Communist propaganda, designed to undermine the United States.  Still others dismissed it as pro-Japanese propaganda or, as one reader remarked, written “in very bad taste.”

Some newspapers denounced it.  The New York Daily News derided it as a stunt and “propaganda aimed at persuading us to stop making atom bombs . . . and to give our technical bomb secrets away . . . to Russia.”  Not surprisingly, Henry Luce was infuriated that his former star journalist had achieved such an enormous success writing for a rival publication, and had Hersey’s portrait removed from Time Inc.’s gallery of honor.

Despite the criticism, “Hiroshima” continued to attract enormous attention in the mass media.  The ABC Radio Network did a reading of the lengthy article over four nights, with no acting, no music, no special effects, and no commercials.  “This chronicle of suffering and destruction,” it announced, was being “broadcast as a warning that what happened to the people of Hiroshima could next happen anywhere.”  After the broadcasts, the network’s telephone switchboards were swamped by callers, and the program was judged to have received the highest rating of any public interest broadcast that had ever occurred.  The BBC also broadcast an adaptation of “Hiroshima,” while some 500 U.S. radio stations reported on the article in the days following its release.

In the United States, the Alfred Knopf publishing house came out with the article in book form, which was quickly promoted by the Book-of-the-Month Club as “destined to be the most widely read book of our generation.”  Ultimately, Hiroshima sold millions of copies in nations around the world.  By the late fall of 1946, the rather modest and retiring Hersey, who had gone into hiding after the article’s publication to avoid interviews, was rated as one of the “Ten Outstanding Celebrities of 1946,” along with General Dwight Eisenhower and singer Bing Crosby.

For U.S. government officials, reasonably content with past public support for the atomic bombing and a nuclear-armed future, Hersey’s success in reaching the public with his disturbing account of nuclear war posed a genuine challenge.  For the most part, U.S. officials recognized that they had what Blume calls “a serious post-‘Hiroshima’ image problem.”

Behind the scenes, James B. Conant, the top scientist in the Manhattan Project, joined President Truman in badgering Henry Stimson, the former U.S. Secretary of War, to produce a defense of the atomic bombing.  Provided with an advance copy of the article, to be published in Harper’s, Conant told Stimson that it was just what was needed, for they could not have allowed “the propaganda against the use of the atomic bomb . . . to go unchecked.”

Although the New Yorker’s editors sought to arrange for publication of the book version of “Hiroshima” in the Soviet Union, this proved impossible.  Instead, Soviet authorities banned the book in their nation.  Pravda fiercely assailed Hersey, claiming that “Hiroshima” was nothing more than an American scare tactic, a fiction that “relishes the torments of six people after the explosion of the atomic bomb.”  Another Soviet publication called Hersey an American spy who embodied his country’s militarism and had helped to inflict upon the world a “propaganda of aggression, strongly reminiscent of similar manifestations in Nazi Germany.”

Ironically, the Soviet attack upon Hersey didn’t make him any more acceptable to the U.S. government.  In 1950, FBI director J. Edgar Hoover assigned FBI field agents to research, monitor, and interview Hersey, on whom the Bureau had already opened a file.  During the FBI interview with Hersey, agents questioned him closely about his trip to Hiroshima.

Not surprisingly, U.S. occupation authorities did their best to ban the appearance of “Hiroshima” in Japan.  Hersey’s six protagonists had to wait months before they could finally read the article, which was smuggled to them.  In fact, some of Hersey’s characters were not aware that they had been included in the story or that the article had even been written until they received the contraband copies.  MacArthur managed to block publication of the book in Japan for years until, after intervention by the Authors’ League of America, he finally relented.  It appeared in April 1949, and immediately became a best-seller.

Hersey, still a young man at the time, lived on for decades thereafter, writing numerous books, mostly works of fiction, and teaching at Yale.  He continued to be deeply concerned about the fate of a nuclear-armed world—proud of his part in stirring up resistance to nuclear war and, thereby, helping to prevent it.

The conclusion drawn by Blume in this book is much like Hersey’s.  As she writes, “Graphically showing what nuclear warfare does to humans, ‘Hiroshima’ has played a major role in preventing nuclear war since the end of World War II.”

A secondary theme in the book is the role of a free press.  Blume observes that “Hersey and his New Yorker editors created ‘Hiroshima’ in the belief that journalists must hold accountable those in power.  They saw a free press as essential to the survival of democracy.”  She does, too.

Overall, Blume’s book would provide the basis for a very inspiring movie, for at its core is something many Americans admire:  action taken by a few people who triumph against all odds.

But the actual history is somewhat more complicated.  Even before the publication of “Hiroshima,” a significant number of people were deeply disturbed by the atomic bombing of Japan.  For some, especially pacifists, the bombing was a moral atrocity.  An even larger group feared that the advent of nuclear weapons portended the destruction of the world.  Traditional pacifist organizations, newly-formed atomic scientist groups, and a rapidly-growing world government movement launched a dramatic antinuclear campaign in the late 1940s around the slogan, “One World or None.”  Curiously, this uprising against nuclear weapons is almost entirely absent from Blume’s book.

Even so, Blume has written a very illuminating, interesting, and important work—one that reminds us that daring, committed individuals can help to create a better world.


In 1844, Nativist Protestants Burned Churches in the Name of Religious Liberty

News at Home
tags: immigration, political violence, religious history, Nativism, American Religion


Zachary M. Schrag is Professor of History at George Mason University and the author of the forthcoming books The Princeton Guide to Historical Research (Princeton University Press) and The Fires of Philadelphia: Citizen-Soldiers, Nativists, and the 1844 Riots Over the Soul of a Nation (Pegasus Books).

A mob burns St. Augustine's Catholic Church in Philadelphia, 1844, from John B. Perry, A Full and Complete Account of the Late Awful Riots in Philadelphia

 

 

Former U.S. senator Rick Santorum has deservedly lost his position at CNN for his April speech in which he described all of Native American culture as “nothing.” But he made that remark in service to an equally suspect claim: that America “was born of the people who came here pursuing religious liberty to practice their faith, to live as they ought to live and have the freedom to do so. Religious liberty.” Contrary to Santorum’s rosy picture, many of the English settlers of what is now the east coast of the United States were as devoted to denying religious liberty to others as they were to securing their own ability to worship as they pleased. And as a committed Catholic, Santorum should know that for many Protestants, “religious liberty” meant attacking the Catholic Church.

 

The first English monarchs to back colonization hoped to contain Catholic expansion with what historian Carla Gardina Pestana calls “a Protestant empire.” While some colonies persecuted dissenters—whipping Baptists and Quakers—most tolerated varieties of Protestantism. But the settlers often drew the line at Catholicism. Each November, colonists celebrated “Pope’s Day” by lighting bonfires, firing cannon, and marching effigies of the pontiff through the streets, all to celebrate their common Protestant identity. Colonial governments outlawed Catholic priests, threatening them with life imprisonment or death. Even Maryland, founded in part as a Catholic haven, eventually restricted Catholic worship.

 

The Revolution—secured with the help of Catholic Spain and France, as well as that of many American Catholics—toned down some of the most vicious anti-Catholicism. Most American Protestants learned to respect and live with their Catholic neighbors. But while the United States Constitution forbade the establishment of religion or religious tests for office, individual states continued to privilege Protestantism. Some limited office holding to Protestants, declared Protestantism the official religion, and, most commonly, assigned the King James Bible in public schools, over the objections of Catholics.

 

Political anti-Catholicism gained new adherents in the 1830s, in response to both Catholic Emancipation in the British Empire and increased Irish Catholic immigration to the United States. In 1835, New York’s Protestant Association debated the question, “Is Popery compatible with civil liberty?” In 1840, a popular Protestant pastor warned that “It has been the favourite policy of popish priests to represent Romanism as a harmless thing.” “If they ever succeed in making this impression general,” he continued, “we may well tremble for the liberties of our country. It is a startling truth that popery and civil and religious liberty cannot flourish on the same soil; popery is death to both!”

 

Such beliefs led anti-Catholics to attack Catholic institutions as alien intruders. In August 1834, a mob burned down the Ursuline convent in Charlestown, Massachusetts, acting in the conviction that they were protecting American liberty against an institution that “ought not to be allow[e]d in a free country.” Five years later, a Baltimore mob threatened a convent there with a similar fate. As Irish immigrants filled both the pews and pulpits of American Catholic churches, such anti-Catholicism merged with a nativist movement that hoped to restrict immigration and make naturalization difficult.

 

The most sustained attack against Catholics came in Philadelphia in the spring and summer of 1844. Inspired by the success of a third-party nativist candidate in New York City’s mayoral election, Philadelphia nativists staged their own rallies throughout the city and its surrounding districts. In May, rallies in the largely Irish Catholic Third Ward of Kensington sparked three days of rioting. On the third day, nativist mobs burned two Catholic churches, along with the adjacent rectories and a seminary. Outside of one church, they built a bonfire of Bibles and other sacred texts, and cheered when the cross atop the church’s steeple collapsed in flame. In a nearby Catholic orphan asylum, the superioress wondered how she could evacuate nearly a hundred children if the mob attacked. “They have sworn vengeance against all the churches and their institutions,” she wrote. “We have every reason to expect the same fate.”

 

In the aftermath of the May riots, a priest in the heavily nativist district of Southwark resolved to prepare his church against future attacks. Along with his brother, he organized parishioners into a security force, armed with a collection of weapons ranging from surplus military muskets to bayonets stuck on brush handles. When, in July, the church’s neighbors realized the extent of his preparations, they concluded that the Catholics were planning to murder their Protestant neighbors in their sleep. Mobbing the church, they launched a second wave of riots, and even bombarded the church with a stolen cannon. Eventually, the county’s militia arrived in force and fired into the crowd. By the time the fighting was over, two dozen Americans were dead, and the nation was in shock.

 

Throughout all of this, leading nativists insisted that they tolerated all religions. “We do not interfere with any man’s religious creed or religious liberty,” asserted one. “A man may be a Turk, a Jew or a Christian, a Catholic, Methodist or a Presbyterian, and we say nothing against it, but accord to all a liberty of conscience.” He then immediately revealed the limits of his tolerance: “When we remember that our Pilgrim Fathers landed on Plymouth rock, to establish the Protestant religion, free from persecution, we must contend that this was and always will be a Protestant country!” That second sentiment—the insistence that the country truly belonged to members of one creed—explains the fury of the mob.

 

The same cramped view of religious liberty echoes in Santorum’s speech. As a Catholic, Santorum unsurprisingly identifies America with “the morals and teachings of Jesus Christ,” rather than only Protestantism. He also calls the United States “a country that was based on Judeo-Christian principles,” letting Jews halfway into his club. But any effort to privilege some religions over others reminds us that purported advocates of tolerance may be religious supremacists under the skin. Pursuing religious liberty for one’s own kind is only the beginning of freedom. Securing liberty to all is the true achievement.


 

Jerusalem: A Divided and Invented City

News Abroad
tags: colonialism, Israel, Palestine, Jerusalem, urban history, British Mandate


James A. S. Sunderland is a DPhil student at Merton College, Oxford, where he holds scholarships from both the Arts and Humanities Research Council (AHRC) and the Clarendon Fund. His work looks at Britain’s relationship with the Yishuv, the Jewish population of pre-state Israel.

Photo: Andrew Shiva, CC BY-SA 4.0

 

 

There are few cities on earth which can trigger the heights of passion and anger that Jerusalem can. The current round of violence between Israel and Hamas, the worst in recent years, is further proof of this. Israel’s decision to restrict access to parts of the Old City of Jerusalem during the month of Ramadan and evict several Arab families from the Sheikh Jarrah neighborhood, just north of the Old City, led Hamas to launch a series of rocket attacks on Israel far exceeding anything seen in the last bout of fighting in 2014.

Hamas, already frustrated by the cancelation of this year’s Palestinian elections by President Mahmoud Abbas (in which Hamas would have likely made significant gains), chose to escalate tensions and present itself as the dynamic, active party of Palestinian resistance to Israeli rule. This decision, combined with the Israeli response, has come at a horrific cost. Yet the leadership of Hamas have claimed their PR victory, being hailed by some as the ‘defenders of Jerusalem,’ while the leadership of the Palestinian Authority in the West Bank looks even more inert and impotent than ever.

In fact, the city that Hamas claims to be defending is, in large part, the invention of a British administration that ruled the city for just over 30 years from 1917 to 1948. Traces of the British Mandate are everywhere: from red post boxes on street corners, to the distinctive Armenian tiled street signs in the Old City – an invention of the city’s first governor, Sir Ronald Storrs, who saw Jerusalem more as an Orientalist arts and crafts project than the dynamic, multi-cultural and evolving city that it was.

Many of the divisions we see in Jerusalem today can be traced back to events following December 11, 1917, when General Allenby and British forces entered the Old City in their victory parade and proceeded to shape it into an Orientalist mirage, altogether detached from reality on the ground.

The dismembering of the Old City into Jewish, Muslim, Christian, and Armenian Quarters dates back, at the earliest, to the 19th century imaginations of Western visitors who noted the cross shape of the Old City, with two long roads running North-South and East-West dividing the city in four, and duly made their assumptions. These assumptions were deeply flawed.

Ashkenazi Jews would rent accommodation in the Christian Quarter, Muslims and Jews would live on the same streets in the Muslim Quarter, and people moved through the city with ease, interacting along complex networks tied to patronage, trade and class rather than along confessional or ethnic lines. Residents understood the city through their relation to their ethnically and religiously diverse localities or neighborhoods. Ya’akov Yehoshua, who grew up in the Old City during the first decade of the 20th century, reminisced that

Jews and Muslims shared residential courtyards. We resembled a single family and socialized together. Our mothers unburdened themselves of their troubles to Muslim women, who in turn confided in our mothers. The Muslim women taught themselves to speak Ladino. They frequently used the proverbs and sayings of this tongue.

Of course, we must not imagine that the city was an ethnically and religiously diverse utopia – it was not. Religious and ethnic prejudices were not absent from residents’ interactions. Nevertheless, people of all faiths and backgrounds rubbed alongside each other in the city, by and large without conflict.

After 1917, the western projection of a divided city was translated into a policy of segregation, grounded in the colonial power’s pre-existing assumptions about the existence of such divisions and in the belief that different ethnic and religious groups could not, and should not, mix. It was a racial, confessional, and spatial policy divorced from reality. Indeed, so strong were existing identities that it wasn’t until the 1930s that the local population came to think of themselves as belonging to a “Jewish,” “Muslim,” “Christian” or “Armenian” quarter, by which time the British had cemented the idea into their administrative, construction and social policies.

The visual language of the city, as well as its communities, was also reshaped by the British. Britain viewed Palestine through a biblical lens. As Prime Minister David Lloyd George put it, “I was taught far more history of the Jews than about my own land. I could tell you all the kings of Israel.” British administrators strove to preserve this biblical city, so familiar to them from their Christian upbringings. This “preservation” was encoded in British infrastructure projects throughout the city and in the building codes promulgated in 1918 by the Alexandria City Engineer, William McLean, brought to Jerusalem by Storrs. Building in the Old City was severely restricted, with new building encouraged only in the new city beyond. Even there, regulations meant buildings had to be low (so as not to obscure the Old City and Mount of Olives). There were to be no industrial buildings, and new buildings had to be faced in stone or other “approved material” matching the character of the Old City. It is hard to exaggerate the importance of this last point. Storrs (who viewed the local stone as imbued with “a hallowed and immemorial tradition”) and McLean had set the character of modern Jerusalem, stretching far beyond the walls of the Old City. The “ancient” sandy stone was turned into Jerusalem’s visual language, one which the British permeated with religious and historical meaning, while other local building practices, not in keeping with the British view of the city, were banned.

Ahead of this May’s now cancelled Palestinian legislative elections, the competing parties revised, polished and released their logos. That of Hamas shows the Dome of the Rock at the centre, flanked by other parts of the Old City’s architecture. A banner in Arabic above states “Jerusalem is our promise.” Jerusalem is just over 75 kilometers (46 miles) away from Gaza, yet few residents or Hamas members will ever have visited it. But even for Hamas, besides the religious significance represented by the Dome of the Rock, it is the visual imagery of an “ancient,” sandy stoned city that they conjure in their mind’s eye when they think of Jerusalem.

Meanwhile, far-right Israeli desires to “reclaim” the city and expel Arab residents from the Old City and the eastern part of the new city have no basis in Jerusalem’s recent history. Muslims, Jews and Christians mixed, made business deals, lived and socialized together less than 100 years ago in the very streets to which far-right extremists now claim exclusive right.

The stylised image for both sides is a Jerusalem that was created, carefully moulded and “preserved” by the orientalist imaginations of British officials and administrative apparatchiks. The fight to be the “defenders of Jerusalem” is the fight over a sanitized, stylised image of a city which has been largely invented and whose divisions, cemented by years of British, Jordanian and Israeli rule, date back to 1917.

There is a glimmer of hope though. Although right-wing parties now control the majority of seats in the Knesset, Israel’s parliament, grass roots peace activists have long been fighting to bring Jews and Arabs, Israelis and Palestinians together in Jerusalem and to find ways, if not to solve the thorny issues surrounding Jerusalem’s future, then at least to learn to coexist and work together for a more peaceful future. Recent events have galvanized them and led others to come out and support their efforts. If divisions, mental and physical, can ever be dismantled, it will be through the work of ordinary people like these.