Saturday, November 25, 2023

Revealed: how top PR firm uses ‘trust barometer’ to promote world’s autocrats

Adam Lowenstein in Washington
GUARDIAN
Fri, November 24, 2023



LONG READ



Public trust in some of the world’s most repressive governments is soaring, according to Edelman, the world’s largest public relations firm, whose flagship “trust barometer” has earned it a reputation as an authority on global trust. For years, Edelman has reported that citizens of authoritarian countries, including Saudi Arabia, Singapore, the United Arab Emirates and China, tend to trust their governments more than people living in democracies do.

But Edelman has been less forthcoming about the fact that some of these same authoritarian governments have also been its clients. Edelman’s work for one such client – the government of the UAE – will be front and center when world leaders convene in Dubai later this month for the UN’s Cop28 climate summit.

The Guardian and Aria, a non-profit research organization, analyzed Edelman trust barometers, as well as Foreign Agents Registration Act (Fara) filings made public by the Department of Justice, dating back to 2001, when Edelman released its first survey of trust. (The act requires US companies to publish certain information about their lobbying and advocacy work for foreign governments.) During that time Edelman and its subsidiaries have been paid millions of dollars by autocratic governments to develop and promote their desired images and narratives.

Polling experts have found that public opinion surveys tend to overstate the favorability of authoritarian regimes because many respondents fear government reprisal. That hasn’t stopped these same governments from exploiting Edelman’s findings to burnish their reputations and legitimize their holds on power.

Edelman’s trust barometer is “quoted everywhere as if this is some credible, objective research from a thinktank, whereas there is a fairly obvious commercial background, and it’s fairly obviously a sales tool,” said Alison Taylor, a professor at New York University’s business school. “At minimum, the firm should be disclosing these financial relationships as part of the study. But they’re not doing that.”

An Edelman spokesperson said in a statement emailed to the Guardian: “As a global firm, we believe it is important to work with clients and in markets around the world that are transforming – economically, politically, socially, environmentally, and culturally.

“We believe our presence in the Middle East can help drive change by counseling influential organizations, advising on expectations for business and brands today, and building new stakeholder relationships.”

‘The government’s efforts are paying dividends’

The government of the UAE became an Edelman client in 2007. Over the next two years, as DeSmog, a non-profit publication that investigates climate misinformation, has reported, Edelman was paid more than $6m for its work improving the sustainability reputation of the UAE and the Abu Dhabi National Oil Company, or Adnoc, a state-run oil giant. These efforts culminated in the UAE being selected to host this year’s UN climate conference.

In 2010 Edelman and a subsidiary signed two more contracts to work on behalf of the Emirati government, including the production of a “Beltway barometer survey” to measure public opinion “among policymakers and influencers in Washington DC”. The following year the UAE appeared in the Edelman trust barometer for the first time.

Edelman did not respond to questions about whether the UAE government’s decision to hire the firm in 2010 was connected to the country’s inclusion in the trust barometer the following year, or whether Edelman offers potential or current government clients the opportunity to be included in the survey as a benefit of hiring the firm.

Since the UAE’s first appearance in 2011, Edelman’s surveys have routinely reported that citizens of the country strongly trust their government – a finding that Edelman and Emirati media have been happy to endorse.

“According to Edelman, the NY-based company, the UAE’s plans and strategies contributed to the trust in performance it earned over the year,” tweeted Sheikh Mohammed bin Rashid Al Maktoum, the prime minister and vice-president of the UAE and ruler of Dubai, after the release of the 2014 trust barometer. “The people’s trust in the government is a result of the leadership closeness to them and attending to their needs and demands.”

Tod Donhauser, the executive then running Edelman’s UAE business, used similar language in a 2018 blogpost on Edelman’s website. “This year’s Edelman trust barometer results suggest that the government’s efforts to protect the public from fake news are paying dividends,” Donhauser wrote.

Three months after Donhauser praised the government for “combating fake news” and “uniting the country behind a common purpose and elevating trust levels”, a UAE court sentenced activist Ahmed Mansoor to 10 years in prison. Mansoor’s charges included “publish[ing] false information, rumors and lies about the UAE” on Twitter and Facebook that “would damage the UAE’s social harmony and unity”, Amnesty International reported.

Mansoor had been arrested the previous year for “spreading sectarianism and hatred on social media”. He remains in prison.

Donhauser, who no longer works for Edelman, did not respond to requests for comment. The firm did not respond to questions about the blogpost.

‘The system works in terms of trust’

At a discussion hosted in February by the Atlantic Council, a Washington thinktank, Edelman’s chief executive, Richard Edelman, was asked why he thought authoritarian regimes fared so well in the trust barometer.

“There’s one hypothesis: that the difference in information issued by government and by media in those countries is very small, whereas here, you know, media is doing its job. It’s saying, ‘What they say on Capitol Hill is not true,’” Edelman replied. “There’s much higher trust in media in those countries that are single party than there is in democracies, maybe because there’s one line.”

Edelman continued: “I’m not a fan of authoritarian government. I’m a kid who grew up in democracies. But the system works in terms of trust, and that’s all I can tell you.”

In its statement, the Edelman spokesperson said: “The trust barometer seeks to help businesses, organizations and institutions understand how personal attitudes interconnect to shape broader societal forces.

“For a number of reasons, including strong economic performance over a decade, developing nations around the world tend to show high levels of trust among internal respondents.”

At the time of the Atlantic Council event, Edelman was in the middle of at least four different contracts to represent the interests of the governments of the UAE and Saudi Arabia, repressive regimes under which human rights are regularly threatened, civil and political liberties are mostly non-existent, and dissidents and critics are routinely harassed and imprisoned. Edelman has since agreed to additional work for the Emirati and the Saudi governments.

Even as he spoke on stage about trust, Richard Edelman himself was, and remains, registered with the US government as a foreign agent personally representing the interests of the Saudi ministry of culture. Politico described Richard Edelman’s decision to register as “a rare move for such a high-ranking executive of such a massive agency”.

“We have more than 2,000 clients globally spanning both the public and private sector across various industries,” the Edelman spokesperson said. “We publicly disclose those engagements subject to [Fara] in line with the regulatory requirements. When individuals have performed services that require registration under Fara, Edelman works to ensure that a registration is completed for those individuals.”

In June, the firm submitted a redacted email to the justice department from a company executive inviting a recipient to coffee in New York City with Hamed bin Mohammed Fayez, Saudi Arabia’s deputy culture minister. “The ambition of KSA [Kingdom of Saudi Arabia] is stunning,” the sender wrote.

A few months later Richard Edelman hosted a “small dinner” for Fayez, one of at least three networking events that he has personally hosted for Saudi officials over the past year and a half.

The company did not respond to a question about whether Richard Edelman is regularly involved in client work for foreign governments.

Earlier this year, Richard Edelman told PRWeek that he recently “went around with the minister of culture” and “visited six cultural institutions, and one of them is now considering putting a school into Saudi”. Richard Edelman said: “That is the kind of engagement that I want to have because Saudi is on a continuum of change. That is the kind of work we want to be doing: engagement in the important issues and challenges of our time.”

The New York Times has described the culture minister as “a friend and associate” of the Saudi crown prince, Mohammed bin Salman. Under the crown prince’s rule, “Saudi Arabia has undergone one of the worst periods for human rights in the country’s modern history,” Human Rights Watch’s Joey Shea told a US Senate committee in September.

Edelman did not respond to questions regarding whether Richard Edelman worries about the Saudi culture ministry’s close ties with the crown prince, or if he is concerned that his firm’s work for the governments of Saudi Arabia and the UAE risks sanitizing or shifting public attention away from the countries’ records on human rights and civil and political liberties.

‘Boosting the confidence citizens have in the government’

In recent years Edelman’s business in the Middle East has grown more lucrative as the firm has cultivated deeper ties with Saudi Arabia. While Edelman began working for the Saudi government in 2013, the country did not show up in the trust barometer until 2019. This appearance came fewer than three months after the crown prince approved the assassination of the journalist and Washington Post columnist Jamal Khashoggi, a prominent critic of the regime and its heavy-handed rule.

Edelman’s trust surveys include the dates during which it conducts “fieldwork” – the online and telephone interviews from which its findings are derived. Edelman reported that fieldwork for the 2019 survey took place between 19 October and 16 November of the previous year, which means the company could have been surveying Saudi citizens about whether they trust their government fewer than three weeks after that same government was implicated in the murder of a high-profile critic.

The firm did not respond to questions from the Guardian on why it chose to keep Saudi Arabia in the trust barometer and publish survey results purporting to reflect high levels of public trust in the Saudi government just months after Khashoggi’s death.



King Salman and Crown Prince Mohammed bin Salman are shown on screen at a Saudi Premier League game in Riyadh in September. Photograph: Ahmed Yosri/Reuters

Edelman has previously dropped countries from the trust barometer following geopolitical events. The firm featured Russia between 2007 and 2022, but after Russia invaded Ukraine it decided not to include the country in 2023. “When business leaders choose to stay silent on these issues, they are now viewed as being guilty of complicity,” an Edelman executive wrote in a blogpost.

Saudi media frequently tout the findings of Edelman’s surveys as evidence of the regime’s popular support. After the 2021 trust barometer reported that 82% of Saudis trusted their government – tied with China for the highest in the world – a Saudi press agency published a story highlighting the finding. (According to Reporters Without Borders, an international NGO, “virtually all Saudi media operate under direct official control” and journalists “who do not follow the official line of praise for Crown Prince Mohammed bin Salman become de facto suspects”.)

The following year a story in the Arab Weekly highlighted a similarly high trust finding as proof that the kingdom’s “reforms” were effective and popular. The story declared that “observers believe that this trust grew after a reform process launched by the Saudi leadership in 2016” – the year that the crown prince began an effort to reinvent the country’s reputation as a “global investment powerhouse”.

“Over the past few years, some have sought to question the ability of the current leadership to achieve its reform goals,” the Arab Weekly story read. “However, results so far have shown the authorities’ success in achieving their goals, boosting the confidence citizens have in the government.”

‘It’s soft power’

Richard Edelman has described Edelman’s trust barometer as “data, not opinion”. But survey experts have found that measures of public support for the government tend to be inflated in authoritarian states, making it challenging to directly compare democracies with non-democracies.

The company declined to answer questions about whether it takes this fact into account when it conducts the trust barometer, and whether it believes polling results in democracies and autocracies are directly comparable. In its statement, Edelman’s spokesperson said, “We are committed to transparency about our trust barometer methodologies, and we adhere to both industry- and country-specific regulations and standards, including those of The Insights Association Code of Standards and Ethics and ESOMAR’s Code and Guideline[s].”

ESOMAR, the European Society for Opinion and Marketing Research, is a not-for-profit organization “financed primarily” by its members, some two-thirds of whom are corporations, according to its most recent financial statement.

Staffan I Lindberg, the founding director of the Varieties of Democracy (V-Dem) Institute, a non-profit that conducts peer-reviewed research on democracy around the world, said: “There is a ton of scientific evidence to suggest that…questions asked for authoritarian countries give a misleading picture [and] overestimate the level of trust and satisfaction with, and support for … authoritarian governments.”

In one study, Marcus Tannenberg of Sweden’s University of Gothenburg analyzed some 80,000 survey responses across more than 30 countries in Africa. Tannenberg concluded that “fear of the government” leads respondents in autocratic states to significantly overstate their trust in the government, a pattern Tannenberg termed “autocratic trust bias”.

Such fears are well-founded in some of the countries with which Edelman does business.

In August, for instance, Human Rights Watch reported that Muhammad al-Ghamdi, a former teacher in Saudi Arabia, had been sentenced to death for his “tweets, retweets, and YouTube activity”. According to court documents seen by Human Rights Watch, a Saudi court explained that the “magnitude of his actions is amplified by the fact they occurred through a global media platform, necessitating a strict punishment”. The two Twitter accounts cited by the court had a combined total of 10 followers, Human Rights Watch found. Other Saudis have been handed decades-long prison sentences for their Twitter activity.

Saudi authorities’ close scrutiny of “global media platforms” such as Twitter, now X, might help explain why the government has invested so much money in polishing its image on social media.

In early February, Edelman agreed to a three-year contract to improve the “social media presence” of executives of Neom, a futuristic city that is a personal priority of Prince Mohammed and is owned by the country’s sovereign wealth fund, which the crown prince controls. Edelman also signed a separate contract to work on some of Neom’s other social media accounts.

These projects came on top of the social media-heavy work Edelman has done for the Emirati government in the run-up to the UN climate summit later this month.

“It’s soft power,” Richard Edelman told PRWeek of his firm’s work to improve the Saudi regime’s reputation in the United States. “It’s about the culture of Saudi Arabia being exposed to the world, and the people of Saudi Arabia being exposed to western culture.”

Benjamin Freeman of the Quincy Institute for Responsible Statecraft said: “Instead of Americans associating Saudi Arabia with 9/11 or with the brutal murder of Jamal Khashoggi, they want us thinking about golf. They want us thinking about the arts world. They want us thinking about Hollywood.

“Anything that they can do to pull the blinders over our eyes, they’re going to do it. And folks like Edelman, PR folks like that, they have no shortage of ideas for exactly how to get that done.”

Scientists Warn That AI Threatens Science Itself


Maggie Harrison

Thu, November 23, 2023 


What role should text-generating large language models (LLMs) have in the scientific research process? According to a team of Oxford scientists, the answer — at least for now — is: pretty much none.

In a new essay, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-powered tools like chatbots to assist in scientific research on the grounds that AI's penchant for hallucinating and fabricating facts, combined with the human tendency to anthropomorphize the human-mimicking word engines, could lead to larger information breakdowns — a fate that could ultimately threaten the fabric of science itself.

"Our tendency to anthropomorphize machines and trust models as human-like truth-tellers, consuming and spreading the bad information that they produce in the process," the researchers write in the essay, which was published this week in the journal Nature Human Behavior, "is uniquely worrying for the future of science."

The scientists' argument hinges on the reality that LLMs and the many bots that the technology powers aren't primarily designed to be truthful. As they write in the essay, sounding truthful is but "one element by which the usefulness of these systems is measured." Characteristics including "helpfulness, harmlessness, technical efficiency, profitability, [and] customer adoption" matter, too.

"LLMs are designed to produce helpful and convincing responses," they continue, "without any overriding guarantees regarding their accuracy or alignment with fact."

Put simply, if a large language model — which, above all else, is taught to be convincing — comes up with an answer that's persuasive but not necessarily factual, the fact that the output is persuasive will override its inaccuracy. In an AI's proverbial brain, simply saying "I don't know" is less helpful than providing an incorrect response.

But as the Oxford researchers lay out, AI's hallucination problem is only half the issue. The Eliza Effect, or the human tendency to read way too much into human-sounding AI outputs due to our deeply mortal proclivity to anthropomorphize everything around us, is a well-documented phenomenon. Because of this effect, we're already primed to put a little too much trust in AI; couple that with the confident tone these chatbots so often take, and you have a perfect recipe for misinformation. After all, when a chatbot gives us a perfectly bottled, expert-sounding paraphrasing in response to a query, we're probably less inclined to apply the same critical thinking to fact-checking as we might when we're doing our own research.

Importantly, the scientists do note "zero-shot translation" as a scenario in which AI outputs might be a bit more reliable. This, as Oxford professor and AI ethicist Brent Mittelstadt told EuroNews, refers to when a model is given "a set of inputs that contain some reliable information or data, plus some request to do something with that data."

"It's called zero-shot translation because the model has not been trained specifically to deal with that type of prompt," Mittelstadt added. So, in other words, a model is more or less rearranging and parsing through a very limited, trustworthy dataset, and not being used as a vast, internet-like knowledge center. But that would certainly limit its use cases, and would demand a more specialized understanding of AI tech — much different from just loading up ChatGPT and firing off some research questions.

And elsewhere, the researchers argue, there's an ideological battle at the core of this automation debate. After all, science is a deeply human pursuit. To outsource too much of the scientific process to automated AI labor, the Oxforders say, could undermine that deep-rooted humanity. And is that something we can really afford to lose?

"Do we actually want to reduce opportunities for writing, thinking critically, creating new ideas and hypotheses, grappling with the intricacies of theory and combining knowledge in creative and unprecedented ways?" the researchers write. "These are the inherently valuable hallmarks of curiosity-driven science."

"They are not something that should be cheaply delegated to incredibly impressive machines," they continue, "that remain incapable of distinguishing fact from fiction."

‘Huge egos are in play’: behind the firing and rehiring of OpenAI’s Sam Altman

Blake Montgomery
GUARDIAN
Thu, November 23, 2023

Photograph: Carlos Barría/Reuters

LONG READ


OpenAI’s messy firing and rehiring of its powerful chief executive this week shocked the tech world. But the power struggle has implications beyond the company’s boardroom, AI experts said. It throws into relief the greenness of the AI industry and the strong desire in Silicon Valley to be first, and raises urgent questions about the safety of the technology.

Related: OpenAI ‘was working on advanced model so powerful it alarmed staff’

“The AI that we’re looking at now is immature. There are no standards, no professional body, no certifications. Everybody figures out how to do it, figures out their own internal norms,” said Rayid Ghani, a professor of machine learning and public policy at Carnegie Mellon University. “The AI that gets built relies on a handful of people who built it, and the impact of these handfuls of people is disproportionate.”

The tussle between Sam Altman and OpenAI’s board of directors began on Friday with the unexpected announcement that the board had ousted Altman as CEO for being “not consistently candid in his communications with the board”.

The blogpost appeared with little warning, even to OpenAI’s minority owner Microsoft, which has invested about $13bn in the startup.

The board appointed an interim CEO, Mira Murati, then the chief technology officer of OpenAI, but by Sunday had tapped another, the former Twitch CEO Emmett Shear. Altman returned to the startup’s headquarters for negotiations the same day; that evening, Microsoft announced it had hired him to lead a new artificial intelligence unit.

On Monday, more than 95% of OpenAI’s roughly 750 employees signed an open letter asserting they would quit unless Altman were reinstated; signatories included Murati and the man many believed was the architect of Altman’s ouster, OpenAI’s co-founder and chief scientist, Ilya Sutskever.


OpenAI co-founder Ilya Sutskever, who many believe was behind Altman’s ouster, at a Ted AI conference in San Francisco on 17 October. 
Photograph: Glenn Chapman/AFP/Getty Images

By Wednesday, Altman was CEO once again. OpenAI’s board had been reconstituted without Altman and the company president, Greg Brockman (who had quit in solidarity but was also rehired), and without two of the members who had voted to fire them both.

In the absence of substantive regulation of the companies making AI, the foibles and idiosyncrasies of its creators take on outsized importance.

Asked what OpenAI’s saga could mean for any upcoming AI regulation, the United Kingdom’s Department for Science, Innovation and Technology (DSIT) said in a statement: “Because this is a commercial decision, it’s not something for DSIT to comment on.” In the US, the White House also did not provide comment. Senators Richard Blumenthal of Connecticut and Josh Hawley of Missouri, chairs of the US Senate subcommittee that oversaw Altman’s testimony earlier this year, did not respond to requests for comment; Blumenthal and Hawley have proposed a bipartisan AI bill “to establish guardrails”.

In a more mature sector, regulations would insulate consumers and consumer-facing products from the fights among the people at the top, Ghani said. The individual makers of AI would not be so consequential, and their spats would affect the public less.

“It’s too risky to rely on one person to be the spokesperson for AI, especially if that person is responsible for building. It shouldn’t be self-regulated. When has that ever worked? We don’t have self-regulation in anything that is important, why would we do it here?” he asked.

The political battle over AI

The struggle at OpenAI also highlighted a lack of transparency into decision-making at the company. The development of cutting-edge AI rests in the hands of a small, secretive cadre that operates behind closed doors.


“We have no idea how a staff change at OpenAI would change the nature of ChatGPT or Dall-E,” said Ghani. At the moment, there’s no public body running tests of programs like ChatGPT, and companies aren’t transparent about updates. Compare that to an iPhone or Android’s software updates, which list the changes and fixes coming to the software of the device you hold in your hand.

“Right now, we don’t have a public way of doing quality control. Each organization will do that for their own use cases,” he said. “But we need a way to continuously run tests on things like ChatGPT and monitor the results so as to profile the results for people and make it lower risk. If we had such a tool, the company would be less critical. Our only hope is that the people building it know what they’re doing.”

Paul Barrett, the deputy director of the center for business and human rights at New York University’s business school, agreed, calling for regulation that would require AI makers to demonstrate the safety and efficacy of their products the way pharmaceutical companies do.

“The fight for control of OpenAI provides a valuable reminder of the volatility within this relatively immature branch of the digital industry and the danger that crucial decisions about how to safeguard artificial intelligence systems may be influenced by corporate power struggles. Huge amounts of money – and huge egos – are in play. Judgments about when unpredictable AI systems are safe to be released to the public should not be governed by these factors,” he said.

Acceleration v deceleration


The split between Altman and the board at least partly seemed to fall along ideological lines: Altman and Brockman sit in a camp known as “accelerationists” – people who believe AI should be deployed as quickly as possible – while “decelerationists” believe it should be developed more slowly and with stronger guardrails. With Altman’s return, the former group takes the spoils.

“The people who seem to have won out in this case are the accelerationists,” said Sarah Kreps, a Cornell professor of government and the director of the Tech Policy Institute in the university’s school of public policy.

Kreps said we may see a reborn OpenAI that fully subscribes to the Meta chief executive Mark Zuckerberg’s “move fast and break things” mantra. Employees voted with their feet in the debate between moving more quickly or more carefully, she noted.

“What we’ll see is full steam ahead on AI research going forward. Then the question becomes, is it going to be totally unsafe, or will it have trials and errors? OpenAI may follow the Facebook model of moving quickly and realizing that the product is not always compatible with societal good,” she said.

What’s accelerating the AI arms race among OpenAI, Google, Microsoft and other tech giants, Kreps said, is vast amounts of capital and the burning desire to be first. If one company doesn’t make a certain discovery, another will – and fast. That leads to less caution.


The Pioneer Building, OpenAI’s headquarters, in San Francisco, California. Photograph: John G Mabanglo/EPA

“The former leadership of OpenAI has said all the right things about being cognizant of the risk, but as more money has poured into AI, the more incentive there is to move quickly and be less mindful of those risks,” she said.

Full speed ahead


The Silicon Valley wrestling match has called into question the future of the prominent startup’s business and its flagship product, ChatGPT. Altman had been on a world tour as an emissary for AI in the preceding weeks. He had spoken to Joe Biden, China’s Xi Jinping and other diplomats just days before at the Apec conference in San Francisco. Two weeks before, he had debuted the capability for developers to build their own versions of ChatGPT at a splashy demo day that featured the Microsoft chief executive, Satya Nadella, who has formed a strong partnership with Altman and cast his company’s lot with the younger man.

How could nations, strategic partners and customers trust OpenAI, though, if its own rulers would throw it into such disarray?

“The fact that the board hasn’t given a clear statement on its reasons for firing Altman, even to the CEO that the board itself hired, looks very bad,” said Derek Leben, a professor of ethics at Carnegie Mellon’s business school. Altman, Leben said, came out the winner in the public relations war, the protagonist in the story. Kreps agreed.

In the decelerationists’ favor, Leben said, the saga proved they are serious about their concerns, even if ham-fisted in expressing them. AI skeptics have criticized Altman and others for prophesying doom-by-AI in the future, arguing such concerns overlook real harms AI does in the present day and only aggrandize AI’s makers.


“The fact that people are willing to burn down the company suggests that they’re not just using safety as a smokescreen for an ulterior motive. They’re being sincere when they say they’re willing to shut down the company to prevent bad outcomes,” he said.

One thing OpenAI’s succession war will not do is slow down the development of AI, the experts agreed.

“I’m less concerned for the safety of AI. I think AI will be safe, for better or for worse. I’m worried for the people who have to use it,” said Ghani.

Johana Bhuiyan contributed reporting

'Black Twitter' asks 'What if Sam Altman were a Black woman?' in the wake of ouster

Monica Melton, Aaron Mok
Thu, November 23, 2023 

Sam Altman and Dr. Timnit Gebru. Photograph: Getty

  • Sam Altman's high-profile firing has drawn comparisons to Timnit Gebru's exit from Google.

  • Gebru, a well-respected AI researcher, no longer works at Google after authoring a paper on biases in AI.

  • Some tech observers and "Black Twitter" asked: "What if Sam Altman were a Black woman?"

High-profile public firings in tech are nothing new. Sam Altman's shocking ouster from — and reinstatement to — OpenAI drew comparisons to Steve Jobs's exit from and eventual return to Apple. But a less obvious comparison has been drawn that asks the question: "What if Sam Altman were a Black woman?"

That's what "Black Twitter" and tech observers have been wondering, pointing to another high-profile exit of a famous AI executive: Dr. Timnit Gebru, the former co-lead of Google's ethical AI team.

Gebru's departure from Google, which she described as being fired, resulted in a very different outcome than Altman's: he was quickly offered a cushy job at Microsoft, a united workforce fought for him and threatened to walk if he wasn't reinstated, and some of the people responsible for his firing were removed from the board.

In 2020, Gebru's exit from Google centered on a research paper that was critical of biases being built into artificial intelligence. Some Googlers protested her departure, although not as unanimously as OpenAI employees supported Altman.

At the time, Google leaders like Jeff Dean, now chief scientist at Google DeepMind, attempted to defend Google's handling of the situation; Dean called Gebru's condition of wanting to learn the names of those who reviewed her research paper a "binary choice."

The outpouring of support for Altman resounded across the tech industry. Black women's experiences in corporate America, however, and their underrepresentation in fields like technology, have led some Black tech workers to believe there's a double standard in how the outside world reacts to the ousting of white founders compared to Black founders.

Black women in tech

After Altman's departure from OpenAI was announced, tech workers — including figures like former Google CEO Eric Schmidt — took to X to express their shock and support for Altman. Dozens of OpenAI employees replied to Altman's X posts with heart emojis in what seemed like a digital demonstration of love for their former CEO.

The reaction to Altman's firing stands in stark contrast to the experience of Kimberly Bryant, a founder of Black Girls Code, who said that she received minimal support when she was ousted by her board over alleged misconduct.

"Unlike Atlman, Black women founders rarely enjoy such overwhelming support, and the road to recovery after setbacks can be exceptionally challenging," Bryant told TechCrunch's Dominic-Madori Davis in a blog post. "The absence of a Black or female counterpart for Altman in the tech industry reflects the persistent replication of the 'successful CEO' prototype, primarily shaped by the persona of the white male wonderboy."

'People are getting trampled in the march of so-called progress'

In the hours following Altman's return as CEO of OpenAI, a raucous celebration began as cofounder Greg Brockman posted "We're so back" to X with a picture of himself and several smiling employees. This prompted several X users to assert that there are very few Black employees at OpenAI.

While the chaos sparked by OpenAI's former board appears to be coming to a close, the people added to its new interim board, composed entirely of white men, have already drawn criticism and could signal a swift return to business as usual, in which the white, male dominance of tech continues.

The issue is that OpenAI's mission to create tech that "benefits all of humanity" will continue to be pursued by key decision-makers who don't represent a variety of backgrounds and aren't currently a robust reflection of the humanity it purports to serve.

"It's a real shame because these people are mostly obsessed with and preoccupied with these very sci-fi kind of fantasies about how AGI is going to usher in utopia or completely annihilate humanity. Lots of people are getting trampled in the march of so-called progress," Dr. Émile Torres, AI philosopher and researcher, told Insider.

Only some of humanity

Black tech workers are more concerned than their white counterparts about being replaced by AI, even as experts urge companies to grow with inclusivity and responsibility.

Real-world harms of AI technology, such as facial recognition used in policing, have disproportionately led to the wrongful arrest of Black people.

Altman and the leaders of OpenAI subscribe to an ideology that Torres and Dr. Gebru dubbed TESCREAL (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism), which goes against the company's mission, according to Torres.

The class of ideologies that TESCREAL refers to, including effective altruism (EA), has been criticized for avoiding structural problems and for overly centering Western and wealthy-centric views.

"All of these people, as far as I can tell, are cut from the same cloth. You can locate them within the general TESCREAL worldview," they said. "It's that worldview that is the greatest danger to human wellbeing. It's switching out players who disagree about the details, or certain aspects of the world view, isn't going to change much for most people."

New NASA imagery reveals startling behavior among group of ‘banished’ beavers: ‘[They] were just about everywhere’

Jeremiah Budin
Thu, November 23, 2023


Satellite imagery from NASA has revealed some surprising and encouraging information about beavers. But these aren’t space beavers — rather, they are beavers that were “banished” to a remote Idaho valley and have since transformed it into a “lush wetland” that protects against wildfires, as Yale Environment 360 reported.

Beavers have often been viewed as a nuisance, as they chew down trees and build dams that can flood fields. In the 1930s, officials began trapping them and relocating them to remote areas where they could, in theory, do less harm.

Instead of doing harm, the beavers did a lot of good, as their activities led to a sort of natural irrigation that promoted the growth of grasses, shrubs, and other vegetation that in turn supported many types of wildlife.

Now, people like rancher Jay Wilde are working with researchers to intentionally reintroduce beavers to their land in order to make it more fertile and enhance biodiversity.

While it may seem like an iffy idea to introduce a species to an area that it previously did not inhabit, researchers have explained that prior to the rise of beaver hunting and trapping, the animals were ever-present, meaning that reintroducing beavers simply brings population levels back to their historical norm.

“Prior to beaver trapping, beaver dams were just about everywhere in the West,” Wally Macfarlane, a researcher at Utah State University who co-developed the Beaver Restoration Assessment Tool, said in a NASA report. “So what we’re attempting to do is to bring beaver dam densities back to historic levels where possible. In doing so, we’re building important drought resiliency and restoring stream areas. I think there’s a lot of foresight by NASA realizing how these things connect.”

In addition to building drought resiliency and increasing biodiversity, the beavers have helped guard against wildfires. In one area where beavers were reintroduced, a 2018 wildfire left the parts where the beavers had settled untouched, Yale Environment 360 reported.

Similar efforts to reintroduce beavers in an effort to enhance biodiversity have also occurred in England.

A Fish That Fishes for Other Fish Lives Its Life Upside Down

Elizabeth Anne Brown
Updated Thu, November 23, 2023 


An upside-down whipnose anglerfish spotted by Japanese researchers in the western North Pacific Ocean in 2011. 
(Japan Agency for Marine-Earth Science and Technology via The New York Times)

Usually, a belly-up fish isn’t long for this world. But video evidence from the deep ocean suggests that some species of anglerfish — the nightmarish deep-sea fish with bioluminescent lures — live their whole lives upside down.

“Just when you think they couldn’t get any weirder, anglerfish outdo themselves,” said Pamela Hart, an associate professor at the University of Alabama who researches fish that live in extreme conditions.

Sign up for The Morning newsletter from the New York Times

The behavior, documented earlier this month in the Journal of Fish Biology, is “beyond anyone’s wildest imagination,” said Elizabeth Miller, who studied the evolution of deep-sea fish as a postdoctoral fellow at the University of Oklahoma. (Neither Miller nor Hart was involved in the discovery.)

Whipnose anglerfish are small sea monsters with a fishing rod-like appendage on their faces. While a whipnose’s body is no bigger than that of a house cat, it has a long, floppy spine that sprouts from its nose and stretches up to four times its body length. The fish tempt prey with bioluminescent bacteria that live in the tip of the lure. (This applies to female whipnoses, said Andrew Stewart, curator of fishes at the Museum of New Zealand and an author of the study. The males of the species are “sad little tadpole things” a fraction of the size of the females, and without the lure.)

For nearly a century, scientists assumed whipnose anglerfish would dangle their lures in front of their faces, as many anglerfish with shorter lures do. But now, videos from underwater missions in the Atlantic, Pacific and Indian Oceans suggest that whipnoses spend their lightless days upside down, with their long lures hanging toward the seafloor.

The videos are confirmation of a tantalizing observation from more than 20 years ago, Stewart said.

In 1999, a remotely operated vehicle, or ROV, caught glimpses of whipnose anglerfish floating motionless, and, notably, upside down, about midway between Hawaii and California. Researchers suspected that they were targeting prey on the seafloor, but scientists couldn’t rule out the possibility that it was just one goofy fish behaving abnormally, Hart explained — a hazard of animal behavior studies.

If that whipnose was a goof, then they’re all goofs, based on evidence from the footage that has been captured by remote subs and crewed vehicles. In a video filmed near the Izu-Ogasawara Trench off Japan, a whipnose drifts with the current, her body parallel to the seafloor, mouth agape and hundreds of tiny teeth glistening in the light.

Suddenly, she bursts into motion, using her powerful tail to swim in a tight circle, still inverted. Eventually she calms and begins drifting again, only to bump into the ROV’s light apparatus — probably a shock for a creature used to living in the featureless deep sea. Then she uses the tiny fins at her side to backpedal into the darkness.

In other videos, “the propellers and power of the submersible tumbled the anglerfish so it was right side up,” Stewart said. But the whipnoses weren’t having any of it: “they very quickly reverted to being upside down again,” he said.

While humans might find it hard to take a belly-up predator seriously, swimming upside down may make the whipnose more lethal. Researchers suspect that, by keeping their lures farther from their mouths, whipnose anglerfish could take down larger and faster prey without accidentally biting themselves. Stewart said that one dissected whipnose specimen had a gonatid squid in its belly — a real prize.

“Squid are very much the Ferrari of the deep ocean,” he said, adding that whipnose anglerfish “must be extremely fast and efficient for them to have nailed a gonatid.”

This new insight into the whipnoses’ behavior underscores how revolutionary ROV footage has been for deep-sea biology, Stewart said. Before this technology, scientists relied on dead specimens hoisted from the deep by trawling nets and pickled to preserve their delicate tissues, which are often damaged by the drastic change in pressure. There was nothing in the whipnose anglerfish’s anatomy to suggest their bizarre behavior.

“These videos are really precious,” Miller said. “Even a short, one-minute video tells us so much about how the anglerfish is living its life that we can’t otherwise get.”

c.2023 The New York Times Company

Chinese Spacecraft That Smashed Into Moon Was Carrying Something Mysterious, Scientists Say

Noor Al-Sibai
Thu, November 23, 2023 



In early 2022, a piece of Chinese space junk hit the Moon and left a mysterious double pockmark on its surface — and, as it turns out, there's more to this story than meets the eye.

In a new paper published in the Planetary Science Journal, researchers from the University of Arizona explained that per their findings, there's little doubt that the object that hit the moon in March 2022 was debris from a Chinese Long March 3C rocket booster, and that the strange double crater it left suggests that it carried an undisclosed payload along with it.

This specific lunar collision, to be fair, has been mired in speculation since before it even happened.

As Space.com recounts in its own write-up of the research, the debacle began in 2015 when scientists noticed that some manner of space junk was on a collision course with the Moon. Astronomers initially believed it was a SpaceX Falcon 9 booster, but eventually, scientists figured out that it was the launcher for China's Chang'e lunar rover mission, which had been launched a year prior.

Though China denied that the craft was part of a Chinese mission, the US Space Command pushed back on that assertion, saying that the probe's spent upper stage never re-entered our atmosphere, which would mean it was out there floating somewhere near-ish to our planet (or, as it turns out, our Moon).

Not only did the latest study find with a high degree of confidence that the debris that hit the Moon in March 2022 was almost certainly from the Long March 3C rocket, but the researchers also concluded that the strange double crater it left behind indicates that it was carrying something else.

However, what that second object could be is still a matter of guesswork, it seems.

Specifically, the researchers' observations of the Chinese rocket suggested that there was something heavy attached to it that made it tumble in space before its crash landing — which isn't how these kinds of objects would normally act in these situations.

"Something that's been in space as long as this is subjected to forces from the Earth's and the Moon's gravity and the light from the Sun," UA aerospace doctoral student Tanner Campbell said in a school press release about the research. "So you would expect it to wobble a little bit, particularly when you consider that the rocket body is a big empty shell with a heavy engine on one side. But this was just tumbling end-over-end, in a very stable way."

Whatever was attached to the obliterated rocket, it seems to have been big enough to counterbalance its two 1,200-pound engines and make it tumble like a kid in gymnastics class. But after looking at the booster's known payloads, the UA team determined that an object of a suitable mass was mysteriously missing from the list.

"Obviously, we have no idea what it might have been — perhaps some extra support structure, or additional instrumentation, or something else," Campbell said. "We probably won't ever know."

Giant 1.5-foot-long rat that can crack open coconuts photographed for 1st time on remote island

Sascha Pare
Fri, November 24, 2023 

A camera trap picture of a Vangunu giant rat in the Solomon Islands.


The first ever images of the Vangunu giant rat, an elusive rodent that can grow up to 1.5 feet long and is known from only a single specimen that fell out of a tree six years ago, have been recorded by researchers in the Solomon Islands.

Using camera traps and a particularly tasty lure, the team snapped pictures of four rodents at least twice the size of common rats scurrying around the forest floor on the Solomon Islands, an archipelago northeast of Australia in the Pacific Ocean.

The rodents were "irrefutably identified" as Vangunu giant rats (Uromys vika) owing to their large size, long tails and very short ears, according to a study published Nov. 20 in the journal Ecology and Evolution.

"Capturing images of the Vangunu giant rat for the first time is extremely positive news for this poorly known species," study lead author Tyrone Lavery, a lecturer of native vertebrate biology at the University of Melbourne in Australia, said in a statement.

Indigenous people living on Vangunu, an island that sits in the center of the Solomon Islands, have long known that rats so big they can chew through coconuts live in their forest — but the species had eluded scientists. The first tangible proof of its existence came in 2017, when commercial loggers felled a tree on Vangunu and a giant rat dropped out of it dead.

Related: Can rats 'imagine'? Rodents show signs of imagination while playing VR games

A few years later, locals from the Zaira community, who manage the largest remaining tract of Vangunu's pristine forest and hold intimate knowledge of its ecology, helped the same researchers set up their camera traps to finally document the secretive rodents in their habitat.

"All images were captured during nocturnal hours, and activity was clustered around midnight," the researchers wrote in the study. They lured the giant rats with sesame oil, which may have been key to their success, they added, as previous attempts using peanut butter only attracted non-native black rats (Rattus rattus).

The pictures come "at a critical juncture," Lavery said. Vangunu giant rats could soon go extinct due to commercial logging, which has decimated much of the island's forest — including the area where the first giant rat specimen was found in 2017, according to the study.

Last year, the Solomon Islands' government granted consent for commercial logging of the last scraps of forest where the already critically endangered rats live. "Logging consent has been granted at Zaira, and if it proceeds it will undoubtedly lead to extinction of the Vangunu giant rat," Lavery said.

Zaira community representatives have lodged an appeal against the decision.

"We hope that these images of U. vika will support efforts to prevent the extinction of this threatened species," Lavery said.

The US may no longer be able to fight more than one major war at a time
Tom Porter
Thu, November 23, 2023






During the Cold War, the US had the capacity to fight two wars simultaneously.


Amid rising global conflict, US military planning is again under scrutiny.


An analyst told Business Insider that the US had shifted its doctrine in response to new threats.


At the height of US power, the Pentagon had a clear task: ensure the US could fight and win against two adversaries at the same time.

That strategy enabled America to deter the Soviet Union and its allies and emerge triumphant from the decades-long Cold War. It then fought in Afghanistan and Iraq simultaneously in the wake of the 9/11 attacks.

But a recent proliferation in threats facing the US, ranging from terror groups to a resurgent China, has prompted a rethink.

A shift after the Soviet collapse


US Army 3rd Division 3-7 Bradley fighting vehicles took up a position along a road on March 19, 2003, inside the demilitarized zone between Kuwait and Iraq. Scott Nelson

After the collapse of the Soviet Union in 1991, the US cut its military spending with the world seemingly headed toward a new era of stability.

The Pentagon retained the ability to battle two adversaries at once, a capacity tested after the 9/11 terror attacks when the US invaded Afghanistan and Iraq in a bid to reshape the region and reduce the threat of Islamist militants.

But toward the end of the 2000s, the US faced daunting new threats, and Pentagon officials began redrawing their plans.

The threat from China and Russia


Chinese soldiers practice marching in formation ahead of a military parade to celebrate the 70th anniversary of the founding of the People's Republic of China on September 25, 2019 in Beijing, China. Pool

Now, the Pentagon faces the possibility of war with resurgent major powers Russia and China, which can deploy huge militaries and sophisticated weapons.

Over the past decade, both have signaled their hostility to the US' global dominance, and their willingness to extend their power by force, with Russia waging a campaign to conquer US ally Ukraine and China menacing Taiwan with invasion.

They've made the prospect of the US triumphing in two simultaneous conflicts increasingly improbable, unless it massively increases its defense spending and expands its military, Raphael Cohen, an analyst with the RAND Corporation think tank, told BI.

"That's going to be a hard sell in this political climate," said Cohen.

"Fighting two wars simultaneously: That's a fairly sizable commitment, particularly once powers become on the scale of China or Russia," he continued.

A new doctrine

The US military was stretched thin fighting simultaneously in Afghanistan and Iraq. That strain prompted a 2009 rethink of US military doctrine under President Barack Obama, which was later endorsed by Presidents Donald Trump and Joe Biden.

Instead of winning two wars, it's now committed to being able to win against one major adversary such as China, and to present a serious deterrent to attacks from other enemies, Cohen said.

The Pentagon's 2022 US National Defense Strategy, the most recent, commits the US to being able to "prevail in conflict" yet still "deter opportunistic aggression elsewhere."

In planning for the possibility of a new world war, the US must look at the global picture.

The US has long relied on its enemies being divided, and unlikely to join forces to attack the US simultaneously.

But China, Russia, Iran, and other US adversaries are drawing closer together, sharing weapons technology and drawing up new alliances, magnifying their threat.

In a worst-case scenario in which various nation-state adversaries of the US attacked simultaneously, the US would likely be fighting alongside its allies in various regions.

European allies could help push back Russia; allies in the Middle East, such as Israel or some Arab states, would fight against Iran; and US allies in the Pacific region, such as Australia and Japan, would likely play an important role in repelling Chinese aggression, said Cohen.

The Ukraine war is providing important new lessons to the US in what it and its allies need to do to prepare for this scenario, said Cohen.

For example, both Russia and Ukraine have burned through vast amounts of ammunition in the conflict, highlighting the need for the US to increase defense industrial capacities to support allies.

"That's still an expensive proposition," but one less expensive than vastly expanding the US military, said Cohen.

Planning to counter today's threats, said Cohen, comes down not just to military might, but to political will and the careful cultivation of alliances.

"If there's a World War, you know, it won't be the sort of single-handed conflicts that we've sort of gotten used to," he said.