
Thursday, November 21, 2024

YOU ARE WHAT YOU READ

Poor mental health linked to browsing negative content online



University College London





People with poorer mental health are more prone to browsing negative content online, which further exacerbates their symptoms, finds a study led by UCL researchers.

The relationship between mental health and web-browsing is causal and bi-directional, according to the Wellcome-funded study published in Nature Human Behaviour.

The researchers have developed a plug-in tool* that adds ‘content labels’ to webpages—similar to nutrition labels on food—designed to help users make healthier and more informed decisions about the content they consume. These labels emphasise the emotional impact of webpage content, along with its practicality and informativeness.

Co-lead author Professor Tali Sharot (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology) said: “Our results show that browsing negatively valenced content not only mirrors a person’s mood but can also actively worsen it. This creates a feedback loop that can perpetuate mental health challenges over time.”

Over 1,000 study participants answered questions about their mental health and shared their web browsing history with the researchers. Using natural language processing methods, the researchers analysed the emotional tone of the webpages participants visited. They found that participants with worse moods and mental health symptoms tended to browse more negative content online, and that those who browsed more negative content subsequently felt worse.
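The study’s analysis code is not included in this release. As a rough sketch of what scoring the emotional tone of browsed pages can look like, the snippet below uses the off-the-shelf VADER sentiment analyzer from NLTK; the page texts are invented, and the researchers’ actual natural language processing pipeline may well differ.

```python
# Minimal sketch: scoring the emotional tone (valence) of visited webpages.
# Illustrative only -- not the study's actual pipeline. Requires: pip install nltk
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

# Hypothetical browsing history: text extracted from visited pages.
pages = [
    "Markets rally as new jobs report beats expectations.",
    "Disaster toll rises as officials warn of worsening conditions.",
]

for text in pages:
    score = analyzer.polarity_scores(text)["compound"]  # -1 (negative) to +1 (positive)
    print(f"{score:+.3f}  {text}")

# Averaging compound scores over a session gives one crude measure of how
# negative a participant's consumed content was.
mean_valence = sum(analyzer.polarity_scores(t)["compound"] for t in pages) / len(pages)
print("session mean valence:", round(mean_valence, 3))
```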

In an additional study, the researchers manipulated the websites people visited, exposing some participants to negative content and others to neutral content. They found that those exposed to negative websites reported worse moods afterward, demonstrating a causal effect of browsing negative content on mood. When these participants were then asked to browse the internet freely, those who had previously viewed negative websites—and consequently experienced a worse mood—chose to view more negative content. This finding highlights that the relationship is bi-directional: negative content affects mood, and a worsened mood drives the consumption of more negative content.

Co-lead author, PhD student Christopher Kelly (UCL Psychology & Language Sciences, Max Planck UCL Centre for Computational Psychiatry and Ageing Research, and Massachusetts Institute of Technology), said: "The results contribute to the ongoing debate regarding the relationship between mental health and online behaviour.

“Most research addressing this relationship has focused on the quantity of use, such as screen time or frequency of social media use, which has led to mixed conclusions. Here, instead, we focus on the type of content browsed and find that its emotional tone is causally and bidirectionally related to mental health and mood."

To check whether an intervention could be used to change web-browsing choices and improve mood, the researchers conducted a further study. They added content labels to the results of a Google search, which informed participants whether each search result would likely improve their mood, make it worse, or have no impact. Participants were then more likely to choose the positively labelled sites, and when asked about their mood afterwards, those who had looked at the positive websites were indeed in better moods than other participants.

In response, the researchers have developed a free browser plug-in that adds labels to Google search results, providing three different ratings of how practical a website’s content is, how informative it is, and how it impacts mood.

Professor Sharot said: “We are accustomed to seeing content labels on our groceries, providing nutritional information such as sugar, calories, protein, and vitamins to help us make informed decisions about what we eat. A similar approach could be applied to the content we consume online, empowering people to make healthier choices online.”

Digital Diet browser extension

Wednesday, November 20, 2024

Op-Ed: Maybe polarizing social media was an even dumber idea than it looks

By Paul Wallis
November 20, 2024
DIGITAL JOURNAL

Elon Musk's X. — © AFP

The hordes of people leaving X are a sort of statistical obituary to years of propaganda. X is in big trouble, mainly because of its policies and algorithms and a bizarre relationship with infuriated advertisers.

Like most of American media, X instantly demoted itself to half of its own market share with its politics. It also became a servant, like FOX, to one side only. That’s not working out too well.

You’ll also notice from media coverage that nobody’s questioning the self-decapitation and disembowelment of X. The other glaring problem is that somehow this situation is now seen as normal.

It isn’t normal. People and advertisers are voting with their dollars and clicks.

Disgruntled X users are now heading to Bluesky in vast numbers. Millions of people have basically abandoned X. Bluesky looks a bit like early Twitter. Seems quite OK as an environment. I haven’t seen a rabid lunatic yet.

The new and huge problem is the quality of information vs communication. This is now an abyss of opposing information.

Unless there’s some middle ground, and the medium isn’t hopelessly biased, social media has just machinegunned itself in the foot. Politics is not the sole interest of the world. Other things happen, too, y’know.

That’s where audience loss is likely to be fatal. What if you don’t want to read about The Adventures of Donny and Elon in Disneyland?

What if you want a broad and useful mix of your own interests, instead?

The sole and whole purpose of social media is communication. Reducing your content range to such a narrow focus means you inevitably lose users.

Nor is the Chicken Little approach to information exactly popular. Nobody listens to raving lunatics if they don’t have to.

The possible exception to that theory is screen-fed America. Markets and media businesses take a long time to change course. This market doesn’t eat solid food anymore. The child-psychology is pretty obvious. These sources will probably simply continue to produce pablum.

The total stagnation of American mass media was one thing. This social media situation is stagnation of real-time information as well. X has made itself useless to its users.

People obviously don’t like that. The move to Bluesky is self-defense. The social media market can blame itself for a newcomer just walking off with its customers.

Let’s talk “dumb”.

Billions of dollars of investment are now evaporating in a festering social media environment and those billions aren’t coming back.

The World’s Richest Sudden Instant Fan Boy doesn’t have the excuse of being geriatric or illiterate. He should have enough metrics to see the cliff coming.

The big money markets in the US are all blue. The black holes are all red, with the possible exception of Texas. Ignore those big blue markets at your peril. Lose those markets and you’re doing hillbilly scale dollars. These much smaller markets can’t deliver the same value. Advertisers and marketers know that.

The rest of the world is also reacting very negatively to the toxicity of social media. The hubris and hype are all American-generated. The rest of the world can easily ignore most of this garbage.

What if reality gets involved in this mess? Economic risks and social disintegration are circling like buzzards over America in huge numbers with dollar signs on them. All types of media should be building financial fallout shelters about now. Markets can evaporate overnight.

No amount of fake news hysteria and self-congratulation can make an impression on that situation. You need “Make America Solvent Again” to do that, and it looks like that’s not happening for the next four years.

How would you read a commercial market that is basically suicidal? Would you focus on customer retention in the afterlife? You might have to do that.

Social media needs a functional market to exist at all. If the money’s pulling out, the message couldn’t be clearer.

__________________________________________________
Disclaimer

The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.


Social media users probably won’t read beyond this headline, researchers say



A new study of 35 million news links circulated on Facebook reports that more than 75% of the time they were shared without the link being clicked on and read



Penn State




UNIVERSITY PARK, Pa. — Congratulations. Reading this far into the story is a feat not many will accomplish, especially if it is shared on Facebook, according to a team led by Penn State researchers. In an analysis of more than 35 million public posts containing links that were shared extensively on the social media platform between 2017 and 2020, the researchers found that around 75% of the shares were made without the posters clicking the link first. Political content from both ends of the spectrum was shared without clicking more often than politically neutral content.

The findings, which the researchers said suggest that social media users tend to merely read headlines and blurbs rather than fully engage with core content, appeared today (Nov. 19) in Nature Human Behaviour. While the data were limited to Facebook, the researchers said the findings likely extend to other social media platforms and help explain why misinformation can spread so quickly online.

“It was a big surprise to find out that more than 75% of the time, the links shared on Facebook were shared without the user clicking through first,” said corresponding author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State. “I had assumed that if someone shared something, they read and thought about it, that they’re supporting or even championing the content. You might expect that maybe a few people would occasionally share content without thinking it through, but for most shares to be like this? That was a surprising, very scary finding.”

Access to the Facebook data was granted via Social Science One, a research consortium hosted by Harvard University’s Institute for Quantitative Social Science focused on obtaining and sharing social and behavioral data responsibly and ethically. The data were provided in collaboration with Meta, Facebook’s parent company, and included user demographics and behaviors, such as a “political page affinity score.” This score was determined by external researchers identifying the pages users follow — like the accounts of media outlets and political figures. The researchers used the political page affinity score to assign users to one of five groups — very liberal, liberal, neutral, conservative and very conservative.

To determine the political content of shared links, the researchers in this study used machine learning, a form of artificial intelligence, to identify and classify political terms in the link content. They scored the content on a similar five-point political affinity scale, from very liberal to very conservative, based on how many times each affinity group shared the link.
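As a toy illustration of that scoring idea (not the paper’s actual code), a link’s affinity can be computed as a share-weighted average across the five groups; the counts below are invented.

```python
# Sketch: estimate a link's political affinity from how often each
# user-affinity group shared it. All numbers here are hypothetical.
shares_by_group = {  # group -> number of shares of one link
    "very liberal": 120, "liberal": 340, "neutral": 500,
    "conservative": 2100, "very conservative": 1800,
}
scale = {"very liberal": -2, "liberal": -1, "neutral": 0,
         "conservative": 1, "very conservative": 2}

total = sum(shares_by_group.values())
affinity = sum(scale[g] * n for g, n in shares_by_group.items()) / total
print(f"content affinity score: {affinity:+.2f}")  # ~ +1.05, leaning conservative
```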

"We created this new variable of political affinity of content based on 35 million Facebook posts during election season across four years. This is a meaningful period to understand macro-level patterns behind social media news sharing,” said co-author Eugene Cho Snyder, assistant professor of humanities and social sciences at New Jersey Institute of Technology

The team validated the political affinity of news domains, such as CNN or Fox, based on the media bias chart produced by AllSides, an independent company focused on helping people understand the biases of news content, and a ratings system developed by researchers at Northeastern University.

With these rating systems, the team manually sorted 8,000 links, first identifying them as political or non-political content. Then the researchers used this dataset to train an algorithm that assessed 35 million links shared more than 100 times on Facebook by users in the United States.
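The release does not name the model the team trained. A minimal sketch of this kind of supervised text classification, using scikit-learn and invented example links, might look like the following.

```python
# Sketch: train a political/non-political link classifier on a small
# hand-labeled set, then apply it to unlabeled links. The texts, labels,
# and model choice are illustrative, not the study's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Senate passes sweeping election reform bill",
    "Ten easy weeknight pasta recipes",
    "Governor attacks rival in heated debate",
    "Local team wins championship in overtime",
]
train_labels = ["political", "non-political", "political", "non-political"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

# In the study, a model trained on ~8,000 manually sorted links was then
# applied to 35 million links shared more than 100 times on Facebook.
print(clf.predict(["President signs executive order on immigration"]))
```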

“A pattern emerged that was confirmed at the level of individual links,” Snyder said. “The closer the political alignment of the content to the user — both liberal and conservative — the more it was shared without clicks. … They are simply forwarding things that seem on the surface to agree with their political ideology, not realizing that they may sometimes be sharing false information.”

The findings support the theory that many users superficially read news stories based just on headlines and blurbs, Sundar said, explaining that Meta also provided data from its third-party fact-checking service — which identified that 2,969 of the shared URLs linked to false content.

The researchers found that these links were shared over 41 million times without being clicked. Of these shares, 76.94% came from conservative users and 14.25% from liberal users. The researchers explained that the vast majority — up to 82% — of the links to false information in the dataset originated from conservative news domains.

To cut down on sharing without clicking, Sundar said that social media platforms could introduce “friction” to slow the share, such as requiring people to acknowledge that they have read the full content prior to sharing.

“Superficial processing of headlines and blurbs can be dangerous if false data are being shared and not investigated,” Sundar said, explaining that social media users may feel that content has already been vetted by those in their network sharing it, but this work shows that is unlikely. “If platforms implement a warning that the content might be false and make users acknowledge the danger in doing so, that might help people think before sharing.”

This wouldn’t stop intentional misinformation campaigns, Sundar said, and individuals still have a responsibility to vet the content they share.

“Disinformation or misinformation campaigns aim to sow the seeds of doubt or dissent in a democracy — the scope of these efforts came to light in the 2016 and 2020 elections,” Sundar said. “If people are sharing without clicking, they’re potentially playing into the disinformation and unwittingly contributing to these campaigns staged by hostile adversaries attempting to sow division and distrust.”

So, why do people share without clicking in the first place?

“The reason this happens may be because people are just bombarded with information and are not stopping to think through it,” Sundar said. “In such an environment, misinformation has more of a chance of going viral. Hopefully, people will learn from our study and become more media literate, digitally savvy and, ultimately, more aware of what they are sharing.”

Other collaborators on this paper include Junjun Yin and Guangqing Chi, Penn State; Mengqi Liao, University of Georgia; and Jinping Wang, University of Florida.

The Social Science Research Council, New York, supported this research.

 

Sweet tooth: Ethiopian wolves seen feeding on nectar



University of Oxford
Image: An Ethiopian wolf (Canis simensis) licks nectar from the Ethiopian red hot poker flower (Kniphofia foliosa). Credit: Adrien Lesaffre




Summary:

  • For the first time, Ethiopian wolves have been documented feeding on the nectar of Ethiopian red hot poker flowers.
  • This is the first large carnivore species ever to be documented feeding on nectar.
  • In doing so, the wolves may act as pollinators – perhaps the first known plant-pollinator interaction involving a large carnivore. 

Content:

New findings, published in the journal Ecology, describe a newly documented behaviour of Ethiopian wolves (Canis simensis). Researchers at the Ethiopian Wolf Conservation Programme (EWCP) observed Ethiopian wolves foraging for the nectar of the Ethiopian red hot poker (Kniphofia foliosa) flower. Some individuals would visit as many as 30 blooms in a single trip, with multiple wolves from different packs exploiting this resource. There is also some evidence of social learning, with juveniles being brought to the flower fields along with adults.

In doing so, the wolves’ muzzles become covered in pollen, which they could transfer from flower to flower as they feed. This novel behaviour is perhaps the first known plant-pollinator interaction involving a large predator, and it makes the Ethiopian wolf the only large carnivore ever observed feeding on nectar.

Dr Sandra Lai, EWCP Senior Scientist based at the University of Oxford, and lead author on the new study, said: “These findings highlight just how much we still have to learn about one of the world’s most-threatened carnivores. It also demonstrates the complexity of interactions between different species living on the beautiful Roof of Africa. This extremely unique and biodiverse ecosystem remains under threat from habitat loss and fragmentation.”

Professor Claudio Sillero, EWCP founder and director based at the University of Oxford, describes seeing this behaviour: “I first became aware of the nectar of the Ethiopian red hot poker when I saw children of shepherds in the Bale Mountains licking the flowers. In no time, I had a taste of it myself - the nectar was pleasantly sweet. When I later saw the wolves doing the same, I knew they were enjoying themselves, tapping into this unusual source of energy. I am chuffed that we have now reported this behaviour as being commonplace among Ethiopian wolves and explored its ecological significance.”

The Ethiopian wolf is the rarest wild canid species in the world, and Africa’s most threatened carnivore. Found only in the Ethiopian highlands, fewer than 500 individuals survive, in 99 packs restricted to 6 Afroalpine enclaves.

The Ethiopian Wolf Conservation Programme (EWCP) was set up in 1995 to protect the wolves, and their unique habitat. It is a partnership between the Wildlife Conservation Research Unit (WildCRU) at the University of Oxford, the Ethiopian Wildlife Conservation Authority (EWCA), and Dinkenesh Ethiopia. EWCP is the longest-running conservation programme in Ethiopia, aiming to safeguard the future of natural habitats for the benefit of wildlife and people in the highlands of Ethiopia.

EWCP Website: https://www.ethiopianwolf.org/

An Ethiopian wolf (Canis simensis) feeding amongst the blooming Ethiopian red hot poker flowers (Kniphofia foliosa). © Adrien Lesaffre

An Ethiopian wolf (Canis simensis) with its muzzle covered in pollen after feeding on the nectar of the red hot poker (Kniphofia foliosa). © Adrien Lesaffre


Notes to editors:
The study ‘Canids as pollinators? Nectar foraging by Ethiopian wolves may contribute to the pollination of Kniphofia foliosa’ has been published in Ecology at https://esajournals.onlinelibrary.wiley.com/doi/full/10.1002/ecy.4470

Images relating to this release that can be used in articles can be found here: https://drive.google.com/drive/folders/1tdnRo2HpSPhTX1c6jrKrbWQfPmNiQLuK?usp=sharing These are for editorial purposes relating to this press release ONLY and MUST BE credited. They MUST NOT be sold on to third parties.

Tuesday, November 19, 2024

Trump’s Election Is Also a Win for Tech’s Right-Wing “Warrior Class”

Silicon Valley has successfully rebranded military contracting as a proud national duty for the industry.
November 17, 2024
Source: The Intercept




Donald Trump pitched himself to voters as a supposed anti-interventionist candidate of peace. But when he reenters the White House in January, at his side will be a phalanx of pro-military Silicon Valley investors, inventors, and executives eager to build the most sophisticated weapons the world has ever known.

During his last term, the U.S. tech sector tiptoed skittishly around Trump; longtime right-winger Peter Thiel stood as an outlier in his full-throated support of MAGA politics as other investors and executives largely winced and smiled politely. Back then, Silicon Valley still offered the public peaceful mission statements of improving the human condition, connecting people, and organizing information. Technology was supposed to help, never harm. No more: People like Thiel, Palmer Luckey, Trae Stephens, and Marc Andreessen make up a new vanguard of powerful tech figures who have unapologetically merged right-wing politics with a determination to furnish a MAGA-dominated United States with a constant flow of newer, better arms and surveillance tools.



These men (as they tend to be) hold much in common beyond their support of Republican candidates: They share the belief that China represents an existential threat to the United States (an increasingly bipartisan belief, to be sure) and must be dominated technologically and militarily at all costs. They are united in their aversion, if not open hostility, to arguments that the pace of invention must be balanced against any moral consideration beyond winning. And they all stand to profit greatly from this new tech-driven arms race.

Trump’s election marks an epochal victory not just for the right, but also for a growing conservative counterrevolution in American tech that has successfully rebranded military contracting as the proud national duty of the American engineer, not a taboo to be dodged and hidden. Meta’s recent announcement that its Llama large language model can now be used by defense customers means that Apple is the last of the “Big Five” American tech firms — Amazon, Apple, Google, Microsoft, and Meta — not engaged in military or intelligence contracting.

Elon Musk has drawn the lion’s share of media scrutiny (and Trump world credit) for throwing his fortune and digital influence behind the campaign. Over the years, the world’s richest man has become an enormously successful defense contractor via SpaceX, which has reaped billions selling access to rockets that the Pentagon hopes will someday rapidly ferry troops into battle. SpaceX’s Starlink satellite internet has also become an indispensable American military tool, and the company is working on a constellation of bespoke spy satellites for U.S. intelligence agency use.

But Musk is just one part of a broader wave of militarists who will have Trump’s ear on policy matters.

After election day, Musk replied to a celebratory tweet from Palmer Luckey, a founder of Anduril, a $14 billion startup that got its start selling migrant-detecting surveillance towers for the southern border and now manufactures a growing line of lethal drones and missiles. “Very important to open DoD/Intel to entrepreneurial companies like yours,” Musk wrote. Anduril’s rise is inseparable from Trumpism: Luckey founded the firm in 2017 after he was fired by Meta for contributing to a pro-Trump organization. He has been outspoken in his support for Trump as both candidate and president, fundraising for him in both 2020 and 2024.

Big Tech historically worked hard to be viewed by the public as inhabiting the center-left, if not being apolitical altogether. But even that is changing. While Luckey was fired for merely supporting Trump’s first campaign, his former boss (and former liberal) Mark Zuckerberg publicly characterized Trump surviving the June assassination attempt as “bad ass” and quickly congratulated the president-elect on a “decisive victory.” Zuckerberg added that he is “looking forward to working with you and your administration.”

To some extent, none of this is new: Silicon Valley’s origin is one of militarism. The American computer and software economy was nurtured from birth by the explosive growth and endless money of the Cold War arms race and its insatiable appetite for private sector R&D. And despite the popular trope of liberal Google executives, the tech industry has always harbored a strong anti-labor, pro-business instinct that dovetails neatly with conservative politics. It would also be a mistake to think that Silicon Valley was ever truly in lockstep with progressive values. A 2014 political ad by Americans for a Conservative Direction, a defunct effort by Facebook to court the Republican Party, warned that “it’s wrong to have millions of people living in America illegally” and urged lawmakers to “secure our borders so this never happens again.” The notion of the Democrat-friendly wing of Big Tech as dovish is equally wrong: Former Google chair and longtime liberal donor Eric Schmidt is a leading China hawk and defense tech investor. Similarly, the Democratic Party itself hasn’t meaningfully distanced itself from militarism in recent history. The current wave of startups designing smaller, cheaper military drones follows the Obama administration’s eager mass adoption of the technology, and firms like Anduril and Palantir have thrived under Joe Biden.

What has changed is which views the tech industry is now comfortable expressing out loud.

A year after Luckey’s ouster from the virtual reality subsidiary he founded, Google became embroiled in what grew into an industry-wide upheaval over military contracting. After it was reported that the company sought to win Project Maven, a lucrative drone-targeting contract, employees who had come to the internet titan to work on consumer products like Search, Maps, and Gmail found themselves disturbed by the thought of contributing to a system that could kill people. Waves of protests pushed Google to abandon the Pentagon with its tail between its legs. Even Fei-Fei Li, then Google Cloud’s chief artificial intelligence and machine learning scientist, described the contract as a source of shame in internal emails obtained by the New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google,” she wrote. “I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry.”

It’s an exchange that reads deeply quaint today. The notion that the country’s talented engineers should build weapons is becoming fully mainstreamed. “Societies have always needed a warrior class that is enthused and excited about enacting violence on others in pursuit of good aims,” Luckey explained in an on-campus talk about his company’s contributions to the Ukrainian war effort with Pepperdine University President Jim Gash. “You need people like me who are sick in that way and who don’t lose any sleep making tools of violence in order to preserve freedom.”

This “warrior class” mentality traces its genealogy to Peter Thiel, whose disciples, like Luckey, spread the gospel of a conservative-led arms race against China. “Everything that we’re doing, what the [Department of Defense] is doing, is preparing for a conflict with a great power like China in the Pacific,” Luckey told Bloomberg TV in a 2023 interview. At the Reagan National Defense Forum in 2019, Thiel, a lifelong techno-libertarian and Trump’s first major backer in tech, rejected the “ethical framing” of the question of whether to build weapons. “When it’s a choice between the U.S. and China, it is always the ethical decision to work with the U.S. government,” he said. Though Sinophobia is increasingly standard across party affiliations, it’s particularly frothing in the venture-backed warrior class. In 2019, Thiel claimed that Google had been “infiltrated by Chinese intelligence” and two years later suggested that bitcoin is “a Chinese financial weapon against the U.S.”

Thiel often embodies the self-contradiction of Trumpist foreign policy, decrying the use of taxpayer money on “faraway wars” while boosting companies that design weapons for exactly that. Like Trump, Thiel is a vocal opponent of Bush- and Obama-era adventurism in the Middle East as a source of nothing but regional chaos — though Thiel has remained silent on Trump’s large expansion of the Obama administration’s drone program and his assassination of Iranian Maj. Gen. Qassim Suleimani. In July, asked about the Israeli use of AI in the ongoing slaughter in Gaza, Thiel responded, “I defer to Israel.”

Thiel’s gravitational pull is felt across the whole of tech’s realignment toward militarism. Vice President-elect JD Vance worked at Mithril, another of Thiel’s investment firms, and used $15 million from his former boss to fund the 2022 Senate win that secured his national political bona fides. Vance would later go on to invest in Anduril. Founders Fund, Thiel’s main venture capital firm, has seeded the tech sector with influential figures friendly to both Trumpism and the Pentagon. Before, an investor or CEO who publicly embraced right-wing ideology and products designed to kill risked becoming an industry pariah. Today, he can be a CNBC guest.

An earlier adopter of MAGA, Thiel was also investing in and creating military- and intelligence-oriented companies before it was cool. He co-founded Palantir, which got its start helping facilitate spy agency and deportation raids by Immigration and Customs Enforcement. Now part of the S&P 500, the company helps target military strikes for Ukraine and in January sealed a “strategic partnership for battle tech” with the Israeli Ministry of Defense, according to a press release.



The ripple effect of Palantir’s success has helped popularize defense tech and solidify its union with the American right. Thiel’s Palantir co-founder Joe Lonsdale, also an Anduril investor, is reportedly helping Trump staff his new administration. Former Palantir employee and Anduril executive chair Trae Stephens joined the Trump transition team in 2016 and has suggested he would serve a second administration. As a member of the U.S.–China Economic and Security Review Commission, Thiel ally Jacob Helberg has been instrumental in whipping up anti-China fervor on Capitol Hill, helping push legislation to ban TikTok, and arguing for military adoption of AI technologies like those sold by his employer, Palantir, which markets itself as a bulwark against Chinese aggression. Although Palantir CEO Alex Karp is a self-described Democrat who said he planned to vote against Trump, he has derided progressivism as a “thin pagan religion” of wokeness, suggested pro-Palestine college protesters leave for North Korea, and continually advocated for an American arms buildup.

“Trump has surrounded himself with ‘techno-optimists’ — people who believe technology is the answer to every problem,” Brianna Rosen, a strategy and policy fellow at the University of Oxford and alumna of the Obama National Security Council, told The Intercept. “Key members of his inner circle — leading tech executives — describe themselves in this way. The risk of techno-optimism in the military domain is that it focuses on how technology saves lives, rather than the real risks associated with military AI, such as the accelerated pace of targeting.”

The worldview of this corner of the tech industry is loud, if not always consistent. Foreign entanglements are bad, but the United States must be on perpetual war-footing against China. China itself is dangerous in part because it’s rapidly weaponizing AI, a current that threatens global stability, so the United States should do the very same, even harder, absent regulatory meddling.

Stephens’s 2022 admonition that “the business of war is the business of deterrence” argues that “peaceful outcomes are only achievable if we maintain our technological advantage in weapons systems” — an argument that overlooks the fact that the U.S. military’s overwhelming technological superiority failed to keep it out of Korea, Vietnam, Iraq, or Afghanistan. In a recent interview with Wired, Stephens criticized the revolving door between the federal government and Anduril competitors like Boeing while also stating that “it’s important that people come out of private industry to work on civil service projects, and I hope at some point I’ll have the opportunity to go back in and serve the government and American people.”

William Fitzgerald, the founder of Worker Agency, a communications and advocacy firm that has helped tech workers organize against military contracts, said this square is easily circled by right-wing tech hawks, whose pitch is centered on the glacial incompetence of the Department of Defense and blue-chip contractors like Lockheed and Raytheon. “Peter Thiel’s whole thing is to privatize the state,” Fitzgerald explained. Despite all of the rhetoric about avoiding foreign entanglements, a high-tech arms race is conducive to different kinds of wars, not fewer of them. “This alignment fits this narrative that we can do cheaper wars,” he said. “We won’t lose the men over there because we’ll have these drones.”

In this view, the opposition of Thiel and his ilk isn’t so much to forever wars, then, but rather whose hardware is being purchased forever.

The new conservative tech establishment seems in full agreement about the need for an era of techno-militarism. Marc Andreessen and Ben Horowitz, the namesakes of one of Silicon Valley’s most storied and successful venture capital firms, poured millions into Trump’s reelection and have pushed hard to reorient the American tech sector toward fighting wars. In a “Techno-Optimist Manifesto” published last October, Andreessen wrote of defense contracting as a moral imperative. “We believe America and her allies should be strong and not weak. We believe national strength of liberal democracies flows from economic strength (financial power), cultural strength (soft power), and military strength (hard power). Economic, cultural, and military strength flow from technological strength.” The firm knows full well what it’s evoking through a naked embrace of strength as society’s greatest virtue: Listed among the “Patron Saints of Techno-Optimism” is Filippo Tommaso Marinetti, co-author of the 1919 Fascist Manifesto.

The venture capitalists’ document offers a clear rebuttal of employees’ moral qualms that pushed Google to ditch Project Maven. The manifesto dismisses basic notions of “ethics,” “safety,” and “social responsibility” as a “demoralization campaign” of “zombie ideas, many derived from Communism” pushed by “the enemy.” This is rhetoric that matches a brand Trump has worked to cultivate: aspirationally hypermasculine, unapologetically jingoistic, and horrified by an America whose potential to dominate the planet is imperiled by meddling foreigners and scolding woke co-workers.

“There’s a lot more volatility in the world, [and] there is more of a revolt against what some would deem ‘woke culture,’” said Michael Dempsey, managing partner at the New York-based venture capital firm Compound. “It’s just more in the zeitgeist now that companies shouldn’t be so heavily influenced by personal politics. Obviously that is the tech industry talking out of both sides of their mouth because we saw in this past election a bunch of people get very political and make donations from their firms.”



Despite skewing young (by national security standards), many in this rightward, pro-military orbit are cultural and religious traditionalists infused with the libertarian preferences of the Zynternet, a wildly popular online content scene that’s melded apolitical internet bro culture and a general aversion to anything considered vaguely “woke.” A recent Vanity Fair profile of the El Segundo tech scene, a hotbed of the burgeoning “military Zyndustrial complex” commonly known as “the Gundo,” described the city as “California’s freedom-loving, Bible-thumping hub of hard tech.” It paints a vivid scene of young engineers who eschewed the progressive dystopia of San Francisco they read about on Twitter and instead flocked to build “nuclear reactors and military weaponry designed to fight China” beneath “an American flag the size of a dumpster” and “a life-size poster of Jesus Christ smiling benevolently onto a bench press below.”

The American right’s hold over online culture in the form of podcasts, streamers, and other youth-friendly media has been central to both retaking Washington and bulldozing post-Maven sentiment, according to William Fitzgerald of Worker Agency. “I gotta hand it to the VCs, they’re really good at comms,” said Fitzgerald, who is himself a former Google employee who helped leak critical information about the company’s involvement in Project Maven. “They’re really making sure that these Gundo bros are wrapping the American flag around them. It’s been fascinating to see them from 2019 to 2024 completely changing the culture among young tech workers.”

A wave of layoffs and firings of employees engaged in anti-military protests have been a boon for defense evangelists, Fitzgerald added. “The workers have been told to shut up, or they get fired.”

This rhetoric has been matched by a massive push by Andreessen Horowitz (already an Anduril investor) behind the fund’s “American Dynamism” portfolio, a collection of companies that leans heavily into new startups hoping to be the next Raytheon. These investments include ABL Space Systems, already contracting with the Air Force; Epirus, which makes microwave directed-energy weapons; and Shield AI, which works on autonomous military drones. Following the election, David Ulevitch, who leads the fund’s American Dynamism team, retweeted a celebratory video montage interspersed with men firing flamethrowers and machine guns, fighter jets, Hulk Hogan, and a fist-pumping Trump in the moments after the assassination attempt.

Even the appearance of more money and interest in defense tech could have a knock-on effect for startup founders hoping to chase what’s trendy. Dempsey said he expects investors and founders to “pattern-match to companies like Anduril and to a lesser extent SpaceX, believing that their outcomes will be the same.” The increased political and cultural friendliness toward weapons startups also coincides with high interest rates and growing interest in hardware companies, Dempsey explained, as software companies have lost their luster following years of growth driven by little more than cheap venture capital.

There’s every reason to believe a Trump-controlled Washington will give the tech industry, increasingly invested in militarized AI, what it wants. In July, the Washington Post reported the Trump-aligned America First Policy Institute was working on a proposal to “Make America First in AI” by undoing regulatory burdens and encouraging military applications. Trump has already indicated he’ll reverse the Biden administration’s executive order on AI safety, which mandated safety testing and risk-based self-reporting by companies. Michael Kratsios, chief technology officer during the first Trump administration and managing director of Air Force contractor Scale AI, is reportedly advising Trump’s transition team on policy matters.

“‘Make America First in AI’ means the United States will move quickly, regardless of the costs, to maintain its competitive edge over China,” Brianna Rosen, the Oxford fellow, explained. “That translates into greater investment and fewer restrictions on military AI. Industry already leads AI development and deployment in the defense and intelligence sectors; that role has now been cemented.”

The mutual embrace of MAGA conservatism and weapons tech seems to already be paying off. After dumping $200 million into the Trump campaign’s terminal phase, Musk was quick to cash in his chips: On Thursday, the New York Times reported that he had petitioned Trump to place SpaceX executives in positions at the Department of Defense before the election had even begun. Musk will also co-lead a nebulous new office dedicated to slashing federal spending. Rep. Matt Gaetz, brother-in-law to Luckey, now stands to be the country’s next attorney general. In a post-election interview with Bloomberg, Luckey shared that he is already advising the Trump transition team and endorses the current candidates for defense secretary. “We did well under Trump, and we did better under Biden,” he said of Anduril. “I think we will do even better now.”

U.S. to call for Google to sell Chrome browser: report


By AFP
November 18, 2024

Photo illustration — © Digital Journal

The U.S. will urge a judge to make Google-parent company Alphabet sell its widely used Chrome browser in a major antitrust crackdown on the internet giant, according to a media report Monday.

Antitrust officials with the US Department of Justice declined to comment on a Bloomberg report that they will ask for a sell-off of Chrome and a shake-up of other aspects of Google’s business in court Wednesday.

Justice officials in October said they would demand that Google make profound changes to how it does business — even considering the possibility of a breakup — after the tech juggernaut was found to be running an illegal monopoly.

The government said in a court filing that it was considering options that included “structural” changes, which could see them asking for a divestment of its smartphone Android operating system or its Chrome browser.

Calling for the breakup of Google would mark a profound change by the US government’s regulators, who have largely left tech giants alone since failing to break up Microsoft two decades ago.

Google dismissed the idea at the time as “radical.”

Adam Kovacevich, chief executive of industry trade group Chamber of Progress, released a statement arguing that what justice officials reportedly want is “fantastical” and defies legal standards, instead calling for narrowly tailored remedies.


Google Chrome is the most popular internet browser in the world, making the internet giant a part of everyday life for people around the globe – Copyright AFP KIMIHIRO HOSHINO

Determining how to address Google’s wrongs is the next stage of a landmark antitrust trial that saw the company in August ruled a monopoly by US District Court Judge Amit Mehta.

Requiring Google to make its search data available to rivals was also on the table.

Regardless of Judge Mehta’s eventual decision, Google is expected to appeal the ruling, potentially prolonging the process for years and possibly reaching the US Supreme Court.

The trial, which concluded last year, scrutinized Google’s confidential agreements with smartphone manufacturers, including Apple.

These deals involve substantial payments to secure Google’s search engine as the default option on browsers, iPhones and other devices.

The judge determined that this arrangement provided Google with unparalleled access to user data, enabling it to develop its search engine into a globally dominant platform.

From this position, Google expanded its tech empire to include the Chrome browser, Maps and the Android smartphone operating system.

According to the judgment, Google controlled 90 percent of the US online search market in 2020, with an even higher share, 95 percent, on mobile devices.

Remedies being sought will include measures curbing Google’s artificial intelligence from tapping into website data and barring the Android mobile operating system from being bundled with the company’s other offerings, according to the report.

 

Leaner large language models could enable efficient local use on phones and laptops


Princeton University, Engineering School





Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server — a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information. This type of leaner LLM could be stored and accessed locally on a device like a phone or laptop and could provide performance nearly as accurate and nuanced as an uncompressed version.

“Any time you can reduce the computational complexity, storage and bandwidth requirements of using AI models, you can enable AI on devices and systems that otherwise couldn’t handle such compute- and memory-intensive tasks,” said study coauthor Andrea Goldsmith, dean of Princeton’s School of Engineering and Applied Science and Arthur LeGrand Doty Professor of Electrical and Computer Engineering.

“When you use ChatGPT, whatever request you give it goes to the back-end servers of OpenAI, which process all of that data, and that is very expensive,” said coauthor Rajarshi Saha, a Stanford Engineering Ph.D. student. “So, you want to be able to do this LLM inference using consumer GPUs [graphics processing units], and the way to do that is by compressing these LLMs.” Saha’s graduate work is coadvised by Goldsmith and coauthor Mert Pilanci, an assistant professor at Stanford Engineering.

The researchers will present their new algorithm CALDERA, which stands for Calibration Aware Low precision DEcomposition with low Rank Adaptation, at the Conference on Neural Information Processing Systems (NeurIPS) in December. Saha and colleagues began this compression research not with LLMs themselves, but with the large collections of information that are used to train LLMs and other complex AI models, such as those used for image classification. This technique, a forerunner to the new LLM compression approach, was published in 2023.

Training data sets and AI models are both composed of matrices, or grids of numbers that are used to store data. In the case of LLMs, these are called weight matrices, which are numerical representations of word patterns learned from large swaths of text.

“We proposed a generic algorithm for compressing large data sets or large matrices,” said Saha. “And then we realized that nowadays, it’s not just the data sets that are large, but the models being deployed are also getting large. So, we could also use our algorithm to compress these models.”

While the team’s algorithm is not the first to compress LLMs, its novelty lies in an innovative combination of two properties, one called “low-precision,” the other “low-rank.” As digital computers store and process information as bits (zeros and ones), “low-precision” representation reduces the number of bits, speeding up storage and processing while improving energy efficiency. On the other hand, “low-rank” refers to reducing redundancies in the LLM weight matrices.

“Using both of these properties together, we are able to get much more compression than either of these techniques can achieve individually,” said Saha.
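CALDERA itself is more sophisticated, but the two ingredients can be sketched in a few lines of NumPy on a toy weight matrix. This is an illustration of the idea, not the paper’s algorithm: it pairs a rank-k SVD truncation with a 4-bit uniform quantization of the residual and measures the reconstruction error.

```python
# Toy sketch of "low-rank" plus "low-precision" compression of a weight matrix.
# Not the CALDERA algorithm itself, just its two ingredients in isolation.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # stand-in for an LLM weight matrix

# Low-rank part: keep only the top-k singular directions.
k = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
L = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Low-precision part: quantize the residual to 4-bit integer codes.
R = W - L
step = (R.max() - R.min()) / (2**4 - 1)
codes = np.round((R - R.min()) / step)   # integers in [0, 15]
R_hat = codes * step + R.min()           # dequantized residual

err = np.linalg.norm(W - (L + R_hat)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.4f}")
```

The published method applies the two ideas jointly rather than one after the other, which is where the extra compression the researchers describe comes from.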

The team tested their technique using Llama 2 and Llama 3, open-source large language models released by Meta AI, and found that their method, which uses low-rank and low-precision components in tandem, improves on methods that use low-precision quantization alone. The improvement can be up to 5%, which is significant for metrics that measure uncertainty in predicting word sequences.

They evaluated the performance of the compressed language models using several sets of benchmark tasks for LLMs. The tasks included determining the logical order of two statements, or answering questions involving physical reasoning, such as how to separate an egg white from a yolk or how to make a cup of tea.

“I think it’s encouraging and a bit surprising that we were able to get such good performance in this compression scheme,” said Goldsmith, who moved to Princeton from Stanford Engineering in 2020. “By taking advantage of the weight matrix rather than just using a generic compression algorithm for the bits that are representing the weight matrix, we were able to do much better.”

Using an LLM compressed in this way could be suitable for situations that don’t require the highest possible precision. Moreover, the ability to fine-tune compressed LLMs on edge devices like a smartphone or laptop enhances privacy by allowing organizations and individuals to adapt models to their specific needs without sharing sensitive data with third-party providers. This reduces the risk of data breaches or unauthorized access to confidential information during the training process. To enable this, the LLMs must initially be compressed enough to fit on consumer-grade GPUs.

Saha also cautioned that running LLMs on a smartphone or laptop could hog the device’s memory for a period of time. “You won’t be happy if you are running an LLM and your phone drains out of charge in an hour,” said Saha. Low-precision computation can help reduce power consumption, he added. “But I wouldn’t say that there’s one single technique that solves all the problems. What we propose in this paper is one technique that is used in combination with techniques proposed in prior works. And I think this combination will enable us to use LLMs on mobile devices more efficiently and get more accurate results.”

The paper, “Compressing Large Language Models using Low Rank and Low Precision Decomposition,” will be presented at the Conference on Neural Information Processing Systems (NeurIPS) in December 2024. In addition to Goldsmith, Saha and Pilanci, coauthors include Stanford Engineering researchers Naomi Sagan and Varun Srivastava. This work was supported in part by the U.S. National Science Foundation, the U.S. Army Research Office, and the Office of Naval Research.

Asking ChatGPT vs Googling: Can AI chatbots boost human creativity?

The Conversation
November 18, 2024

Image via TippaPatt/Shutterstock.

Think back to a time when you needed a quick answer, maybe for a recipe or a DIY project. A few years ago, most people’s first instinct was to “Google it.” Today, however, many people are more likely to reach for ChatGPT, OpenAI’s conversational AI, which is changing the way people look for information.

Rather than simply providing lists of websites, ChatGPT gives more direct, conversational responses. But can ChatGPT do more than just answer straightforward questions? Can it actually help people be more creative?

I study new technologies and consumer interaction with social media. My colleague Byung Lee and I set out to explore this question: Can ChatGPT genuinely assist people in creatively solving problems, and does it perform better at this than traditional search engines like Google?


Across a series of experiments in a study published in the journal Nature Human Behaviour, we found that ChatGPT does boost creativity, especially in everyday, practical tasks. Here’s what we learned about how this technology is changing the way people solve problems, brainstorm ideas and think creatively.

ChatGPT and creative tasks

Imagine you’re searching for a creative gift idea for a teenage niece. Previously, you might have googled “creative gifts for teens” and then browsed articles until something clicked. Now, if you ask ChatGPT, it generates a direct response based on its analysis of patterns across the web. It might suggest a custom DIY project or a unique experience, crafting the idea in real time.

To explore whether ChatGPT surpasses Google in creative thinking tasks, we conducted five experiments where participants tackled various creative tasks. For example, we randomly assigned participants to either use ChatGPT for assistance, use Google search, or generate ideas on their own. Once the ideas were collected, external judges, unaware of the participants’ assigned conditions, rated each idea for creativity. We averaged the judges’ scores to provide an overall creativity rating.

One task involved brainstorming ways to repurpose everyday items, such as turning an old tennis racket and a garden hose into something new. Another asked participants to design an innovative dining table. The goal was to test whether ChatGPT could help people come up with more creative solutions compared with using a web search engine or just their own imagination.


ChatGPT did well with the task of suggesting creative ideas for reusing household items. Simon Ritzmann/DigitalVision via Getty Images

The results were clear: Judges rated ideas generated with ChatGPT’s assistance as more creative than those generated with Google searches or without any assistance. Interestingly, ideas generated with ChatGPT – even without any human modification – scored higher in creativity than those generated with Google.

One notable finding was ChatGPT’s ability to generate incrementally creative ideas: those that improve or build on what already exists. While truly radical ideas might still be challenging for AI, ChatGPT excelled at suggesting practical yet innovative approaches. In the toy-design experiment, for example, participants using ChatGPT came up with imaginative designs, such as turning a leftover fan and a paper bag into a wind-powered craft.

Limits of AI creativity

ChatGPT’s strength lies in its ability to combine unrelated concepts into a cohesive response. Unlike Google, which requires users to sift through links and piece together information, ChatGPT offers an integrated answer that helps users articulate and refine ideas in a polished format. This makes ChatGPT promising as a creativity tool, especially for tasks that connect disparate ideas or generate new concepts.

It’s important to note, however, that ChatGPT doesn’t generate truly novel ideas. It recognizes and combines linguistic patterns from its training data, subsequently generating outputs with the most probable sequences based on its training. If you’re looking for a way to make an existing idea better or adapt it in a new way, ChatGPT can be a helpful resource. For something groundbreaking, though, human ingenuity and imagination are still essential.

Additionally, while ChatGPT can generate creative suggestions, these aren’t always practical or scalable without expert input. Steps such as screening, feasibility checks, fact-checking and market validation require human expertise. Given that ChatGPT’s responses may reflect biases in its training data, people should exercise caution in sensitive contexts such as those involving race or gender.

We also tested whether ChatGPT could assist with tasks often seen as requiring empathy, such as repurposing items cherished by a loved one. Surprisingly, ChatGPT enhanced creativity even in these scenarios, generating ideas that users found relevant and thoughtful. This result challenges the belief that AI cannot assist with emotionally driven tasks.

Future of AI and creativity

As ChatGPT and similar AI tools become more accessible, they open up new possibilities for creative tasks. Whether in the workplace or at home, AI could assist in brainstorming, problem-solving and enhancing creative projects. However, our research also points to the need for caution: While ChatGPT can augment human creativity, it doesn’t replace the unique human capacity for truly radical, out-of-the-box thinking.

This shift from Googling to asking ChatGPT represents more than just a new way to access information. It marks a transformation in how people collaborate with technology to think, create and innovate.

Jaeyeon Chung, Assistant Professor of Business, Rice University


This article is republished from The Conversation under a Creative Commons license. Read the original article.