Tuesday, November 19, 2024

Leaner large language models could enable efficient local use on phones and laptops

Princeton University, Engineering School

Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server — a process that’s expensive, energy-intensive and often slow.

Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs.

The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information. This type of leaner LLM could be stored and accessed locally on a device like a phone or laptop and could provide performance nearly as accurate and nuanced as an uncompressed version.

“Any time you can reduce the computational complexity, storage and bandwidth requirements of using AI models, you can enable AI on devices and systems that otherwise couldn’t handle such compute- and memory-intensive tasks,” said study coauthor Andrea Goldsmith, dean of Princeton’s School of Engineering and Applied Science and Arthur LeGrand Doty Professor of Electrical and Computer Engineering.

“When you use ChatGPT, whatever request you give it goes to the back-end servers of OpenAI, which process all of that data, and that is very expensive,” said coauthor Rajarshi Saha, a Stanford Engineering Ph.D. student. “So, you want to be able to do this LLM inference using consumer GPUs [graphics processing units], and the way to do that is by compressing these LLMs.” Saha’s graduate work is coadvised by Goldsmith and coauthor Mert Pilanci, an assistant professor at Stanford Engineering.

The researchers will present their new algorithm CALDERA, which stands for Calibration Aware Low precision DEcomposition with low Rank Adaptation, at the Conference on Neural Information Processing Systems (NeurIPS) in December. Saha and colleagues began this compression research not with LLMs themselves, but with the large collections of information that are used to train LLMs and other complex AI models, such as those used for image classification. This technique, a forerunner to the new LLM compression approach, was published in 2023.

Training data sets and AI models are both composed of matrices, or grids of numbers that are used to store data. In the case of LLMs, these are called weight matrices, which are numerical representations of word patterns learned from large swaths of text.

“We proposed a generic algorithm for compressing large data sets or large matrices,” said Saha. “And then we realized that nowadays, it’s not just the data sets that are large, but the models being deployed are also getting large. So, we could also use our algorithm to compress these models.”

While the team’s algorithm is not the first to compress LLMs, its novelty lies in an innovative combination of two properties, one called “low-precision,” the other “low-rank.” As digital computers store and process information as bits (zeros and ones), “low-precision” representation reduces the number of bits, speeding up storage and processing while improving energy efficiency. On the other hand, “low-rank” refers to reducing redundancies in the LLM weight matrices.

“Using both of these properties together, we are able to get much more compression than either of these techniques can achieve individually,” said Saha.
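To make the combination concrete, here is a minimal numerical sketch of the general idea, not the authors' CALDERA algorithm itself (which, as the acronym suggests, also uses calibration data and represents the low-rank factors themselves in low precision): quantize a weight matrix to a low-precision backbone, then add a low-rank correction fitted to the quantization residual.

```python
import numpy as np

def quantize(mat, bits=4):
    """Uniform low-precision quantization: snap entries to 2**bits levels."""
    lo, hi = mat.min(), mat.max()
    scale = (hi - lo) / (2**bits - 1)
    return np.round((mat - lo) / scale) * scale + lo

def low_rank(mat, rank=16):
    """Best rank-`rank` approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(mat, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))   # stand-in for one LLM weight matrix

Q = quantize(W, bits=4)               # low-precision backbone
L = low_rank(W - Q, rank=16)          # low-rank fix-up of the residual

rel_err = lambda A: np.linalg.norm(W - A) / np.linalg.norm(W)
print(f"low precision alone:      {rel_err(Q):.4f}")
print(f"low precision + low rank: {rel_err(Q + L):.4f}")   # always <= the above
```

Because a rank-k truncation of the residual can only shrink its error, the combined approximation is never worse than quantization alone, which is the intuition behind pairing the two properties.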

The team tested their technique using Llama 2 and Llama 3, open-source large language models released by Meta AI, and found that their method, which uses low-rank and low-precision components in tandem, improves on approaches that rely on low precision alone. The improvement can be as much as 5%, which is significant for metrics that measure uncertainty in predicting word sequences.
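The uncertainty metric alluded to here is typically perplexity: the exponential of the average negative log-probability a model assigns to each token, where lower is better. A self-contained toy example of the computation (with made-up token probabilities, not numbers from the paper):

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-probability per token (lower is better)."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Probabilities a model assigns to each successive token of a short sentence.
full_model       = [0.30, 0.12, 0.45, 0.28]
compressed_model = [0.28, 0.11, 0.42, 0.26]   # slightly less confident

print(f"full:       {perplexity(full_model):.2f}")
print(f"compressed: {perplexity(compressed_model):.2f}")
```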

They evaluated the performance of the compressed language models using several sets of benchmark tasks for LLMs. The tasks included determining the logical order of two statements and answering questions involving physical reasoning, such as how to separate an egg white from a yolk or how to make a cup of tea.

“I think it’s encouraging and a bit surprising that we were able to get such good performance in this compression scheme,” said Goldsmith, who moved to Princeton from Stanford Engineering in 2020. “By taking advantage of the weight matrix rather than just using a generic compression algorithm for the bits that are representing the weight matrix, we were able to do much better.”

Using an LLM compressed in this way could be suitable for situations that don’t require the highest possible precision. Moreover, the ability to fine-tune compressed LLMs on edge devices like a smartphone or laptop enhances privacy by allowing organizations and individuals to adapt models to their specific needs without sharing sensitive data with third-party providers. This reduces the risk of data breaches or unauthorized access to confidential information during the training process. To enable this, the LLMs must initially be compressed enough to fit on consumer-grade GPUs.

Saha also cautioned that running LLMs on a smartphone or laptop could hog the device’s memory for a period of time. “You won’t be happy if you are running an LLM and your phone drains out of charge in an hour,” said Saha. Low-precision computation can help reduce power consumption, he added. “But I wouldn’t say that there’s one single technique that solves all the problems. What we propose in this paper is one technique that is used in combination with techniques proposed in prior works. And I think this combination will enable us to use LLMs on mobile devices more efficiently and get more accurate results.”

The paper, “Compressing Large Language Models using Low Rank and Low Precision Decomposition,” will be presented at the Conference on Neural Information Processing Systems (NeurIPS) in December 2024. In addition to Goldsmith, Saha and Pilanci, coauthors include Stanford Engineering researchers Naomi Sagan and Varun Srivastava. This work was supported in part by the U.S. National Science Foundation, the U.S. Army Research Office, and the Office of Naval Research.

Asking ChatGPT vs Googling: Can AI chatbots boost human creativity?

The Conversation
November 18, 2024


Think back to a time when you needed a quick answer, maybe for a recipe or a DIY project. A few years ago, most people’s first instinct was to “Google it.” Today, however, many people are more likely to reach for ChatGPT, OpenAI’s conversational AI, which is changing the way people look for information.

Rather than simply providing lists of websites, ChatGPT gives more direct, conversational responses. But can ChatGPT do more than just answer straightforward questions? Can it actually help people be more creative?

I study new technologies and consumer interaction with social media. My colleague Byung Lee and I set out to explore this question: Can ChatGPT genuinely assist people in creatively solving problems, and does it perform better at this than traditional search engines like Google?


Across a series of experiments in a study published in the journal Nature Human Behaviour, we found that ChatGPT does boost creativity, especially in everyday, practical tasks. Here’s what we learned about how this technology is changing the way people solve problems, brainstorm ideas and think creatively.

ChatGPT and creative tasks

Imagine you’re searching for a creative gift idea for a teenage niece. Previously, you might have googled “creative gifts for teens” and then browsed articles until something clicked. Now, if you ask ChatGPT, it generates a direct response based on its analysis of patterns across the web. It might suggest a custom DIY project or a unique experience, crafting the idea in real time.

To explore whether ChatGPT surpasses Google in creative thinking tasks, we conducted five experiments where participants tackled various creative tasks. For example, we randomly assigned participants to either use ChatGPT for assistance, use Google search, or generate ideas on their own. Once the ideas were collected, external judges, unaware of the participants’ assigned conditions, rated each idea for creativity. We averaged the judges’ scores to provide an overall creativity rating.
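As a toy illustration of that scoring procedure (all conditions and ratings below are hypothetical, not the study's data), the averaging looks like this:

```python
import statistics

# Each idea gets a 1-7 creativity rating from three judges blind to condition.
ratings = {
    "ChatGPT": [[6, 5, 6], [5, 5, 4], [6, 6, 5]],
    "Google":  [[4, 4, 5], [3, 4, 4], [5, 4, 4]],
    "Unaided": [[4, 3, 4], [4, 4, 3], [3, 3, 4]],
}

for condition, ideas in ratings.items():
    idea_means = [statistics.mean(scores) for scores in ideas]   # per-idea average
    print(f"{condition}: {statistics.mean(idea_means):.2f}")     # condition average
```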

One task involved brainstorming ways to repurpose everyday items, such as turning an old tennis racket and a garden hose into something new. Another asked participants to design an innovative dining table. The goal was to test whether ChatGPT could help people come up with more creative solutions compared with using a web search engine or just their own imagination.



The results were clear: Judges rated ideas generated with ChatGPT’s assistance as more creative than those generated with Google searches or without any assistance. Interestingly, ideas generated with ChatGPT – even without any human modification – scored higher in creativity than those generated with Google.

One notable finding was ChatGPT’s ability to generate incrementally creative ideas: those that improve or build on what already exists. While truly radical ideas might still be challenging for AI, ChatGPT excelled at suggesting practical yet innovative approaches. In the toy-design experiment, for example, participants using ChatGPT came up with imaginative designs, such as turning a leftover fan and a paper bag into a wind-powered craft.

Limits of AI creativity

ChatGPT’s strength lies in its ability to combine unrelated concepts into a cohesive response. Unlike Google, which requires users to sift through links and piece together information, ChatGPT offers an integrated answer that helps users articulate and refine ideas in a polished format. This makes ChatGPT promising as a creativity tool, especially for tasks that connect disparate ideas or generate new concepts.

It’s important to note, however, that ChatGPT doesn’t generate truly novel ideas. It recognizes and combines linguistic patterns from its training data, subsequently generating outputs with the most probable sequences based on its training. If you’re looking for a way to make an existing idea better or adapt it in a new way, ChatGPT can be a helpful resource. For something groundbreaking, though, human ingenuity and imagination are still essential.

Additionally, while ChatGPT can generate creative suggestions, these aren’t always practical or scalable without expert input. Steps such as screening, feasibility checks, fact-checking and market validation require human expertise. Given that ChatGPT’s responses may reflect biases in its training data, people should exercise caution in sensitive contexts such as those involving race or gender.

We also tested whether ChatGPT could assist with tasks often seen as requiring empathy, such as repurposing items cherished by a loved one. Surprisingly, ChatGPT enhanced creativity even in these scenarios, generating ideas that users found relevant and thoughtful. This result challenges the belief that AI cannot assist with emotionally driven tasks.

Future of AI and creativity

As ChatGPT and similar AI tools become more accessible, they open up new possibilities for creative tasks. Whether in the workplace or at home, AI could assist in brainstorming, problem-solving and enhancing creative projects. However, our research also points to the need for caution: While ChatGPT can augment human creativity, it doesn’t replace the unique human capacity for truly radical, out-of-the-box thinking.

This shift from Googling to asking ChatGPT represents more than just a new way to access information. It marks a transformation in how people collaborate with technology to think, create and innovate.

Jaeyeon Chung, Assistant Professor of Business, Rice University


This article is republished from The Conversation under a Creative Commons license. Read the original article.
Is AI’s meteoric rise beginning to slow?

By AFP
November 18, 2024

Glenn CHAPMAN with Alex PIGMAN in Washington

A quietly growing belief in Silicon Valley could have immense implications: the breakthroughs from large AI models, the ones expected to bring human-level artificial intelligence in the near future, may be slowing down.

Since the frenzied launch of ChatGPT two years ago, AI believers have maintained that improvements in generative AI would accelerate exponentially as tech giants kept adding fuel to the fire in the form of data for training and computing muscle.

The reasoning was that delivering on the technology’s promise was simply a matter of resources –- pour in enough computing power and data, and artificial general intelligence (AGI) would emerge, capable of matching or exceeding human-level performance.

Progress was advancing at such a rapid pace that leading industry figures, including Elon Musk, called for a moratorium on AI research.

Yet the major tech companies, including Musk’s own, pressed forward, spending tens of billions of dollars to avoid falling behind.

OpenAI, ChatGPT’s Microsoft-backed creator, recently raised $6.6 billion to fund further advances.

xAI, Musk’s AI company, is in the process of raising $6 billion, according to CNBC, to buy 100,000 Nvidia chips, the cutting-edge electronic components that power the big models.

However, there appear to be problems on the road to AGI.

Industry insiders are beginning to acknowledge that large language models (LLMs) aren’t scaling endlessly higher at breakneck speed when pumped with more power and data.

Despite the massive investments, performance improvements are showing signs of plateauing.

“Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence,” said AI expert and frequent critic Gary Marcus. “As I have always warned, that’s just a fantasy.”

– ‘No wall’ –

One fundamental challenge is the finite amount of language-based data available for AI training.

According to Scott Stevenson, CEO of Spellbook, a firm that applies AI to legal tasks and works with OpenAI and other providers, relying on language data alone for scaling is destined to hit a wall.

“Some of the labs out there were way too focused on just feeding in more language, thinking it’s just going to keep getting smarter,” Stevenson explained.

Sasha Luccioni, researcher and AI lead at startup Hugging Face, argues a stall in progress was predictable given companies’ focus on size rather than purpose in model development.

“The pursuit of AGI has always been unrealistic, and the ‘bigger is better’ approach to AI was bound to hit a limit eventually — and I think this is what we’re seeing here,” she told AFP.

The AI industry contests these interpretations, maintaining that progress toward human-level AI is unpredictable.

“There is no wall,” OpenAI CEO Sam Altman posted Thursday on X, without elaboration.

Anthropic’s CEO Dario Amodei, whose company develops the Claude chatbot in partnership with Amazon, remains bullish: “If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.”

– Time to think –

Nevertheless, OpenAI has delayed the release of the awaited successor to GPT-4, the model that powers ChatGPT, because its increase in capability is below expectations, according to sources quoted by The Information.

Now, the company is focusing on using its existing capabilities more efficiently.

This shift in strategy is reflected in their recent o1 model, designed to provide more accurate answers through improved reasoning rather than increased training data.

Stevenson said an OpenAI shift to teaching its model to “spend more time thinking rather than responding” has led to “radical improvements”.

He likened the AI advent to the discovery of fire. Rather than tossing on more fuel in the form of data and computer power, it is time to harness the breakthrough for specific tasks.

Stanford University professor Walter De Brouwer likens advanced LLMs to students transitioning from high school to university: “The AI baby was a chatbot which did a lot of improv” and was prone to mistakes, he noted.

“The homo sapiens approach of thinking before leaping is coming,” he added.

How pirates steer corporate innovation: Lessons from the front lines

Calgary Innovation Week runs from Nov. 13-21, 2024.


By Chris Hogg
DIGITAL JOURNAL
November 19, 2024


If you ask Tina Mathas who should lead transformative innovation projects, she’ll tell you it’s all about the pirates.

“It requires a different type of mindset, a different type of ecosystem and environment, and it should be protected,” says Mathas, co-founder of Flow Factory, a company that aims to enhance human performance by integrating the concept of “flow state” with artificial intelligence.

For transformative innovation, she argues, big companies need pirates — not quite drunken Jack Sparrow adventurers, but individuals who challenge traditional processes and navigate uncharted waters of creativity and risk.

Mathas’s declaration set the tone for a lively virtual panel on corporate innovation at Calgary Innovation Week. The discussion brought together industry leaders to dissect how innovation can thrive in corporate environments often resistant to change.

The challenges, they agreed, are substantial, but the potential rewards for organizations that get it right are transformative.

Making the case for pirates

“Transformative innovation requires pirates,” Mathas said. “It’s not just about solving today’s problems — it’s about being bold and taking risks on where we think the industry is going.”

Mathas described her experience at ATB Financial, where her team was tasked with “breaking the bank.”

Operating with a $50,000 budget, they delivered a market-ready banking platform in just five weeks.

“We had no banking experience,” she said, “and people didn’t understand how we got that done. We had board support, and we had executive support. In other words, we reported directly into the executive and we were separate from the main organization. We were the pirates.”

This freedom is crucial, Mathas said, because transformative innovation rarely succeeds when confined by a corporation’s standard processes.

“According to an Accenture study, 82% of organizations run innovation in exactly the same way as any other regular project. Plus it takes about 18 months, and you’re still facing a 90% failure rate,” she said, telling the audience that this was the reason she left the corporate world.


Innovation begins with people and alignment with business goals

Jeff Hiscock of Pembina Pipelines shifted the focus to the human element of innovation, emphasizing the challenges of workforce turnover and retention. He advised building environments that retain experienced talent while also attracting new entrants to the workforce.

“Thirty-five per cent of the energy workforce will turn over by 2035,” Hiscock said, referencing data from a provincial study. “A lot of that is through retirement. How do you create a workplace where those people want to stay in the roles longer?”

By focusing on creating workplaces that are innovative, engaging and adaptable, organizations can address this looming talent gap while driving forward their innovation goals.

Hiscock described innovation as a necessity, not a luxury, particularly in industries like energy.

“Innovation is about solving real problems that impact your business’s core value streams,” he said.

Pembina, for instance, focuses 70% of its innovation efforts on projects with direct EBITDA impacts, ensuring alignment with organizational goals.

However, Hiscock cautioned that innovation efforts often stall because of cultural resistance.

“What’s obvious to you is not obvious to everyone else,” he said. “It’s like playing a 4D chess game that only you can see. That’s a bad place to be.”

His solution? Securing buy-in from every level of the organization, not just senior executives.


From dollars to disruption

“Innovation isn’t about dollars, but it kind of is,” said Shannon Phillips, co-founder of Unbounded Thinking. Phillips’ work focuses on helping organizations, particularly small and medium-sized enterprises, implement effective innovation management systems.

He explained that many companies struggle to balance innovation’s creative potential with the financial realities of running a business.

“If we keep talking about this vague concept of innovation that is just about something new and breakthrough, we’ll never get the respect that we need. We really need to start looking at how we measure it to make it part of our DNA, and to make it a revenue stream in itself.”

Phillips outlined a structured approach to categorizing innovation: core (incremental improvements), adjacent (new markets or products), and breakthrough (disruptive technologies).

He emphasized focusing on core innovation first, as it carries the least risk, while building maturity and trust over time to approach higher-risk, breakthrough projects effectively. This holistic, balanced approach helps companies mitigate risks and align innovation with their capabilities and goals.

“For smaller companies, it’s not a buzzword — it’s about survival,” he said. “They need proof that innovation will help them grow and keep their doors open.”


Partnerships that deliver

Lee Evans, head of low-carbon activities at TC Energy, discussed how partnerships can drive innovation in meaningful ways.

“We think about win-wins,” Evans said. “How do we find ways to work with others to support each other?”

As an example, TC Energy recently invested in and partnered with Qube Technologies, a Calgary-based emissions monitoring company, to address its decarbonization goals.

Evans highlighted the importance of starting small with innovation initiatives.

“Minimum viable products are really important,” he said. “You test, you learn and then you scale.” This approach minimizes risk while building trust in the process.

Evans also stressed the need for resilience and adaptability.

“If you want to be working in this space, you’ve got to be resilient. You’ve got to be willing to face challenges and setbacks and be willing to pivot. Those are really important. And never give up if you think there’s true value in what you’re up to. Find ways to make sure people understand the value of what you’re doing.”

The role of government and academia in innovation

Panelists also weighed in on how external forces, like government policies and academic research, shape innovation.

Mathas argued that governments should incentivize competition to stimulate corporate innovation. “We need more competition coming into Canada and into Alberta to create more of that incentive to want to compete and to want to innovate.”

On the academic front, Mathas cautioned universities about their efforts to turn researchers into entrepreneurs. Universities should focus on supporting research, she said, rather than pushing students to commercialize their ideas, which can drain investment from the research that drives real innovation.

Key takeaways for corporate innovators

The panel left attendees with practical advice for navigating the complexities of corporate innovation:
Start small, think big: “Innovate like a startup, scale like an enterprise,” said Mathas.
Embrace failure: “Failures are just learning in disguise,” she added.
Focus on core problems: Hiscock advised innovators to align their projects with a company’s key value streams.
Measure impact: “We need to make innovation part of the DNA,” said Phillips.
Be resilient: “Understand the value of what you’re doing and keep going,” said Evans.

As the panel concluded, one message was clear: the future belongs to those bold enough to embrace risk, empower people and innovate with purpose.
Whose data is it anyway? Indigenous voices call for accountability and data sovereignty in AI

Calgary Innovation Week runs from Nov. 13-21, 2024.


By Abigail Gamble
November 18, 2024
DIGITAL JOURNAL

Derek Eiteneier, Hayden Godfrey, Natasha Rabsatt, and Renard Jenkins (left to right) spoke at an Indigitech Destiny Innovation Summit panel on Monday, Nov. 18. — Photo by Jennifer Friesen, Digital Journal

“Any usage of data that does not support data sovereignty, that does not support our economic reconciliation, does not support our interests, constitutes data genocide,” said Hayden Godfrey, director of Indigenous relations at Eighth Cortex.

Godfrey was on stage at Platform Calgary Innovation Centre this morning for the Indigitech Destiny Innovation Summit talking about data sovereignty and how to ethically integrate Indigenous knowledge, language and cultural data into artificial intelligence (AI) systems.

He was joined at the Calgary Innovation Week event by fellow panelists Natasha Rabsatt, co-founder of If These Lands Could Talk, Renard Jenkins, president of I2A2 Technologies, Studios & Labs, and Derek Eiteneier, CEO of Outdoor Gala.

Together, they explored protecting Indigenous cultures, and what steps are necessary to build AI systems that foster inclusion rather than exploitation.

For Jenkins, AI tech itself isn’t what presents the greatest challenge to inclusion, but rather the people and power structures behind it.

“We should not be so concerned about the technology [of AI], but we should be concerned about who’s wielding the technology and who’s controlling the technology, and where that center of power comes in, with the technology,” he said.
Here’s what data sovereignty means and why it matters so much

Data sovereignty ensures that data remains under the control of the communities it represents, a concept Jenkins sees as fundamental to ethical use of AI.

“One of the key things that we have to pay attention to is what data is being used for the foundational model of whichever AI system that you’re using,” he explained.

“That’s where we have the biggest opportunity right now to make sure that our foundational models look like the world that we live in — instead of looking like sometimes the individuals or the groups that actually build the models.”

Adding to the discussion, Eiteneier noted that often with AI tools “there’s bias in the overall data, or it’s missing data altogether.”

This incomplete picture can lead to misinformation or skewed representations — especially of minority or marginalized communities — if not carefully addressed.

“When we’re looking at Native and Indigenous communities, I think there is a lot of apprehension around how these technologies can actually be used, and be representative of cultures that were not historically [well] represented,” Rabsatt noted as well.

But she also emphasized the necessary balance of protecting cultural sovereignty, while embracing AI’s potential.

“I think if we see AI as a tool to augment our intelligence, and to automate, I think we can do something positive with that — with mindfulness, of course — and working together with other people and communities that have the same values.”


What the path forward for data sovereignty protection could (should?) look like

When it comes to establishing data sovereignty policies, Rabsatt highlighted the importance of determining what information should remain private and what should be shared.

“Especially with culturally sensitive information, it’s about asking: ‘What is it we don’t want in there? What shouldn’t be open source?’ Then we decide what information we do want to input, ensuring that it creates economic advantages for our community,” she said.

Jenkins emphasized the need for global collaboration in building ethical AI systems, warning against centralized power in the hands of, say, a few large companies.

“There are literally about seven or eight large language models that the majority of the artificial intelligence tools … are actually built upon. At this time, we do not have access to how those models were built, whose data was used for those models,” he explained.

That said, he also brought up some of the challenges of figuring out how to gain access to — and establish remuneration models for — culturally specific AI data.


“If we go into a regulated state where all of a sudden, individuals are forced to have to reveal what’s in their models, they’re forced to actually compensate the individuals whose IP has been utilized, we may see a lot of these models actually implode, because the cost will be much higher than what they could actually sustain.”

From a regulatory perspective, Godfrey proposed the creation of a binding code of ethics to ensure that AI developers respect Indigenous sovereignty and approach their work with transparency and accountability.

“I would like to see the development of a code of ethics that tech professionals need to abide by, not optionally, but have a mandate to abide by in interacting with this data,” he said.

“We need to ensure that technology is aligned with Indigenous values, that it serves as a tool for justice and reconciliation rather than exploitation. And that starts with respecting sovereignty, one ethical choice at a time.”

Establishing Workers’ Right Not to Hear Bosses’ Propaganda


The NLRB rules that management may not compel employees to attend its anti-union meetings.



by Harold Meyerson
November 18, 2024

Union members await a speech by President Biden in Philadelphia, November 1, 2024. — Bastiaan Slabbers/Sipa USA via AP Images

One of the many ways that employers intimidate workers from joining unions is via the captive-audience meeting, in which those workers are subjected to their boss’s arguments against their unionizing. Employers require their workers to attend these meetings; not attending may be, depending on the boss’s mood, grounds for being penalized, demoted, or even discharged.

Last week, the National Labor Relations Board ruled that such meetings violate the National Labor Relations Act, which was designed to give workers a free choice in deciding whether they wished to join a union. By requiring workers’ attendance at such meetings, the Board ruled, those workers’ choice became less free.

Over the past four decades, captive-audience meetings have become standard management practice when workers seek to join a union. They are a prominent feature in the Union Busting 101 courses that anti-union attorneys and consultants provide to their business clients, both big and small. Union organizers have no tool in their arsenal that can match it: Not only can they not compel workers to do anything, but they’re also forbidden from organizing within the worksite. This asymmetry, the NLRB ruled, runs counter to the letter and spirit of the NLRA.



Captive-audience meetings have also come under attack on a different front. During the past two years, ten states have outlawed them. The first was Minnesota, where the ban was enacted in early 2023, shortly after Democrats gained control of both houses of the legislature and Gov. Tim Walz signed it into law. Eight other blue trifecta states quickly followed: California, Connecticut, Hawaii, Illinois, Maine, New York, Oregon, and Washington. And a ban on those meetings was part of an omnibus pro-worker ballot measure (which also included a hike in the minimum wage and the establishment of paid sick leave) that Alaska voters enacted.

Those states, of course, didn’t base their decisions on the NLRA, which is federal law, but more simply on trying to level the playing field between employers and employees. One argument for banning the meetings that I’ve heard from some labor lawyers is that compelling employees to attend those meetings violates workers’ freedom from speech, which is something like the B-side of the First Amendment. As management lawyers take these state laws to court, they’re sure to argue that the NLRA preempts the rights of the states to address this issue. That’s one more reason why last week’s NLRB decision is so important.

That said, the NLRB is the federal agency most subject to reversing its own rulings, depending on who the president is. The Board consists of three members nominated by the president and two by the opposition party, and while members’ terms are staggered, eventually the appointees of the new president outnumber the appointees of the old. That’s why, for instance, a ruling from the Obama-era Board that said that graduate students working at private universities as teaching and research assistants were employees and thus eligible for union membership and collective bargaining was struck down by the Trump-era Board and then reinstated by the Biden Board. In all likelihood, it will be struck down again once the Board is dominated by Trump’s appointees.


What makes this reversal unlike the previous ones is that it’s only in the past couple of years, since the Biden Board gave those grad students the green light, that many thousands of them have organized most of America’s leading private universities. (The NLRA only covers private-sector employees; the 48,000 University of California grad student/employees, for instance, have unionized and won contracts because California state law permits public employees to collectively bargain.) Should the newly unionized private universities (Harvard, Yale, MIT, Caltech, etc.) revoke their grad students’ union recognition and rescind their contracts, it shouldn’t come as a surprise if those students strike and the nation’s foremost universities come to a screeching halt.

How soon could Trump appointees dominate the Board? That may be up to the current lame-duck session of the Senate, which since June has had before it the renomination of Lauren McFerran, who under Biden has been the Board chair. If she is confirmed again, she would be part of the three-member pro-worker Board majority. Until their own majority expires at year’s end, Senate Democrats have pledged to keep ratifying the judicial nominees Biden has put before them, and have come under understandable pressure from labor to confirm McFerran as well. If they do, the next expiration of a pro-worker member’s term won’t come until 2026.

Some Democrats reportedly fear that Trump will simply fire all three Democrats if the Senate reconfirms McFerran, a move that is legally contestable (not that that would deter Trump). Then again, not confirming McFerran would also quickly give the Board over to Trump.

Under McFerran, and with the strong prodding of NLRB General Counsel Jennifer Abruzzo, the Biden Board has been the most committed to ensuring workers’ rights since Franklin Roosevelt’s. It has limited employers’ ability to delay union recognition elections, increased the payments employers must make to employees they illegally fired to deter organizing campaigns, and mandated more significant remedies (including compelling employers to enter collective bargaining with their workers’ union) when employers violate the NLRA in seeking to deter unionization. Eliminating captive-audience meetings is a capstone of sorts to the Board’s campaign to restore to American workers the rights they once enjoyed, before Republicans in the White House, Congress, statehouses, and the courts concluded that worker power was an inherent threat to capital and their campaign contributors. Even if those Republicans now taking power sweep away the Biden Board’s rulings, though, those rulings would be the starting point for the next iteration of worker rights when the Democrats, as they surely will, return to power themselves.


Harold Meyerson is editor at large of The American Prospect.
Quirks of Right-Wing Populism

Far-right populists do share some things with the left. But boss rule Trumps them all.



Looking for some dim silver linings, some progressives have made the accurate observation that some right-wing populists have criticisms of capitalism that mirror the left’s. They may be, if not useful idiots, occasional allies.

For instance, Robert F. Kennedy Jr. has criticized Big Food and Big Pharma. If he can just drop some of his more lunatic views, as secretary of HHS he might shine a useful spotlight and revise some bad industry practices.

Dr. Deborah Birx, COVID response coordinator during Trump’s first term, said Sunday she expects that Kennedy’s nomination will lead to illuminating discussions about public health. Speaking on CBS’s Face the Nation, she said, "I’m actually excited that in a Senate hearing he would bring forward his data and the questions that come from the senators would bring forth their data."

CBS showed a clip of Kennedy saying, "I’m just going to tell the cereal companies to take all the dyes out of their food. I’ll get processed food out of school lunch immediately. Ten percent of food stamps go to sugar drinks, you know, sodas. We’re creating a diabetes problem, and we’re giving our kids food that’s poison, and I’m going to stop that."

Birx is actually a serious person. She served as Obama’s global AIDS coordinator. Could she be onto something?

The populist right also has mixed feelings about tech billionaire monopolies (Elon Musk and Peter Thiel excepted) because of their fundraising for Democrats and their socially liberal views. Our friend Matt Stoller published a startling item on the admiration of Matt Gaetz for FTC Chair Lina Khan, charmingly describing Gaetz as a "Khanservative."

As Stoller wrote, "Gaetz proudly calls himself a Lina Khan fan, and filed a brief with the conservative Fifth Circuit asserting that the Federal Trade Commission has the authority to ban non-compete agreements, and personally hosted her as a guest on a show on Newsmax to discuss how to get rid of ‘creepy’ commercial surveillance. He has praised the Biden Antitrust Division’s Jonathan Kanter’s work on Google."

Stoller also quoted Gaetz: "It is my belief that the number one threat to our liberty is big government. It is also my belief that the number two big threat to our liberty is big business, when big business is able to use the apparatus of government to wrap around its objectives."

This sounds hopeful, but Gaetz may well not get confirmed as attorney general. And if he does, he still has to answer to Trump, who could easily find antitrust officials with his own highly selective views of which monopoly abuses to go after and which to give a pass.

And while Kennedy does have some views that are critical of Big Food and Big Pharma, consider what happened on the food front over the weekend.

In the past, RFK Jr. has been highly critical of Trump’s diet. "The stuff that he eats is really, like, bad," he said. "Campaign food is always bad, but the food that goes onto that airplane is, like, just poison. You have a choice between—you don’t have the choice, you’re either given KFC or Big Macs. That’s, like, when you’re lucky, and then the rest of the stuff I consider kind of inedible."

Well, on Sunday, all the talk shows showed images of Trump forcing Kennedy to choke down a burger, fries, and a Coke.

The deeper problem with far-right populism is that the boss is the boss of bosses. Because far-fringe appointees like Gaetz and Kennedy, if confirmed, will be entirely creatures of Trump’s whims, they will do what he says.

In a poem celebrating Olaf, a brave conscientious objector who was brutalized by his captors, E.E. Cummings has his hero declare, "There is some shit I will not eat." That evidently does not describe RFK Jr.


~ ROBERT KUTTNER
The American Prospect.


 

Scientist proposes deducing commonality from complexity to resolve global challenges



Chinese Academy of Sciences Headquarters





Two topics are now drawing great attention from the global scientific community: shifting or advancing paradigms in science, and tackling global challenges such as the UN Sustainable Development Goals, climate change, and human health. However, do these two topics share fundamental and interrelated mechanisms? Are there laws common to complex systems in science, engineering, and society?

These questions have puzzled scientists for decades as they try to address complexities fundamentally.

In his recent Perspective published in the Proceedings of the Royal Society A on Nov. 13, Professor LI Jinghai from the Institute of Process Engineering of the Chinese Academy of Sciences summarized his four-decade study of these questions and outlined future directions for his work.

Inspired by the physical phenomenon of the coexistence and interaction between a gas-rich dilute phase and a solid-rich dense phase, with different physical mechanisms dominating the behavior for each phase in gas–solid fluidization, LI and his colleagues recognized an underlying compromise-in-competition (CIC) mechanism that exists between the two physically dominant mechanisms.

The CIC principle was then extended to formulate and understand other complex systems in many apparently disparate fields. It later evolved into the concept of Mesoscience, which can be used to explore the common principles at mesoscales of different levels of complexity.

“Through the concept of Mesoscience, we aim to explore the possible commonality among complexity in different fields,” said Professor LI, also a member of the Chinese Academy of Sciences.

In his Perspective, LI proposed future directions for identifying, developing, and applying the concept of Mesoscience, including extending its possible generality and applications to many different disciplines and fields such as analyzing challenging global issues and promoting CIC-informed AI.

In this way, many challenging problems in engineering—identified through their underlying mesoscale complexity—will be closely correlated with the development of future basic science, according to Professor LI.

The deduction of commonality from diversity will help to resolve global challenges, shift paradigms in science, and fill in gaps at the mesoscales of different levels of knowledge. However, diversity also produces difficulties and uncertainty.

“We believe that the concept of Mesoscience deserves global attention, particularly at a time when tackling challenges facing all humankind involves some common knowledge gaps most likely at mesoscales,” said Professor LI.

 

Palliative Care needs to be transformed world-wide, say specialists at Qatar Foundation’s WISH 2024


Image: Dr Rachel Clarke speaking on palliative care at WISH. Credit: World Innovation Summit for Health




18 November 2024. Doha, Qatar —The World Innovation Summit for Health (WISH) – Qatar Foundation’s global healthcare initiative – featured a much-anticipated panel discussion on addressing the urgent need to implement palliative care based on the report, ‘Palliative Care: How can we respond to ten years of limited progress’, released alongside the summit.

The report’s main author, Professor Richard Harding, Director of the C. Saunders Institute of Palliative Care, said:

“Everybody in this room shares the same challenge: we have patients and family members who face progressive illnesses, who are living with unnecessary physical and psychological suffering. We have a challenge at the health system level…. And we know that there's a cost, with money being spent on non-beneficial treatments. In high-income countries, the one percent of people who die annually consume 10% of the health budget. So, we've got a major problem. The great thing is, we've got the solution. It's palliative care.”

Professor Richard Harding was joined on the panel by Dr. Rachel Clarke, award-winning author and palliative care doctor; Dr. Tala Al Taji, QF Alumni and Palliative care fellow at the University of Rochester; Dr. Asmus Hammerich, the Director for NCD and Mental Health for the Eastern Mediterranean Regional Office of the World Health Organization; Professor Dame Louise Robinson, Academic, General Practitioner (GP), and Professor of Primary Care and Ageing; and Dr. Emmanuel Luyirika, the Executive Director of African Palliative Care Association.

The panel agreed that if we are to tackle this global challenge, we need to provide more community-centered care and reframe our understanding of the term, emphasizing that it is a concept that includes -- but is not limited to -- end-of-life care.

As populations age and chronic conditions become more common, particularly in transitioning countries, the demand for effective care that alleviates suffering and enhances the quality of life—especially for those nearing the end of life—grows. The need for palliative care is expected to double by 2060.

The panel discussed in detail innovative and cost-effective recommendations to transform essential palliative care in the years to come and identified several key areas to focus on, including: empowering local communities with materials in locally spoken languages, supporting and enabling primary care doctors, increasing appropriate education and training at all levels, enabling research, advocating for better policies in this neglected healthcare area, and increasing literacy and reducing stigma in the general population.

Dr. Tala Al Taji emphasised the importance of understanding the religious and cultural context of patients: “I have found that culture and religion can shape somebody's willingness to go through treatment or desire to go through treatment. It can shape the decisions that they make at different stages of their illness, and it can even shape their decisions at the end of their lives. I think one of the core principles of palliative care is to preserve patient autonomy, and we cannot preserve patient autonomy without truly understanding the value system of a patient.”

Several of the panelists, including Dr. Clarke, spoke of their work in palliative care as a privilege.  Dr. Clarke said: “I strongly believe and see every day at work that there is always potential for goodness and beauty and love and compassion and meaning right up until the end of life, because dying is like living, it is a lived experience, and something we go through with people we love around us, if we're lucky, and that means there's always the potential for us as doctors and healthcare workers to make a difference.”

This year, WISH was opened in the presence of Her Highness Sheikha Moza bint Nasser, Chairperson of Qatar Foundation and founder of WISH. The opening ceremony, held at Qatar National Convention Centre in Doha, included speeches from Her Excellency Dr. Hanan Mohamed Al Kuwari, Qatar’s former Minister of Public Health; Lord Darzi of Denham, Executive Chair of WISH; and Christos Christou, President of Médecins Sans Frontières. 

The theme of WISH 2024 was ‘Humanizing Health: Conflict, Equity and Resilience’. It aimed to highlight the need for innovation in health to support everyone, leaving nobody behind and building resilience, especially among vulnerable societies and in areas of armed conflict.

Ahead of the summit, WISH entered into a strategic partnership with the World Health Organization (WHO), collaborating on the development of a series of evidence-based reports and policy papers, as well as working with the United Nations’ health agency to develop a post-summit implementation strategy. 

The summit featured more than 200 experts in health speaking about evidence-based ideas and practices in healthcare innovation to address the world’s most urgent global health challenges.


 

Be humble: Pitt studies reveal how to increase perceived trustworthiness of scientists



University of Pittsburgh





How can scientists across climate science, medical and psychological topics foster the public’s trust in them and their science? Show that they are intellectually humble.

Those are some of the findings of two intellectually humble University of Pittsburgh scientists and their co-authors, using five separate studies totaling 2,034 participants in research published Nov. 18 in Nature Human Behaviour.

“Research has shown that having intellectual humility — which is an awareness that one’s knowledge or beliefs might be incomplete or wrong — is associated with engaging in more effortful and less biased information processing,” said Jonah Koetke, the principal author and a graduate student under co-author Karina Schumann, associate professor of psychology. “In this work, we wanted to flip the perspective and examine whether members of the public believe that scientists who are intellectually humble also produce more rigorous and trustworthy research.  

“Because it is so critical to the scientific process — for example, being aware of the limits of our knowledge, communicating the limitations of results, being willing to update beliefs — members of the public might be more likely to trust scientists who exhibit intellectual humility.”

The article, also co-authored by Shauna Bowes of Vanderbilt University and Nina Vaupotič of the University of Vienna, cited statistics showing that the share of Americans reporting a great deal of confidence in scientists decreased by 10 percentage points from 2020 to 2021 (the last year measured at the time of writing), to 29% overall. For hot-button topics, confidence dips even lower — as evidenced by differing public perceptions amid the pandemic over lockdowns, social distancing, vaccines and more — despite the presence of evidence-based science affirming their effectiveness.

“These are anxiety-provoking times for people, and they feel uncertain about who to trust and which recommendations to follow,” Schumann said. “We wanted to know what can help people feel more confident putting their faith in scientists working to find solutions to some of the complex global challenges we are facing.”

This dilemma stood at the heart of their study: What are the factors “that legitimately promote or hinder trust” in science and scientists? The researchers measured perceived trustworthiness as having the qualities of expertise, benevolence (seeing scientists as people who pursue wellbeing for all), and integrity. They also measured how much people trusted the scientists’ research by asking about their willingness to learn more about the research and follow the scientists’ recommendations.

The researchers theorized that intellectual humility would be a key characteristic of scientists that guides how members of the public perceive them.

“When scientists fail to behave in ways that reflect intellectual humility, it might be especially detrimental and jarring, as it goes against both the fundamental norms of science and people’s expectations for how a responsible scientist should act,” the co-authors reasoned.

So they set out to research whether perceptions of scientists’ intellectual humility would influence people’s trust in scientists and their research.

Study 1: They asked 298 online participants from across the U.S. to think of scientists and rate them on their perceived intellectual humility. Participants also offered ratings of the perceived trustworthiness of scientists and of their belief in polarizing science topics such as climate change, vaccinations and genetically modified foods. In the end, the study showed correlational evidence that the more participants believed scientists were intellectually humble, the more they trusted scientists and believed in evidence-based science.

Study 2:  To better isolate the effects of intellectual humility on trust, they next tested their hypothesis by assigning 317 participants to read one of three “articles” about an ostensible scientist identified as a woman researching new treatments for long COVID-19 symptoms. The three “articles” described the scientist in ways that conveyed either low intellectual humility or high intellectual humility, or did not discuss characteristics related to intellectual humility (control condition). They found large effects on trust in the predicted direction, with participants reporting lower trust toward the scientist described as having low intellectual humility compared to the other two conditions. Participants in the low intellectual humility condition also reported less belief in the scientist’s research on the new treatment.

Study 3: Because questions surrounding gender perception were left unanswered in Study 2, the co-authors sought to examine the effect of a scientist’s gender identity on the public’s reactions to intellectual humility. They randomly assigned 369 participants to read an article about an ostensible psychological scientist studying why people should talk across political divides. They used the same three “article” designs as Study 2, but varied whether the scientist was described as female or male. Replicating Study 2, they again found large effects of intellectual humility on trust, as well as small-to-medium effects on belief in the research and whether participants would follow the scientist’s recommendations. The described gender of the scientist had no influence on the benefits of high vs. low intellectual humility on these outcomes.

Study 4: To ensure that the benefits of perceived intellectual humility generalized to scientists of color, the co-authors next tested whether participants were affected by the racial identity of the scientist. Some 371 participants were randomly assigned to read an “article” about an ostensible climate scientist testing the benefits of plant-rich diets for reducing global carbon emissions. In this new scientific context, the authors replicated the effects from the prior studies and also discovered a small-to-medium effect on participants’ desire to obtain further information about switching to a plant-rich diet — 36% of people opted in to receive this information when the scientist was high in intellectual humility, compared to 21% when the scientist was low in intellectual humility. Notably, as with gender, the described race of the scientist didn’t show an effect.
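For readers who want to sanity-check that opt-in gap, the sketch below runs a standard two-proportion z-test on the reported rates; the per-condition group sizes are assumptions for illustration only, since only the overall sample of 371 is given here:

```python
from math import sqrt

p_high, p_low = 0.36, 0.21   # reported opt-in rates (high vs. low humility)
n_high = n_low = 120         # assumed per-condition sizes, for illustration

# Pooled two-proportion z-test on the difference in opt-in rates.
p_pool = (p_high * n_high + p_low * n_low) / (n_high + n_low)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_high + 1 / n_low))
z = (p_high - p_low) / se
print(f"difference = {p_high - p_low:.0%}, z = {z:.2f}")   # z > 1.96 -> p < .05
```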

Study 5: In the final study, the authors set out to test an important question that remained: How can a scientist express that they are intellectually humble when communicating their research to the public? The authors randomly assigned 679 participants in a census-matched sample to read one of four “interviews” with an ostensible scientist discussing the psychological benefits of taking a social-media break (not considered as polarizing as the previous study topics). These interviews included approaches like describing the methodological limitations of the research or giving credit to their graduate students. Although these approaches were generally effective at increasing perceptions of intellectual humility, none of the communication strategies successfully increased perceptions of the scientist’s trustworthiness, and several even backfired by shaking people’s trust in the research. The authors humbly noted that they still don’t know how scientists can communicate intellectual humility in ways that also build trust.

“We still have a lot to learn about specific strategies scientists can use to display their intellectual humility in their public communications,” Koetke said. “This will be the focus of future work.”

For now, the research team came away feeling that the general public values intellectual humility.

“As a scientist, I felt incredibly encouraged by our findings,” Schumann said. “They suggest that the public understands that science isn’t about having all the answers; it's about asking the right questions, admitting what we don’t yet understand, and learning as we go. Although we still have much to discover about how scientists can authentically convey intellectual humility, we now know people sense that a lack of intellectual humility undermines the very aspects of science that make it valuable and rigorous. This is a great place to build from.”

 

 

How 70% of the Mediterranean Sea was lost 5.5 million years ago




CNRS
Image: Artistic representation of the Gibraltar sill rupture at the end of the Messinian Salinity Crisis. In the final moments of this crisis, the level of the Mediterranean Sea was around 1 km lower than that of the Atlantic Ocean. Credit: © Pibernat & Garcia-Castellanos




The level of the Mediterranean Sea dropped during the Messinian Salinity Crisis – a major geological event that transformed the Mediterranean into a gigantic salt basin between 5.97 and 5.33 million years ago².

Until now, the process by which a million cubic kilometres of salt accumulated in the Mediterranean basin over such a short period of time remained unknown. Thanks to analysis of the chlorine isotopes³ contained in salt extracted from the Mediterranean seabed, scientists¹ have been able to identify the two phases of this extreme evaporation event. During the first phase, lasting approximately 35 thousand years, salt deposition occurred only in the eastern Mediterranean, triggered by the restriction of Mediterranean outflow to the Atlantic, in an otherwise brine-filled Mediterranean basin. During the second phase, salt accumulation occurred across the entire Mediterranean, driven by a rapid (less than 10 thousand years) evaporative drawdown event during which sea level dropped 1.7–2.1 km in the eastern Mediterranean and about 0.85 km in the western Mediterranean. As a result, the Mediterranean Basin lost up to 70% of its water volume.
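To put that 70% figure in perspective, a back-of-envelope calculation, assuming a present-day Mediterranean volume of roughly 3.75 million cubic kilometres (a commonly cited figure, not taken from the article), gives the amount of water that would have had to evaporate:

```python
# Rough scale check; the total volume is an outside assumption, not from the study.
total_volume_km3 = 3.75e6    # approximate modern Mediterranean water volume
fraction_lost = 0.70         # drawdown fraction reported in the study

lost_km3 = total_volume_km3 * fraction_lost
print(f"~{lost_km3:,.0f} cubic km of water lost")   # ~2,625,000 km^3
print(f"~{lost_km3 / 1e6:.1f} million cubic km")    # vs ~1 million km^3 of salt left behind
```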

This spectacular fall in sea level is thought to have had consequences for both terrestrial fauna and the Mediterranean landscape – triggering localised volcanic eruptions due to unloading of Earth's crust, as well as generating global climatic effects due to the huge depression caused by the sea-level drawdown.

These results, published in Nature Communications on November 18, provide a better understanding of past extreme geological phenomena, the evolution of the Mediterranean region and successive global repercussions.

This work was supported by the European Union and the CNRS.

Notes:

  1. From the French research institute Institut de physique du globe de Paris (CNRS/Université Paris Cité).
  2. This exceptional event covered the floor of the Mediterranean Sea with a layer of salt up to 3 km thick. Understanding the causes, consequences and environmental changes undergone by the Mediterranean region in response to the Messinian Salinity Crisis is a challenge that has mobilised the scientific community for decades.
  3. Analysis of the two stable chlorine isotopes (³⁷Cl and ³⁵Cl) made it possible to estimate the rate of salt accumulation and detect the drop in sea level.