It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K. Marx, Letter to F. Engels on the Indian Mutiny)
How will the internet evolve in the coming decades?
Fiction writers have explored some possibilities.
In his 2019 novel “Fall,” science fiction author Neal Stephenson imagined a near future in which the internet still exists. But it has become so polluted with misinformation, disinformation and advertising that it is largely unusable.
Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.
The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.
To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.
On the surface, chatbots seem to provide a solution to the misinformation epidemic. By dispensing factual content, chatbots could supply alternative sources of high-quality information that aren’t cordoned off by paywalls.
Ironically, however, the output of these chatbots may represent the greatest danger to the future of the web – one that was hinted at decades earlier by Argentine writer Jorge Luis Borges.
The rise of the chatbots
Today, a significant fraction of the internet still consists of factual and ostensibly truthful content, such as articles and books that have been peer-reviewed, fact-checked or vetted in some way.
The developers of large language models, or LLMs – the engines that power bots like ChatGPT, Copilot and Gemini – have taken advantage of this resource.
To perform their magic, however, these models must ingest immense quantities of high-quality text for training purposes. A vast amount of verbiage has already been scraped from online sources and fed to the fledgling LLMs.
The problem is that the web, enormous as it is, is a finite resource. High-quality text that hasn’t already been strip-mined is becoming scarce, leading to what The New York Times called an “emerging crisis in content.”
This has forced companies like OpenAI to enter into agreements with publishers to obtain even more raw material for their ravenous bots. But according to one prediction, a shortage of additional high-quality training data may strike as early as 2026.
As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.
And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that started repeating racist and sexist content.
Over time, all of these issues could make online content even less trustworthy and less useful than it is today. In addition, LLMs that are fed a diet of low-calorie content may produce even more problematic output that also ends up on the web.
An infinite – and useless – library
It’s not hard to imagine a feedback loop that results in a continuous process of degradation as the bots feed on their own imperfect output.
A July 2024 paper published in Nature explored the consequences of training AI models on recursively generated data. It showed that “irreversible defects” can lead to “model collapse” for systems trained in this way – much like a copy of an image, and a copy of that copy, and a copy of that copy again, will progressively lose fidelity to the original.
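The mechanism can be seen in a toy statistical sketch (an illustration of the general idea, not the Nature study’s actual experiments): fit a simple model to data, sample new “training data” from that model, refit, and repeat. With finite samples, each generation drifts and the fitted spread tends to shrink, so the tails of the original data gradually disappear.

```python
import numpy as np

# Toy illustration of recursive training (not the Nature study's setup):
# each "generation" fits a Gaussian to samples drawn from the previous
# generation's fitted Gaussian. With small samples the fitted spread
# drifts and tends to shrink, a crude analogue of model collapse.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0      # the "real" data distribution
n_samples = 20            # small training set per generation

for generation in range(1, 41):
    samples = rng.normal(mu, sigma, n_samples)   # data produced by the previous model
    mu, sigma = samples.mean(), samples.std()    # refit the next model on it
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
```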
How bad might this get?
Consider Borges’ 1941 short story “The Library of Babel.” Fifty years before computer scientist Tim Berners-Lee created the architecture for the web, Borges had already imagined an analog equivalent.
In his 3,000-word story, the writer imagines a world consisting of an enormous and possibly infinite number of hexagonal rooms. The bookshelves in each room hold uniform volumes that must, its inhabitants intuit, contain every possible permutation of letters in their alphabet.
In Borges’ imaginary, endlessly expansive library of content, finding something meaningful is like finding a needle in a haystack.
Initially, this realization sparks joy: By definition, there must exist books that detail the future of humanity and the meaning of life.
The inhabitants search for such books, only to discover that the vast majority contain nothing but meaningless combinations of letters. The truth is out there – but so is every conceivable falsehood. And all of it is embedded in an inconceivably vast amount of gibberish.
Even after centuries of searching, only a few meaningful fragments are found. And even then, there is no way to determine whether these coherent texts are truths or lies. Hope turns into despair.
Will the web become so polluted that only the wealthy can afford accurate and reliable information? Or will an infinite number of chatbots produce so much tainted verbiage that finding accurate information online becomes like searching for a needle in a haystack?
The internet is often described as one of humanity’s great achievements. But like any other resource, it’s important to give serious thought to how it is maintained and managed – lest we end up confronting the dystopian vision imagined by Borges.
Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis
Leaner large language models could enable efficient local use on phones and laptops
Princeton University, Engineering School
Large language models (LLMs) are increasingly automating tasks like translation, text classification and customer service. But tapping into an LLM’s power typically requires users to send their requests to a centralized server — a process that’s expensive, energy-intensive and often slow.
Now, researchers have introduced a technique for compressing an LLM’s reams of data, which could increase privacy, save energy and lower costs.
The new algorithm, developed by engineers at Princeton and Stanford Engineering, works by trimming redundancies and reducing the precision of an LLM’s layers of information. This type of leaner LLM could be stored and accessed locally on a device like a phone or laptop and could provide performance nearly as accurate and nuanced as an uncompressed version.
“Any time you can reduce the computational complexity, storage and bandwidth requirements of using AI models, you can enable AI on devices and systems that otherwise couldn’t handle such compute- and memory-intensive tasks,” said study coauthor Andrea Goldsmith, dean of Princeton’s School of Engineering and Applied Science and Arthur LeGrand Doty Professor of Electrical and Computer Engineering.
“When you use ChatGPT, whatever request you give it goes to the back-end servers of OpenAI, which process all of that data, and that is very expensive,” said coauthor Rajarshi Saha, a Stanford Engineering Ph.D. student. “So, you want to be able to do this LLM inference using consumer GPUs [graphics processing units], and the way to do that is by compressing these LLMs.” Saha’s graduate work is coadvised by Goldsmith and coauthor Mert Pilanci, an assistant professor at Stanford Engineering.
The researchers will present their new algorithm, CALDERA, which stands for Calibration Aware Low precision DEcomposition with low Rank Adaptation, at the Conference on Neural Information Processing Systems (NeurIPS) in December.
Saha and colleagues began this compression research not with LLMs themselves, but with the large collections of information that are used to train LLMs and other complex AI models, such as those used for image classification. This technique, a forerunner to the new LLM compression approach, was published in 2023.
Training data sets and AI models are both composed of matrices, or grids of numbers that are used to store data. In the case of LLMs, these are called weight matrices, which are numerical representations of word patterns learned from large swaths of text.
“We proposed a generic algorithm for compressing large data sets or large matrices,” said Saha. “And then we realized that nowadays, it’s not just the data sets that are large, but the models being deployed are also getting large. So, we could also use our algorithm to compress these models.”
While the team’s algorithm is not the first to compress LLMs, its novelty lies in an innovative combination of two properties, one called “low-precision,” the other “low-rank.” As digital computers store and process information as bits (zeros and ones), “low-precision” representation reduces the number of bits, speeding up storage and processing while improving energy efficiency. On the other hand, “low-rank” refers to reducing redundancies in the LLM weight matrices.
“Using both of these properties together, we are able to get much more compression than either of these techniques can achieve individually,” said Saha.
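To make the combination concrete, here is a minimal NumPy sketch of the general idea: quantize a weight matrix to a few bits (low precision) and add a low-rank correction for the part the quantization throws away. This is an illustration of how the two properties complement each other, not the CALDERA algorithm itself, which adds calibration-aware weighting and other refinements; on a random matrix the gain from the correction is modest, while real LLM weight matrices have more structure for it to exploit.

```python
import numpy as np

def quantize(W, bits=4):
    """Uniform low-precision quantization of a matrix to 2**bits levels."""
    levels = 2 ** bits - 1
    lo, hi = W.min(), W.max()
    step = (hi - lo) / levels
    return np.round((W - lo) / step) * step + lo

def best_rank_k(W, rank):
    """Best rank-k approximation of W via truncated SVD (the low-rank part)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))      # stand-in for one LLM weight matrix

Q = quantize(W, bits=4)                  # coarse low-precision backbone
L = best_rank_k(W - Q, rank=16)          # low-rank correction of the quantization error
W_hat = Q + L                            # combined approximation

for name, approx in [("low-precision only", Q),
                     ("low-rank only", best_rank_k(W, 16)),
                     ("combined", W_hat)]:
    rel_err = np.linalg.norm(W - approx) / np.linalg.norm(W)
    print(f"{name:18s} relative error: {rel_err:.3f}")
```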
The team tested their technique using Llama 2 and Llama 3, open-source large language models released by Meta AI, and found that their method, which uses low-rank and low-precision components in tandem, improves on methods that use only low precision. The improvement can be up to 5%, which is significant for metrics that measure uncertainty in predicting word sequences.
They evaluated the performance of the compressed language models using several sets of benchmark tasks for LLMs. The tasks included determining the logical order of two statements, or answering questions involving physical reasoning, such as how to separate an egg white from a yolk or how to make a cup of tea.
“I think it’s encouraging and a bit surprising that we were able to get such good performance in this compression scheme,” said Goldsmith, who moved to Princeton from Stanford Engineering in 2020. “By taking advantage of the weight matrix rather than just using a generic compression algorithm for the bits that are representing the weight matrix, we were able to do much better.”
Using an LLM compressed in this way could be suitable for situations that don’t require the highest possible precision. Moreover, the ability to fine-tune compressed LLMs on edge devices like a smartphone or laptop enhances privacy by allowing organizations and individuals to adapt models to their specific needs without sharing sensitive data with third-party providers. This reduces the risk of data breaches or unauthorized access to confidential information during the training process. To enable this, the LLMs must initially be compressed enough to fit on consumer-grade GPUs.
Saha also cautioned that running LLMs on a smartphone or laptop could hog the device’s memory for a period of time. “You won’t be happy if you are running an LLM and your phone drains out of charge in an hour,” said Saha. Low-precision computation can help reduce power consumption, he added. “But I wouldn’t say that there’s one single technique that solves all the problems. What we propose in this paper is one technique that is used in combination with techniques proposed in prior works. And I think this combination will enable us to use LLMs on mobile devices more efficiently and get more accurate results.”
The paper, “Compressing Large Language Models using Low Rank and Low Precision Decomposition,” will be presented at the Conference on Neural Information Processing Systems (NeurIPS) in December 2024. In addition to Goldsmith, Saha and Pilanci, coauthors include Stanford Engineering researchers Naomi Sagan and Varun Srivastava. This work was supported in part by the U.S. National Science Foundation, the U.S. Army Research Office, and the Office of Naval Research.
Asking ChatGPT vs Googling: Can AI chatbots boost human creativity?
Think back to a time when you needed a quick answer, maybe for a recipe or a DIY project. A few years ago, most people’s first instinct was to “Google it.” Today, however, many people are more likely to reach for ChatGPT, OpenAI’s conversational AI, which is changing the way people look for information.
Rather than simply providing lists of websites, ChatGPT gives more direct, conversational responses. But can ChatGPT do more than just answer straightforward questions? Can it actually help people be more creative?
I study new technologies and consumer interaction with social media. My colleague Byung Lee and I set out to explore this question: Can ChatGPT genuinely assist people in creatively solving problems, and does it perform better at this than traditional search engines like Google?
Across a series of experiments in a study published in the journal Nature Human Behaviour, we found that ChatGPT does boost creativity, especially in everyday, practical tasks. Here’s what we learned about how this technology is changing the way people solve problems, brainstorm ideas and think creatively.
ChatGPT and creative tasks
Imagine you’re searching for a creative gift idea for a teenage niece. Previously, you might have googled “creative gifts for teens” and then browsed articles until something clicked. Now, if you ask ChatGPT, it generates a direct response based on its analysis of patterns across the web. It might suggest a custom DIY project or a unique experience, crafting the idea in real time.
To explore whether ChatGPT surpasses Google in creative thinking tasks, we conducted five experiments where participants tackled various creative tasks. For example, we randomly assigned participants to either use ChatGPT for assistance, use Google search, or generate ideas on their own. Once the ideas were collected, external judges, unaware of the participants’ assigned conditions, rated each idea for creativity. We averaged the judges’ scores to provide an overall creativity rating.
One task involved brainstorming ways to repurpose everyday items, such as turning an old tennis racket and a garden hose into something new. Another asked participants to design an innovative dining table. The goal was to test whether ChatGPT could help people come up with more creative solutions compared with using a web search engine or just their own imagination.
The results were clear: Judges rated ideas generated with ChatGPT’s assistance as more creative than those generated with Google searches or without any assistance. Interestingly, ideas generated with ChatGPT – even without any human modification – scored higher in creativity than those generated with Google.
One notable finding was ChatGPT’s ability to generate incrementally creative ideas: those that improve or build on what already exists. While truly radical ideas might still be challenging for AI, ChatGPT excelled at suggesting practical yet innovative approaches. In the toy-design experiment, for example, participants using ChatGPT came up with imaginative designs, such as turning a leftover fan and a paper bag into a wind-powered craft.
Limits of AI creativity
ChatGPT’s strength lies in its ability to combine unrelated concepts into a cohesive response. Unlike Google, which requires users to sift through links and piece together information, ChatGPT offers an integrated answer that helps users articulate and refine ideas in a polished format. This makes ChatGPT promising as a creativity tool, especially for tasks that connect disparate ideas or generate new concepts.
It’s important to note, however, that ChatGPT doesn’t generate truly novel ideas. It recognizes and combines linguistic patterns from its training data, subsequently generating outputs with the most probable sequences based on its training. If you’re looking for a way to make an existing idea better or adapt it in a new way, ChatGPT can be a helpful resource. For something groundbreaking, though, human ingenuity and imagination are still essential.
Additionally, while ChatGPT can generate creative suggestions, these aren’t always practical or scalable without expert input. Steps such as screening, feasibility checks, fact-checking and market validation require human expertise. Given that ChatGPT’s responses may reflect biases in its training data, people should exercise caution in sensitive contexts such as those involving race or gender.
We also tested whether ChatGPT could assist with tasks often seen as requiring empathy, such as repurposing items cherished by a loved one. Surprisingly, ChatGPT enhanced creativity even in these scenarios, generating ideas that users found relevant and thoughtful. This result challenges the belief that AI cannot assist with emotionally driven tasks.
Future of AI and creativity
As ChatGPT and similar AI tools become more accessible, they open up new possibilities for creative tasks. Whether in the workplace or at home, AI could assist in brainstorming, problem-solving and enhancing creative projects. However, our research also points to the need for caution: While ChatGPT can augment human creativity, it doesn’t replace the unique human capacity for truly radical, out-of-the-box thinking.
This shift from Googling to asking ChatGPT represents more than just a new way to access information. It marks a transformation in how people collaborate with technology to think, create and innovate. Jaeyeon Chung, Assistant Professor of Business, Rice University
OpenAI CEO Sam Altman fired off a social media post saying ‘There is no wall’ as fears arise over potential blockages to AI development.
Glenn CHAPMAN with Alex PIGMAN in Washington
A quietly growing belief in Silicon Valley could have immense implications: the breakthroughs from large AI models – the ones expected to bring human-level artificial intelligence in the near future – may be slowing down.
Since the frenzied launch of ChatGPT two years ago, AI believers have maintained that improvements in generative AI would accelerate exponentially as tech giants kept adding fuel to the fire in the form of data for training and computing muscle.
The reasoning was that delivering on the technology’s promise was simply a matter of resources – pour in enough computing power and data, and artificial general intelligence (AGI) would emerge, capable of matching or exceeding human-level performance.
Progress was advancing at such a rapid pace that leading industry figures, including Elon Musk, called for a moratorium on AI research.
Yet the major tech companies, including Musk’s own, pressed forward, spending tens of billions of dollars to avoid falling behind.
OpenAI, ChatGPT’s Microsoft-backed creator, recently raised $6.6 billion to fund further advances.
xAI, Musk’s AI company, is in the process of raising $6 billion, according to CNBC, to buy 100,000 Nvidia chips, the cutting-edge electronic components that power the big models.
However, there appear to be problems on the road to AGI.
Industry insiders are beginning to acknowledge that large language models (LLMs) aren’t scaling endlessly higher at breakneck speed when pumped with more power and data.
Despite the massive investments, performance improvements are showing signs of plateauing.
“Sky-high valuations of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence,” said AI expert and frequent critic Gary Marcus. “As I have always warned, that’s just a fantasy.”
– ‘No wall’ –
One fundamental challenge is the finite amount of language-based data available for AI training.
According to Scott Stevenson, CEO of AI legal tasks firm Spellbook, who works with OpenAI and other providers, relying on language data alone for scaling is destined to hit a wall.
“Some of the labs out there were way too focused on just feeding in more language, thinking it’s just going to keep getting smarter,” Stevenson explained.
Sasha Luccioni, researcher and AI lead at startup Hugging Face, argues a stall in progress was predictable given companies’ focus on size rather than purpose in model development.
“The pursuit of AGI has always been unrealistic, and the ‘bigger is better’ approach to AI was bound to hit a limit eventually — and I think this is what we’re seeing here,” she told AFP.
The AI industry contests these interpretations, maintaining that progress toward human-level AI is unpredictable.
“There is no wall,” OpenAI CEO Sam Altman posted Thursday on X, without elaboration.
Anthropic’s CEO Dario Amodei, whose company develops the Claude chatbot in partnership with Amazon, remains bullish: “If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.”
– Time to think –
Nevertheless, OpenAI has delayed the release of the awaited successor to GPT-4, the model that powers ChatGPT, because its increase in capability is below expectations, according to sources quoted by The Information.
Now, the company is focusing on using its existing capabilities more efficiently.
This shift in strategy is reflected in their recent o1 model, designed to provide more accurate answers through improved reasoning rather than increased training data.
Stevenson said an OpenAI shift to teaching its model to “spend more time thinking rather than responding” has led to “radical improvements”.
He likened the AI advent to the discovery of fire. Rather than tossing on more fuel in the form of data and computer power, it is time to harness the breakthrough for specific tasks.
Stanford University professor Walter De Brouwer likens advanced LLMs to students transitioning from high school to university: “The AI baby was a chatbot which did a lot of improv” and was prone to mistakes, he noted.
“The homo sapiens approach of thinking before leaping is coming,” he added.
How pirates steer corporate innovation: Lessons from the front lines
Calgary Innovation Week runs from Nov. 13-21, 2024.
If you ask Tina Mathas who should lead transformative innovation projects, she’ll tell you it’s all about the pirates.
“It requires a different type of mindset, a different type of ecosystem and environment, and it should be protected,” says Mathas, co-founder of Flow Factory, a company that aims to enhance human performance by integrating the concept of “flow state” with artificial intelligence.
For transformative innovation, she argues, big companies need pirates — not quite drunken Jack Sparrow adventurers, but individuals who challenge traditional processes and navigate uncharted waters of creativity and risk.
Mathas’s declaration set the tone for a lively virtual panel on corporate innovation at Calgary Innovation Week. The discussion brought together industry leaders to dissect how innovation can thrive in corporate environments often resistant to change.
The challenges, they agreed, are substantial, but the potential rewards for organizations that get it right are transformative.
Making the case for pirates
“Transformative innovation requires pirates,” Mathas said. “It’s not just about solving today’s problems — it’s about being bold and taking risks on where we think the industry is going.”
Mathas described her experience at ATB Financial, where her team was tasked with “breaking the bank.”
Operating with a $50,000 budget, they delivered a market-ready banking platform in just five weeks.
“We had no banking experience,” she said, “and people didn’t understand how we got that done. We had board support, and we had executive support. In other words, we reported directly into the executive and we were separate from the main organization. We were the pirates.”
This freedom is crucial, Mathas said, because transformative innovation rarely succeeds when confined by a corporation’s standard processes.
“According to an Accenture study, 82% of organizations run innovation in exactly the same way as any other regular project. Plus it takes about 18 months, and you’re still facing a 90% failure rate,” she said, telling the audience that this is the reason she left the corporate world.
Innovation begins with people and alignment with business goals
Jeff Hiscock of Pembina Pipelines shifted the focus to the human element of innovation, emphasizing the challenges of workforce turnover and retention. He advised focusing on building environments that retain experienced talent while simultaneously attracting new entrants to the workforce.
“Thirty-five per cent of the energy workforce will turn over by 2035,” Hiscock said, referencing data from a provincial study. “A lot of that is through retirement. How do you create a workplace where those people want to stay in the roles longer?”
By focusing on creating workplaces that are innovative, engaging and adaptable, organizations can address this looming talent gap while driving forward their innovation goals.
Hiscock described innovation as a necessity, not a luxury, particularly in industries like energy.
“Innovation is about solving real problems that impact your business’s core value streams,” he said.
Pembina, for instance, focuses 70% of its innovation efforts on projects with direct EBITDA impacts, ensuring alignment with organizational goals.
However, Hiscock cautioned that innovation efforts often stall because of cultural resistance.
“What’s obvious to you is not obvious to everyone else,” he said. “It’s like playing a 4D chess game that only you can see. That’s a bad place to be.”
His solution? Securing buy-in from every level of the organization, not just senior executives.
From dollars to disruption
“Innovation isn’t about dollars, but it kind of is,” said Shannon Phillips, co-founder of Unbounded Thinking. Phillips’ work focuses on helping organizations, particularly small and medium-sized enterprises, implement effective innovation management systems.
He explained that many companies struggle to balance innovation’s creative potential with the financial realities of running a business.
“If we keep talking about this vague concept of innovation that is just about something new and breakthrough, we’ll never get the respect that we need. We really need to start looking at how we measure it to make it part of our DNA, and to make it a revenue stream in itself.”
Phillips outlined a structured approach to categorizing innovation: core (incremental improvements), adjacent (new markets or products), and breakthrough (disruptive technologies).
He emphasized focusing on core innovation first, as it carries the least risk, while building maturity and trust over time to approach higher-risk, breakthrough projects effectively. This holistic, balanced approach helps companies mitigate risks and align innovation with their capabilities and goals.
“For smaller companies, it’s not a buzzword — it’s about survival,” he said. “They need proof that innovation will help them grow and keep their doors open.”
Partnerships that deliver
Lee Evans, head of low-carbon activities at TC Energy, discussed how partnerships can drive innovation in meaningful ways.
“We think about win-wins,” Evans said. “How do we find ways to work with others to support each other?”
As an example, TC Energy recently invested in and partnered with Qube Technologies, a Calgary-based emissions monitoring company, to address its decarbonization goals.
Evans highlighted the importance of starting small with innovation initiatives.
“Minimum viable products are really important,” he said. “You test, you learn and then you scale.” This approach minimizes risk while building trust in the process.
Evans also stressed the need for resilience and adaptability.
“If you want to be working in this space, you’ve got to be resilient. You’ve got to be willing to face challenges and setbacks and be willing to pivot. Those are really important. And never give up if you think there’s true value in what you’re up to. Find ways to make sure people understand the value of what you’re doing.”
The role of government and academia in innovation
Panelists also weighed in on how external forces, like government policies and academic research, shape innovation.
Mathas argued that governments should incentivize competition to stimulate corporate innovation. “We need more competition coming into Canada and into Alberta to create more of that incentive to want to compete and to want to innovate.”
On the academic front, Mathas cautioned universities about their efforts to turn researchers into entrepreneurs. She said universities should focus on supporting research rather than pushing students to commercialize their ideas, because doing so can drain investment from the research that drives real innovation.
Key takeaways for corporate innovators
The panel left attendees with practical advice for navigating the complexities of corporate innovation:
Start small, think big: “Innovate like a startup, scale like an enterprise,” said Mathas.
Embrace failure: “Failures are just learning in disguise,” she added.
Focus on core problems: Hiscock advised innovators to align their projects with a company’s key value streams.
Measure impact: “We need to make innovation part of the DNA,” said Phillips.
Be resilient: “Understand the value of what you’re doing and keep going,” said Evans.
As the panel concluded, one message was clear: the future belongs to those bold enough to embrace risk, empower people and innovate with purpose.
Sunday, November 17, 2024
It’s Not Too Late to Stop the Monopolization of AI
Artificial intelligence has dominated the news cycle and captured a big chunk of the public’s attention since the release of ChatGPT in 2022.
Hardly a minute goes by without a news story covering AI-related developments, some company releasing a new product “integrated with AI,” or a commentator evoking the threat AI poses to our society.
While there is a lot of unwarranted hype surrounding the technology, recent advances in AI are impressive, and, if harnessed in the right way, could help society solve complex problems and boost our shared prosperity. However, the structure of the market — the actors that have control over the development and deployment of AI — will determine whether AI lives up to its promise or entrenches the power a few dominant corporations already hold over our lives.
At first glance, the recent explosion of new AI products and services could make you think the field has robust competition and that there is a fertile marketplace for new businesses to establish a foothold and grow. But this is largely an illusion.
While competition ostensibly exists in this booming industry, it is rapidly being foreclosed by the largest incumbent technology giants: Microsoft, Google, Apple, Amazon, Nvidia, and Meta.
These dominant corporations, which already maintain an iron grip over essential aspects of our digital lives — including smartphones, internet search, online shopping, and social networking — are using their raw financial power and control over critical systems to monopolize the AI industry, exclude or co-opt competitors, and deepen the “walled gardens” that keep consumers locked into their services.
As we detail in a recent report published by the Open Markets Institute and the Mozilla Foundation, these corporate behemoths are already the primary owners and suppliers of key AI inputs and infrastructure, including cloud computing, frontier AI models, chips, and data.
To ensure they maintain and extend their dominance into the AI era, these companies are engaging in a variety of unfair business practices. Consider cloud computing, which provides the raw computational power and server capacity needed to train and host advanced AI models. Microsoft, Amazon, and, to a lesser extent, Google currently control a combined two-thirds of the global cloud computing market.
Through their market control and practically unlimited financial resources, the tech giants are rapidly co-opting many of today’s most promising AI startups, such as OpenAI and Anthropic, by giving them capital and preferential access to computational resources.
The giant firms are also using their control over digital ecosystems to lock consumers and businesses into their AI services: Google has integrated its “Gemini” AI model into its search engine, and Meta has started infusing AI into Facebook and Instagram.
AI is set to dramatically reinforce the power of Big Tech — unless governments step in. Fortunately, AI remains in its nascency, and policymakers still have time to act.
Governments around the world, including the European Union, the United Kingdom, and the United States, have access to a vast set of tools to prevent AI from becoming Big Tech’s playground. Whether through antitrust enforcement that forces tech giants to split up their digital empires — preventing them from neutralizing challengers through partnerships and acquisitions — or common carrier rules requiring them to provide fair access to their digital infrastructure, regulators already have many of the tools they need to guarantee open and fair competition in AI.
We’ve seen the damaging consequences of enforcers’ failure to act in the rise of today’s platform monopolies — monopolies that are now poised to capture the benefits of AI for themselves. We can’t make the same mistake again.
Max von Thun is the Director of Europe & Transatlantic Partnerships at the Open Markets Institute.
Daniel A. Hanley is a senior legal analyst at the Open Markets Institute.
Friday, November 15, 2024
Experts urge complex systems approach to assess A.I. risks
The social context and its complex interactions must be considered and public engagement must be encouraged
Complexity Science Hub
[Vienna, November 13, 2024] — With artificial intelligence increasingly permeating every aspect of our lives, experts are becoming more and more concerned about its dangers. In some cases, the risks are pressing, in others they won't emerge until many months or even years from now. Scientists point out in The Royal Society’s journal that a coherent approach to understanding these threats is still elusive. They call for a complex systems perspective to better assess and mitigate these risks, particularly in light of long-term uncertainties and complex interactions between A.I. and society.
"Understanding the risks of A.I. requires recognizing the intricate interplay between technology and society. It's about navigating the complex, co-evolving systems that shape our decisions and behaviors,” says Fariba Karimi, co-author of the article. Karimi leads the research team on Algorithmic Fairness at the Complexity Science Hub (CSH) and is professor of Social Data Science at TU Graz.
“We should not only discuss what technologies to deploy and how, but also how to adapt the social context to capitalize on positive possibilities. A.I. possibilities and risks should likely be taken into account in debates about, for instance, economic policy,” adds CSH scientist Dániel Kondor, first author of the study.
Broader and Long-Term Risks
Current risk assessment frameworks often focus on immediate, specific harms, such as bias and safety concerns, according to the authors of the article published in Philosophical Transactions A. “These frameworks often overlook broader, long-term systemic risks that could emerge from the widespread deployment of A.I. technologies and their interaction with the social context [in which] they are used,” says Kondor.
“In this paper, we tried to balance the short-term perspectives on algorithms with long-term views of how these technologies affect society. It's about making sense of both the immediate and systemic consequences of A.I.," adds Kondor.
What Happens in Real Life
As a case study to illustrate the potential risks of A.I. technologies, the scientists discuss how a predictive algorithm was used during the Covid-19 pandemic in the UK for school exams. The new solution was “presumed to be more objective and thus fairer [than asking teachers to predict their students’ performance], relying on a statistical analysis of students’ performance in previous years,” according to the study.
However, when the algorithm was put into practice, several issues emerged. “Once the grading algorithm was applied, inequities became glaringly obvious,” observes Valerie Hafez, an independent researcher and study co-author. “Pupils from disadvantaged communities bore the brunt of the futile effort to counter grading inflation, but even overall, 40% of students received lower marks than they would have reasonably expected.”
Hafez reports that many responses in the consultation report indicate that the risk perceived as significant by teachers—the long-term effect of grading lower than deserved—was different from the risk perceived by the designers of the algorithm. The latter were concerned about grade inflation, the resulting pressure on higher education, and a lack of trust in students’ actual abilities.
The Scale and the Scope
This case demonstrates several important issues that arise when deploying large-scale algorithmic solutions, emphasize the scientists. “One thing we believe one should be attentive to is the scale—and scope—because algorithms scale: they travel well from one context to the next, even though these contexts may be vastly different. The original context of creation does not simply disappear, rather it is superimposed on all these other contexts,” explains Hafez.
"Long-term risks are not the linear combination of short-term risks. They can escalate exponentially over time. However, with computational models and simulations, we can provide practical insights to better assess these dynamic risks,” adds Karimi.
Computational Models – and Public Participation
This is one of the directions proposed by the scientists for understanding and evaluating risk associated with A.I. technologies, both in the short- and long-term. “Computational models—like those assessing the effect of A.I. on minority representation in social networks—can demonstrate how biases in A.I. systems lead to feedback loops that reinforce societal inequalities,” explains Kondor. Such models can be used to simulate potential risks, offering insights that are difficult to glean from traditional assessment methods.
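As a much-simplified sketch of what such a model can show (an assumption-laden illustration, not the model used in the study), consider a recommender that always promotes whatever is already most visible. If content from a minority group starts with slightly less visibility, the loop amplifies the gap:

```python
import numpy as np

# Toy feedback loop (an illustrative sketch, not the paper's model):
# a recommender that always promotes the currently most visible items.
# Items from a minority group start with slightly less visibility and
# are then locked out, so their share of attention shrinks over time.
rng = np.random.default_rng(1)

n_items = 1_000
minority = rng.random(n_items) < 0.2            # 20% of items from the minority group
visibility = np.where(minority, 0.9, 1.0)       # small initial visibility gap

def minority_share(v):
    return v[minority].sum() / v.sum()

print(f"initial minority share of visibility: {minority_share(visibility):.1%}")

for _ in range(50):
    top = np.argsort(visibility)[-100:]         # recommend the 100 most visible items
    visibility[top] += 1.0                      # exposure generates further exposure

print(f"after the feedback loop:               {minority_share(visibility):.1%}")
```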
In addition, the study's authors emphasize the importance of involving laypeople and experts from various fields in the risk assessment process. Competency groups—small, heterogeneous teams that bring together varied perspectives—can be a key tool for fostering democratic participation and ensuring that risk assessments are informed by those most affected by AI technologies.
“A more general issue is the promotion of social resilience, which will help A.I.-related debates and decision-making function better and avoid pitfalls. In turn, social resilience may depend on many questions unrelated (or at least not directly related) to artificial intelligence,” ponders Kondor. Increasing participatory forms of decision-making can be one important component of raising resilience.
“I think that once you begin to see A.I. systems as sociotechnical, you cannot separate the people affected by the A.I. systems from the ‘technical’ aspects. Separating them from the A.I. system takes away their possibility to shape the infrastructures of classification imposed on them, denying affected persons the power to share in creating worlds attenuated to their needs,” says Hafez, who’s an A.I. policy officer at the Austrian Federal Chancellery.
About the Study
The study “Complex systems perspective in assessing risks in A.I.,” by Dániel Kondor, Valerie Hafez, Sudhang Shankar, Rania Wazir, and Fariba Karimi was published in Philosophical Transactions A and is available online.
About CSH
The Complexity Science Hub (CSH) is Europe’s research center for the study of complex systems. We derive meaning from data from a range of disciplines — economics, medicine, ecology, and the social sciences — as a basis for actionable solutions for a better world. Established in 2015, we have grown to over 70 researchers, driven by the increasing demand to gain a genuine understanding of the networks that underlie society, from healthcare to supply chains. Through our complexity science approaches linking physics, mathematics, and computational modeling with data and network science, we develop the capacity to address today’s and tomorrow’s challenges.
COI Statement
The authors declare no competing interests. Valerie Hafez is a policy officer at the Austrian Federal Chancellery, but conducted this research independently. The views expressed in the paper do not necessarily reflect the views or positions of the Federal Chancellery.
AI needs to work on its conversation game
Researchers discover why AI does a poor job of knowing when to chime in on a conversation
Tufts University
When you have a conversation today, notice the natural points when the exchange leaves open the opportunity for the other person to chime in. If their timing is off, they might be taken as overly aggressive, too timid, or just plain awkward.
The back-and-forth is the social element to the exchange of information that occurs in a conversation, and while humans do this naturally—with some exceptions—AI language systems are universally bad at it.
Linguistics and computer science researchers at Tufts University have now discovered some of the root causes of this shortfall in AI conversational skills and point to possible ways to make them better conversational partners.
When humans interact verbally, for the most part they avoid speaking simultaneously, taking turns to speak and listen. Each person evaluates many input cues to determine what linguists call “transition relevant places” or TRPs. TRPs occur often in a conversation. Many times we will take a pass and let the speaker continue. Other times we will use the TRP to take our turn and share our thoughts.
JP de Ruiter, professor of psychology and computer science, says that for a long time it was thought that the “paraverbal” information in conversations—the intonations, lengthening of words and phrases, pauses, and some visual cues—were the most important signals for identifying a TRP.
“That helps a little bit,” says de Ruiter, “but if you take out the words and just give people the prosody—the melody and rhythm of speech that comes through as if you were talking through a sock—they can no longer detect appropriate TRPs.”
Do the reverse and just provide the linguistic content in a monotone speech, and study subjects will find most of the same TRPs they would find in natural speech.
“What we now know is that the most important cue for taking turns in conversation is the language content itself. The pauses and other cues don’t matter that much,” says de Ruiter.
AI is great at detecting patterns in content, but when de Ruiter, graduate student Muhammad Umair, and research assistant professor of computer science Vasanth Sarathy tested transcribed conversations against a large language model AI, the AI was not able to detect appropriate TRPs anywhere near the capability of humans.
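One simple way to probe this kind of judgment is sketched below: ask a pretrained language model how strongly it expects the current turn to end at a candidate point. This is an illustrative probe using an off-the-shelf GPT-2 model, not the evaluation protocol the Tufts team used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative probe (not the Tufts study's method): in a one-line-per-turn
# transcript, treat the model's probability of a newline as its estimate
# that the current speaker's turn could end here.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
NEWLINE_ID = tok.encode("\n")[0]

def end_of_turn_score(dialogue_so_far: str) -> float:
    ids = tok(dialogue_so_far, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]   # distribution over next token
    return torch.softmax(next_token_logits, dim=-1)[NEWLINE_ID].item()

# A plausible transition-relevant place vs. an obviously unfinished utterance
print(end_of_turn_score("A: Do you want to grab lunch later?"))
print(end_of_turn_score("A: Do you want to"))
```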
The reason stems from what the AI is trained on. Large language models, including the most advanced ones such as ChatGPT, have been trained on a vast dataset of written content from the internet—Wikipedia entries, online discussion groups, company websites, news sites—just about everything. What is missing from that dataset is any significant amount of transcribed spoken conversational language, which is unscripted, uses simpler vocabulary and shorter sentences, and is structured differently than written language.
AI was not “raised” on conversation, so it does not have the ability to model or engage in conversation in a more natural, human-like manner.
The researchers thought that it might be possible to take a large language model trained on written content and fine-tune it with additional training on a smaller set of conversational content so it can engage more naturally in a novel conversation. When they tried this, they found that there were still some limitations to replicating human-like conversation.
The researchers caution that there may be a fundamental barrier to AI carrying on a natural conversation. “We are assuming that these large language models can understand the content correctly. That may not be the case,” said Sarathy. “They’re predicting the next word based on superficial statistical correlations, but turn taking involves drawing from context much deeper into the conversation.”
“It’s possible that the limitations can be overcome by pre-training large language models on a larger body of naturally occurring spoken language,” said Umair, whose PhD research focuses on human-robot interactions and is the lead author on the studies. “Although we have released a novel training dataset that helps AI identify opportunities for speech in naturally occurring dialogue, collecting such data at a scale required to train today’s AI models remains a significant challenge. There is just not nearly as much conversational recordings and transcripts available compared to written content on the internet.”
The study results were presented at the Empirical Methods in Natural Language Processing (EMNLP) 2024 conference, held in Miami from November 11 to 17, and posted on arXiv.
Article Title
Large Language Models Know What To Say But Not When To Speak