Tuesday, April 16, 2024

 

The South's aging water infrastructure is getting pounded by climate change. Fixing it is also a struggle

water pipe
Credit: Pixabay/CC0 Public Domain

Climate change is threatening America's water infrastructure as intensifying storms deluge communities and droughts dry up freshwater supplies in regions that aren't prepared.

The storms that swept through the South April 10-11, 2024, illustrated some of the risks: In New Orleans, rain fell much faster than the city's pumps could remove it. A water line broke during the storm near Hattiesburg, Mississippi. Other communities faced power outages and advisories to boil water for safety before using it.

We study infrastructure resilience and sustainability and see a crisis growing, particularly in the U.S. Southeast, where aging water supply systems and stormwater infrastructure are leaving more communities at risk as weather becomes more extreme.

To find the best solutions and build resilient infrastructure, communities need to recognize both the threats in a warming world and the obstacles to managing them.

What a water crisis looks like

Water crises can be caused by either too much or too little water, and they can challenge drinking water systems in unexpected ways.

For much of the past decade, parts of northern and central Alabama have experienced significant droughts. In addition, wells dug to provide water have run dry, as water tables dropped from a combination of drought and overuse.

New Orleans' water supply was threatened by drought in another way in 2023: Saltwater from the Gulf of Mexico intruded farther than normal up the Mississippi River because the river's flow had slowed.

At the same time, torrential rain events increasingly have overwhelmed stormwater systems and threatened drinking water supplies. As global temperatures rise, the oceans heat up, and that warmer water provides more moisture to feed powerful storms.

An example of how extreme the situation can get has been playing out in Jackson, Mississippi, a city of nearly 150,000 residents. Jackson's water system had been plagued with leaks and pipe breaks for over a decade before 2022, when intense flooding overwhelmed the system, leaving most residents with little or no water for days.

Even before the flood, Jackson residents had been advised to boil their water before drinking it. Repairs are now underway with the aid of US$800 million in federal tax dollars, but questions remain about how to keep the system maintained in the future. The April storm hit the region again with damaging winds, rain and power outages.

The fragility of aging water infrastructure is evident in many communities. The American Society of Civil Engineers' U.S. Infrastructure Report Card in 2021 estimated that a water main breaks every two minutes somewhere in the U.S., losing 6 billion gallons of treated water a day. The engineers gave U.S. municipal water systems overall a grade of C-minus.
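Those per-minute and per-day rates add up quickly. As a rough check, here is a minimal back-of-the-envelope sketch in Python; the two input rates are the ASCE's, while the annual totals are derived here rather than quoted from the report:

```python
# Back-of-the-envelope check of the ASCE 2021 Report Card figures.
# The two input rates come from the report; the yearly totals are derived.
MINUTES_PER_YEAR = 60 * 24 * 365               # 525,600 minutes

breaks_per_year = MINUTES_PER_YEAR / 2         # one break every two minutes
gallons_per_year = 6_000_000_000 * 365         # 6 billion gallons lost per day

print(f"Water main breaks per year: {breaks_per_year:,.0f}")         # ~262,800
print(f"Treated water lost per year: {gallons_per_year:,} gallons")  # ~2.19 trillion
```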

Flood protection infrastructure earned even lower grades: U.S. levees and dams both received D grades, along with a warning that expanding development means more people and property are downstream and relying on levees and dams to function.

Challenge 1: Many stakeholders; who decides?

Today's infrastructure ranges from brick and mortar facilities to electronic networks—each with varying needs, goals, responsibilities and vulnerabilities to climate change.

Moreover, infrastructure often functions interdependently. If one asset fails, such as a pipeline or the computer system that controls a treatment plant, the damage can cascade to other systems. For example, untreated wastewater discharged into a stream because of a system failure can affect drinking water supplies for communities downstream.

Neighborhoods across the New Orleans area flooded on April 10 as the region’s pumps couldn’t keep up with the rainfall. Credit: Reed Timmer

Water issues cut across different levels of government, laws and regulations, and technical and academic expertise, requiring partnerships that can be difficult to govern. That can put different government agencies into conflict as disputes develop over regulatory control and responsibility, particularly between federal, state and local governments.

Challenge 2: Past decisions affect future choices

In many areas, water infrastructure built over the centuries has shaped subsequent development decisions, available resources and land use patterns, including the location of new homes, transportation facilities and businesses.

Today, that infrastructure may also be threatened by climate change in ways its developers never imagined.

More intense rainfall events have made long-standing flood maps obsolete in some areas, and areas never considered at risk of flooding before are now flooding regularly. This is especially true in coastal areas where storms may be coupled with abnormally high tides, sea level rise and subsidence.

Challenge 3: Who pays?

Questions about who pays for infrastructure improvements, or who decides project priorities, can also generate conflict.

Infrastructure is expensive. A single project, such as replacing water pipes or a treatment facility, will involve significant design and construction costs, as well as maintenance and repairs that many poorer communities struggle to afford.

The American Society of Civil Engineers in 2021 estimated the difference between infrastructure investments of all types needed over the decade of the 2020s ($5.9 trillion) and infrastructure work planned and funded ($3.3 trillion) was $2.6 trillion. It expects the gap for just drinking water and wastewater investment to reach $434 billion by 2029.

Building new, climate-resilient infrastructure is beyond the financial capacity of many communities, particularly low-income communities.

The federal government has taken steps to provide more aid in recent years. The Bipartisan Infrastructure Law, passed in 2021, authorized $55 billion for drinking water, wastewater, water storage and water reuse projects. The Inflation Reduction Act, passed the following year, included $550 million to assist disadvantaged communities with water supply projects.

But those funds don't close the gap, and political pressure to reduce federal spending makes the future of federal support for infrastructure uncertain.

What can communities do?

Local communities, states and federal agencies need to reexamine the growing threats from aging infrastructure in a warming world and find new solutions. That doesn't just mean new engineering designs—it means thinking differently about governance, planning and financing, and societal goals.

Fixing water challenges might mean rebuilding infrastructure away from the threat, or building defenses against flooding. Some communities are experimenting with sponge landscapes and restoring wetlands to create natural environments that absorb excess rainfall to reduce flooding.

The challenge is not just which engineering solution to choose, but how to navigate the responsibilities of actually providing clean water to Americans as the climate continues to change.

Provided by The Conversation 

This article is republished from The Conversation under a Creative Commons license. Read the original article.



 

Games are the secret to learning math and statistics, says new research

math
Credit: CC0 Public Domain

Games may be the secret to learning numbers-based subjects like math and economics, according to new research.

Many students say they struggle with subjects like economics and statistics, with 83% of university courses in these subjects taught using a traditional lecturing approach.

However, new research has shown that including games in the teaching of these subjects can significantly increase engagement and satisfaction, while sharply cutting the number of students who fail their course.

Assistant Professor Joshua Fullard of Warwick Business School, who led the research, commented, "This research backs up what we already know—that traditional lecturing is not the best approach for learning, even in numbers-based subjects like economics or statistics."

"The effects of games on students are not small or limited to some people in the class. Applied across a college or university, the increased rates of student success would result in hundreds of students not failing, achieving higher grades and being more satisfied in their learning at the same time."

In the research, two groups of students went through the same course: one incorporated games into its learning, while the other received traditional teaching only.

The study found that the group who included games achieved significantly better grades, with the average exam score up by 7%.

Overall, the median student in the group with the games achieved 69%, as opposed to the median student in the other group, who achieved 60%—almost the difference between a 2:1 and a first in their degree.

The rate of failure for students who played games was also far lower, at just 7%, while almost a fifth of students in the other group failed. This suggests that games benefit all the students in the class, not only those who achieve higher grades.

The students who used games also had a much higher rate of student satisfaction, as well as higher attendance to lectures and seminars.

One reason why more teachers and lecturers don't use games in their teaching is the pressure they are under, with lots of ground to cover and only limited time to achieve this.

The new research suggests several short, easy-to-implement activities that improve engagement without educators having to sacrifice hours of teaching time.

More information: Using games to improve students' engagement and understanding of statistics in higher education. libjournals.mtsu.edu/index.php … ticle/view/2475/1459


Provided by University of Warwick

 

AI's new power of persuasion: Study shows LLMs can exploit personal information to change your mind

Overview of the experimental workflow. (A) Participants fill in a survey about their demographic information and political orientation. (B) Every 5 minutes, participants are randomly assigned to one of four treatment conditions. The two players then debate for 10 minutes on an assigned proposition, randomly holding the PRO or CON standpoint as instructed. (C) After the debate, participants fill out another short survey measuring their opinion change. Finally, they are debriefed about their opponent's identity. Credit: arXiv (2024). DOI: 10.48550/arxiv.2403.14380

A new EPFL study has demonstrated the persuasive power of large language models, finding that participants debating GPT-4 with access to their personal information were far more likely to change their opinion compared to those who debated humans.

"On the internet, nobody knows you're a dog." That's the caption to a famous 1990s cartoon showing a large dog with his paw on a computer keyboard. Fast forward 30 years, replace "dog" with "AI" and this sentiment was a key motivation behind a new study to quantify the persuasive power of today's  (LLMs).

"You can think of all sorts of scenarios where you're interacting with a language model although you don't know it, and this is a fear that people have—on the internet are you talking to a dog or a chatbot or a real human?" asked Associate Professor Robert West, head of the Data Science Lab in the School of Computer and Communication Sciences. "The danger is superhuman like chatbots that create tailor-made, convincing arguments to push false or misleading narratives online."

AI and personalization

Early work has found that language models can generate content perceived as at least on par with, and often more persuasive than, human-written messages. However, there is still limited knowledge about LLMs' persuasive capabilities in direct conversations with humans, and about how personalization—knowing a person's gender, age and other personal details—can improve their performance.

"We really wanted to see how much of a difference it makes when the AI model knows who you are (personalization)—your age, gender, ethnicity, education level, employment status and —and this scant amount of information is only a proxy of what more an AI model could know about you through social media, for example," West continued.

Human v AI debates

In a pre-registered study, the researchers recruited 820 people to participate in a controlled trial in which each participant was randomly assigned a topic and one of four treatment conditions: debating a human with or without personal information about the participant, or debating an AI chatbot (OpenAI's GPT-4) with or without personal information about the participant.
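As a sketch of the shape of that design, the following shows a 2x2 random assignment; the condition labels here are illustrative, not the study's exact terminology:

```python
import random

# Hypothetical sketch of the 2x2 design described above: opponent type
# crossed with access to the participant's personal information.
OPPONENTS = ["human", "gpt-4"]
PERSONALIZED = [False, True]
CONDITIONS = [(o, p) for o in OPPONENTS for p in PERSONALIZED]

def assign_conditions(participant_ids):
    """Randomly assign each participant to one of the four conditions."""
    return {pid: random.choice(CONDITIONS) for pid in participant_ids}

print(assign_conditions(["p1", "p2", "p3", "p4"]))
```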

This setup differed substantially from previous research in that it enabled a direct comparison of the persuasive capabilities of humans and LLMs in real conversations, providing a framework for benchmarking how state-of-the-art models perform in online environments and the extent to which they can exploit personal data.

Their article, "On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial," posted to the arXiv preprint server, explains that the debates were structured based on a simplified version of the format commonly used in competitive academic debates, and participants were asked before and afterwards how much they agreed with the debate proposition.

The results showed that participants who debated GPT-4 with access to their personal information had 81.7% higher odds of increased agreement with their opponents compared to participants who debated humans. Without personalization, GPT-4 still outperformed humans, but the effect was far lower.
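Note that "81.7% higher odds" is a statement about odds, not probabilities. A minimal sketch of the distinction, using made-up agreement rates purely for illustration (not the study's measured numbers):

```python
def odds(p):
    """Convert a probability of increased agreement into odds."""
    return p / (1 - p)

# Illustrative inputs only -- NOT figures reported by the study.
p_vs_human = 0.30  # hypothetical rate of increased agreement vs. a human
p_vs_ai = 0.37     # hypothetical rate vs. GPT-4 with personal information

odds_ratio = odds(p_vs_ai) / odds(p_vs_human)
print(f"Odds ratio: {odds_ratio:.2f}")  # 1.37 for these made-up inputs
# The study's reported effect corresponds to an odds ratio of roughly 1.82,
# i.e., 81.7% higher odds of increased agreement.
```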

Cambridge Analytica on steroids

Not only are LLMs able to effectively exploit personal information to tailor their arguments and out-persuade humans in online conversations through microtargeting, they do so far more effectively than humans.

"We were very surprised by the 82% number and if you think back to Cambridge Analytica, which didn't use any of the current tech, you take Facebook likes and hook them up with an LLM, the Language Model can personalize its messaging to what it knows about you. This is Cambridge Analytica on steroids," said West.

"In the context of the upcoming U.S. elections, people are concerned because that's where this kind of technology is always first battle tested. One thing we know for sure is that people will be using the power of large language models to try to swing the election."

One interesting finding of the research was that when a human was given the same personal information as the AI, they didn't seem to make effective use of it for persuasion. West argues that this should be expected—AI models are consistently better because they are almost every human on the internet put together.

The models have learned through online patterns that a certain way of making an argument is more likely to lead to a persuasive outcome. They have read many millions of Reddit, Twitter and Facebook threads, and been trained on books and papers from psychology about persuasion. It's unclear exactly how a model leverages all this information but West believes this is a key direction for future research.

"LLMs have shown signs that they can reason about themselves, so given that we are able to interrogate them, I can imagine that we could ask a model to explain its choices and why it is saying a precise thing to a particular person with particular properties. There's a lot to be explored here because the models may be doing things that we don't even know about yet in terms of persuasiveness, cobbled together from many different parts of the knowledge that they have."

More information: Francesco Salvi et al, On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial, arXiv (2024). DOI: 10.48550/arxiv.2403.14380


Journal information: arXiv 


 

AI-generated models could bring more diversity to the fashion industry—or leave it with less

Fashion model Alexsandrah poses with a computer showing an AI-generated image of her, in London, Friday, March 29, 2024. The use of computer-generated supermodels has complicated implications for diversity. Although AI modeling agencies—some of them Black-owned—can render models of all races, genders and sizes at the click of a finger, real models of color who have historically faced higher barriers to entry may be put out of work. Credit: AP Photo/Kirsty Wigglesworth

London-based model Alexsandrah has a twin, but not in the way you'd expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used—just like a human model would.

Alexsandrah says she and her alter-ego mirror each other "even down to the baby hairs." And it is yet another example of how AI is transforming creative industries—and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions that in turn reduces fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that the technology may push human models—and other professionals like makeup artists and photographers—out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

"Fashion is exclusive, with limited opportunities for  to break in," said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers' rights in the fashion industry. "I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry's declared intentions and their real actions."

Women of color in particular have long faced higher barriers to entry in modeling and AI could upend some of the gains they've made. Data suggests that women are more likely to work in occupations in which the technology could be applied, and are more at risk of displacement than men.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

"We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such," Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl's and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy's said their respective companies do not use AI models, although Walmart clarified that "suppliers may have a different approach to photography they provide for their products but we don't have that information."

Nonetheless, companies that generate AI models are finding a demand for the technology, including Lalaland.ai, which was co-founded by Michael Musandu after he grew frustrated by the absence of clothing models who looked like him.

"One model does not represent everyone that's actually shopping and buying a product," he said. "As a person of color, I felt this painfully myself."

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

Fashion model Alexsandrah poses for a photograph, in London, Friday, March 29, 2024. The use of computer-generated supermodels has complicated implications for diversity. Although AI modeling agencies—some of them Black-owned—can render models of all races, genders and sizes at the click of a finger, real models of color who have historically faced higher barriers to entry may be put out of work. Credit: AP Photo/Kirsty Wigglesworth

And if brands "are serious about inclusion efforts, they will continue to hire these models of color," he added.

London-based model Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for a Black computer-generated model named Shudu, created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, designed Shudu in 2017, described on Instagram as "The World's First Digital Supermodel." But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu—who has been booked by Louis Vuitton and BMW—didn't take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu's backstory and portrays her voice for interviews.

Alexsandrah said she is "extremely proud" of her work with The Diigitals, which created her own AI twin: "It's something that even when we are no longer here, the future generations can look back at and be like, 'These are the pioneers.'"

But for Yve Edmond, a New York City area-based model who works with major retailers to check the fit of clothing before it's sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for "research" purposes. Edmond refused and later felt swindled—her modeling agency had told her she was being booked for a fitting, not to build an avatar.

"This is a complete violation," she said. "It was really disappointing for me."

But absent AI regulations, it's up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to "the Wild West."

That's why the Model Alliance is pushing for legislation like the bill being considered in New York state, where a provision of the Fashion Workers Act would require management companies and brands to obtain a model's clear written consent to create or use their digital replica, specify the amount and duration of compensation, and prohibit altering or manipulating the replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, who she describes as "somebody that I know, love, trust and is my friend." Wilson says they make sure any compensation for Alexsandrah's AI is comparable to what she would make in person.

Edmond, however, is more of a purist: "We have this amazing Earth that we're living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?"

© 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

 

Israel using AI to identify human targets, raising fears that innocents are being caught in the net

tracking people with AI
Credit: AI-generated image

A report by Jerusalem-based investigative journalists published in +972 magazine finds that AI targeting systems have played a key role in identifying—and potentially misidentifying—tens of thousands of targets in Gaza. This suggests that autonomous warfare is no longer a future scenario. It is already here, and the consequences are horrifying.

There are two technologies in question. The first, "Lavender", is an AI recommendation system designed to use algorithms to identify Hamas operatives as targets. The second, the grotesquely named "Where's Daddy?", is a system which tracks targets geographically so that they can be followed into their family residences before being attacked. Together, these two systems constitute an automation of the find-fix-track-target components of what is known by the modern military as the "kill chain".

Systems such as Lavender are not autonomous weapons, but they do accelerate the kill chain and make the process of killing progressively more autonomous. AI targeting systems draw on data from computer sensors and other sources to statistically assess what constitutes a potential target. Vast amounts of this data are gathered by Israeli intelligence through surveillance on the 2.3 million inhabitants of Gaza.

Such systems are trained on a set of data to produce the profile of a Hamas operative. This could be data about gender, age, appearance, movement patterns, social network relationships, accessories, and other "relevant features". They then work to match actual Palestinians to this profile by degree of fit. The category of what constitutes relevant features of a target can be set as stringently or as loosely as is desired. In the case of Lavender, it seems one of the key equations was "male equals militant". This has echoes of the infamous "all military-aged males are potential targets" mandate of the 2010 US drone wars in which the Obama administration identified and assassinated hundreds of people designated as enemies "based on metadata".
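The point about criteria being set "stringently or loosely" is, at bottom, a point about classification thresholds: lowering the profile-fit score required to count as a match expands the flagged set, and with it the rate of misidentifications. A generic, hypothetical sketch of that mechanism follows; it is not a reconstruction of any real system, whose internals remain unverified:

```python
# Generic illustration of threshold-based profile matching.
# All names and scores are invented for illustration only.
fit_scores = {
    "person_a": 0.95,
    "person_b": 0.81,
    "person_c": 0.62,
    "person_d": 0.40,
    "person_e": 0.15,
}

def flagged(scores, threshold):
    """Return everyone whose profile-fit score meets or exceeds the threshold."""
    return [pid for pid, score in scores.items() if score >= threshold]

print(flagged(fit_scores, 0.9))  # stringent: ['person_a']
print(flagged(fit_scores, 0.5))  # loose: ['person_a', 'person_b', 'person_c']
# A looser threshold sweeps in far more people, and with them more errors.
```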

What is different with AI in the mix is the speed with which targets can be algorithmically determined and the mandate of action this issues. The +972 report indicates that the use of this technology has led to the dispassionate annihilation of thousands of eligible—and ineligible—targets at speed and without much human oversight.

The Israel Defense Forces (IDF) were swift to deny the use of AI targeting systems of this kind. And it is difficult to verify independently whether and, if so, the extent to which they have been used, and how exactly they function. But the functionalities described by the report are entirely plausible, especially given the IDF's own boasts to be "one of the most technological organizations" and an early adopter of AI.

With military AI programs around the world striving to shorten what the US military calls the "sensor-to-shooter timeline" and "increase lethality" in their operations, why would an organization such as the IDF not avail themselves of the latest technologies?

The fact is, systems such as Lavender and Where's Daddy? are the manifestation of a broader trend which has been underway for a good decade and the IDF and its elite units are far from the only ones seeking to implement more AI-targeting systems into their processes.

When machines trump humans

Earlier this year, Bloomberg reported on the latest version of Project Maven, the US Department of Defense AI pathfinder program, which has evolved from being a sensor data analysis program in 2017 to a full-blown AI-enabled target recommendation system built for speed. As Bloomberg journalist Katrina Manson reports, the operator "can now sign off on as many as 80 targets in an hour of work, versus 30 without it".

Manson quotes a US army officer tasked with learning the system describing the process of concurring with the algorithm's conclusions, delivered in a rapid staccato: "Accept. Accept, Accept". Evident here is how the human operator is deeply embedded in digital logics that are difficult to contest. This gives rise to a logic of speed and increased output that trumps all else.

The efficient production of death is reflected also in the +972 account, which indicated an enormous pressure to accelerate and increase the production of targets and the killing of these targets. As one of the sources says, "We were constantly being pressured: bring us more targets. They really shouted at us. We finished [killing] our targets very quickly".

Built-in biases

Systems like Lavender raise many ethical questions pertaining to training data, biases, accuracy, error rates and, importantly, questions of automation bias. Automation bias cedes all authority, including moral authority, to the dispassionate interface of statistical processing.

Speed and lethality are the watchwords for military tech. But in prioritizing AI, the scope for human agency is marginalized. The logic of the system requires this, owing to the comparatively slow cognitive systems of the human. It also removes the human sense of responsibility for computer-produced outcomes.

I've written elsewhere how this complicates notions of control (at all levels) in ways that we must take into consideration. When AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow.

The problem of speed and acceleration also produces a general sense of urgency, which privileges action over non-action. This turns categories such as "collateral damage" or "military necessity", which should serve as a restraint to violence, into channels for producing more violence.

I am reminded of the military scholar Christopher Coker's words: "we must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world". It is clear that military AI shapes our view of the world. Tragically, Lavender gives us cause to realize that this view is laden with violence.

Provided by The Conversation 

This article is republished from The Conversation under a Creative Commons license. Read the original article.


 

Digital tools, including AI, alter consumer trust and purchasing decisions, says research

ai
Credit: CC0 Public Domain

Colleen Harmeling, a Florida State University College of Business researcher, points to photo filters, overly edited photos and other distortions of user-generated content as impediments to consumer trust. In turn, they are potential barriers to the performance of products that users present and discuss online.

Harmeling, FSU's Dr. Persis E. Rockwood associate professor of marketing and the co-director of the doctoral program's major in marketing, argues that consumers who rely on blog posts, reviews, testimonials and other user-generated content to inform their purchase decisions require digital platforms that minimize suspicion of missing or misrepresented content.

"The availability of certain features, including photo filters or the ability to delete content once you've posted it, dramatically impacts whether user-generated content affects brand performance or firm performance," Harmeling said. "Those features don't even have to be used to affect product performance. The mere presence of such features in a digital space affects  in that content for their purchase decision."

Harmeling's study with Rachel Hochstein, an assistant professor of marketing at the University of Missouri-Kansas City, and Taylor Perko, who served as an FSU research assistant while earning her MBA from the College of Business, appears in the Journal of the Academy of Marketing Science.

The authors assert that their findings reinforce the notion that digital platforms "serve as custodians of trust in modern society."

Harmeling credits the study largely to Hochstein, who studied under her as a marketing doctoral student at FSU before landing her faculty post. Hochstein compiled a database of nearly 20 years of digital platform innovations, recording the timing of changes in user features such as photo filters across online platforms, including Facebook, Amazon and Twitter, now known as X. She combined the database with data from 77 published studies on the effects of consumer-generated content on company performance.

Harmeling called it "a very significant study" that "I think has profound implications for how and what we trust as consumers, for how marketers engage with their consumers and how to increase the trustworthiness of the information that's being circulated about the brand."

"We're very proud of Colleen, Rachel and Taylor's work to develop strategies that promote trust and offset consumers' concerns in a setting where it's needed most," said Michael Brady, director of Dr. Persis E. Rockwood School of Marketing and the Bob Sasser Professor of Marketing in the FSU College of Business. "Their study exemplifies our world-renowned faculty's cutting-edge research, which asks crucial questions and seeks achievable solutions for the benefit of students, consumers and industries."

The results of the study showcased how those features influenced trust, distrust and purchasing decisions.

"We found that trust can be violated if people believe that their digitally presented content isn't what unfolds in reality—that's where the photo filters come into play—or if there is  from their observable example," Harmeling said. "If we suspect missing data, then we start to call into question whether the online user's experience with a product is reflective of the population."

The researchers wrote that their findings also carry implications for social media influencers, who might want to avoid using photo filters or overly edited photos and videos. "Focusing instead on live videos or real-time 'story' posts may be perceived as more trustworthy," they noted.

More importantly, the researchers urged marketers to help consumers distinguish real from fake and missing from complete content. Such judgments "require the ability to identify patterns in a body of user-generated content, recognize anomalies in the pattern and estimate the impact of truth-altering features on each online platform," they wrote.

They added, "Helping consumers make these judgments should be a priority of marketers because suspicion of even single posts can erode trust in the entire body of user-generated content and sow uncertainty in consumers' purchase decisions."

Harmeling also emphasized policy implications, especially amid the rapid proliferation of artificial intelligence. The European Union's AI Act, scheduled to take effect this year, will require labeling deepfake AI-generated photos, videos or audio of existing people and places as artificially manipulated.

"We think of photo filtering as one of the first movers in AI, changing reality to something slightly enhanced," Harmeling said. "Now we have AI that's much more advanced—replicating people's voices, for example. I think there are numerous implications for this research and for further research down this path."

More information: Rachel E. Hochstein et al, Toward a theory of consumer digital trust: Meta-analytic evidence of its role in the effectiveness of user-generated content, Journal of the Academy of Marketing Science (2023). DOI: 10.1007/s11747-023-00982-y

 

US to grant Samsung up to $6.4 bn for chip plants


South Korean semiconductor giant Samsung will build a new chip facility in Texas and expand its existing one, according to the agreement.

The United States announced on Monday grants of up to $6.4 billion to South Korean semiconductor giant Samsung to produce cutting-edge chips in Texas.

The award is the latest from the US government as it looks to cement its lead in the semiconductor industry—especially for chips needed for the development of AI—both on national security grounds and in the face of competition with China.

President Joe Biden's administration has previously approved billions in grants to US titan Intel and Taiwan Semiconductor Manufacturing Company (TSMC) as it tries to avoid the prospect of shortages of semiconductors—the lifeblood of the modern global economy.

"The U.S. Department of Commerce and Samsung Electronics (Samsung) have signed a non-binding preliminary memorandum of terms (PMT) to provide up to $6.4 billion in direct funding under the CHIPS and Science Act," said a statement published by the Department of Commerce.

Samsung "is expected to invest more than $40 billion in the region in the coming years, and the proposed investment would support the creation of over 20,000 jobs," it said.

Chips are crucial in powering everything from smartphones to fighter jets.

They are also increasingly in demand by automakers, especially for electric vehicles, adding to the pressure to raise production.

The global chip industry is currently dominated by just a few firms, including TSMC and US-based NVIDIA.

That means the United States is highly dependent on Asia for chips and is vulnerable to shocks to global supply chains, especially during geopolitical crises that affect places such as Taiwan.

This has fueled a US push to strengthen production.

The CHIPS and Science Act, passed in 2022, calls for tens of billions of dollars in funding to overhaul the US semiconductor industry, with the idea that making public money available for this purpose will lure private investment.

The Samsung agreement will "cement central Texas's role as a state-of-the-art semiconductor ecosystem," Biden said in a statement.

"These facilities will support the production of some of the most powerful chips in the world, which are essential to advanced technologies like  and will bolster U.S. national security."

Under the latest agreement, Samsung will not only build a new facility to produce advanced chips but also expand its existing facility in Texas, according to the Department of Commerce.

"We're not just expanding production facilities; we're strengthening the local semiconductor ecosystem and positioning the U.S. as a global semiconductor manufacturing destination," Samsung's Kye Hyun Kyung said in the Commerce Department statement.

US officials revealed this month that a preliminary agreement with TSMC would see the company receive up to $6.6 billion in direct funding and up to another $5 billion in loans under the CHIPS Act.

In March, Biden unveiled almost $20 billion in grants and loans for Intel's domestic chip-making plants, his administration's biggest investment yet in the sector.

The United States has also awarded funding to GlobalFoundries, BAE Systems Electronic Systems and Microchip Technology under the 2022 law.

In February, Commerce Secretary Gina Raimondo expressed confidence that the United States could house the entire silicon supply chain for making advanced chips.

"The brutal fact is, the United States cannot lead the world as a technology and innovation leader on such a shaky foundation," she said during a speech in Washington.

"We need to make these chips in America."

© 2024 AFP