Tuesday, July 29, 2025

AI Ray-Ban Meta glasses help EssilorLuxottica boost sales

Paris (AFP) – EssilorLuxottica, the world's top maker of eyeglasses, said Monday that a tripling of sales of its Ray-Ban Meta AI-connected glasses helped drive an increase in profits.

Issued on: 28/07/2025 - FRANCE24

EssilorLuxottica says it is leading the transformation of glasses with AI-enabled products like Ray-Bans © Julie JAMMOT / AFP

The group's revenue climbed by 5.5 percent to 14 billion euros ($16.2 billion) in the first half of the year, with net profit edging 1.6 percent higher to 1.4 billion euros.

EssilorLuxottica's chief executive Francesco Milleri said results showed the group is "keeping pace with our growth targets despite a volatile environment".

As with other European companies, the weak dollar weighed on EssilorLuxottica's performance in North America, nearly wiping out the region's growth in the second quarter.

EssilorLuxottica said AI glasses gained momentum in the first half of the year, with Ray-Ban Meta more than tripling in revenue year-over-year. It also announced new AI-enabled Oakley glasses in June.


"We are leading the transformation of glasses as the next computing platform, one where AI, sensory tech and a data-rich healthcare infrastructure will converge to empower humans and unlock our full potential," said Milleri.

The Ray-Ban glasses, equipped with a camera, speakers and microphones, allow wearers to prompt Meta's AI without reaching for their phone by saying "Hey Meta".

The company did not provide sales figures for the glasses or comment on the minority stake Meta has taken in it, which was disclosed earlier this month by Bloomberg.

The company is a leading manufacturer of corrective lenses as well as frames, having acquired the rights to manufacture eyewear for numerous luxury brands including Giorgio Armani, Burberry, Chanel, Dolce&Gabbana, Prada and Versace.

© 2025 AFP

AI bands signal new era for music business

New York (AFP) – A rising tide of artificial intelligence (AI) bands is ushering in a new era where work will be scarcer for musicians.


Issued on: 29/07/2025 - FRANCE24

AI bands are on the rise © Rodrigo Oropeza / AFP/File



Whether it's Velvet Sundown's 1970s-style rock or country music projects "Aventhis" and "The Devil Inside," bands whose members are pure AI creations are seeing more than a million plays on streaming giant Spotify.

No major streaming service clearly labels tracks that come entirely from AI, except France's Deezer.

Meanwhile, the producers of these songs tend to be unreachable.

"I feel like we're at a place where nobody is really talking about it, but we are feeling it," said music producer, composer and performer Leo Sidran.


"There is going to be a lot of music released that we can't really tell who made it or how it was made."

The Oscar-winning artist sees the rise of AI music as perhaps a sign of how "generic and formulaic" genres have become.

AI highlights the chasm between music people listen to "passively" while doing other things and "active" listening in which fans care about what artists convey, said producer and composer Yung Spielburg on the Imagine AI Live podcast.

Spielburg believes musicians will win out over AI with "active" listeners but will be under pressure when it comes to tunes people play in the background while cooking dinner or performing mundane tasks.

If listeners can't discern which tunes are AI-made, publishers and labels will likely opt for synthetic bands that don't earn royalties, Spielburg predicted.

"AI is already in the music business and it's not going away because it is cheap and convenient," said Mathieu Gendreau, associate professor at Rowan University in New Jersey, who is also a music industry executive.

"That will make it even more difficult for musicians to make a living."

Music streaming platforms already fill playlists with mood music attributed to artists about whom no information can be found, according to University of Rochester School of Music professor Dennis DeSantis.

Meanwhile, AI-generated soundtracks have become tempting, cost-saving options in movies, television shows, ads, shops, elevators and other venues, DeSantis added.

AI takes all?

Composer Sidran says he and his music industry peers have seen a sharp slowdown in work coming their way since late last year.

"I suspect that AI is a big part of the reason," said Sidran, host of "The Third Story" podcast.

"I get the feeling that a lot of the clients that would come to me for original music, or even music from a library of our work, are using AI to solve those problems."

Technology has repeatedly helped shape the music industry, from electric guitars and synthesizers to multi-track recording and voice modulators.

Unlike such technologies that gave artists new tools and techniques, AI could lead to the "eradication of the chance of sustainability for the vast majority of artists," warned George Howard, a professor at the prestigious Berklee College of Music.

"AI is a far different challenge than any other historical technological innovation," Howard said. "And one that will likely be zero-sum."

Howard hopes courts will side with artists in the numerous legal battles with generative AI giants whose models imitate their styles or works.

Gendreau sees AI music as being here to stay and teaches students to be entrepreneurs as well as artists in order to survive in the business.

Sidran advises musicians to highlight what makes them unique, avoiding the expected in their works because "AI will have done it."

And, at least for now, musicians should capitalize on live shows where AI bands have yet to take the stage.

© 2025 AFP



How can people fight back against realistic AI deepfakes? More AI, experts say


Copyright AP Photo/Elise Amendola, File


By Anna Desmarais & AP
Published on 28/07/2025

The best tool to fight back against fake videos generated by artificial intelligence is AI itself, experts say.


Artificial intelligence (AI) will be needed to fight back against realistic AI-generated deepfakes, experts say.

The World Intellectual Property Organisation (WIPO) defines a deepfake as an AI technique that synthesises media by either superimposing human features on another body or manipulating sounds to generate a realistic video.

This year, high-profile deepfake scams have targeted US Secretary of State Marco Rubio, Italian defense minister Guido Crosetto, and several celebrities, including Taylor Swift and Joe Rogan, whose voices were used to promote a scam that promised people government funds.

A deepfake was created every five minutes in 2024, according to a recent report from the think tank Entrust Cybersecurity Institute.

What impacts do deepfakes have?

Deepfakes can have serious consequences, such as the disclosure of sensitive information to scammers impersonating government officials who sound like Rubio or Crosetto.

“You’re either trying to extract sensitive secrets or competitive information or you’re going after access, to an email server or other sensitive network,” Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.

Synthetic media can also aim to alter behaviour, like a scam that used the voice of then-US President Joe Biden to convince voters not to participate in their state's elections last year.


"While deepfakes have applications in entertainment and creativity, their potential for spreading fake news, creating non-consensual content and undermining trust in digital media is problematic," the European Parliament wrote in a research briefing.

The European Parliament predicted that 8 million deepfakes will be shared throughout the European Union this year, up from 500,000 in 2023.

What are some ways AI is fighting back?

AI tools can be trained through binary classification to label the data fed into them as either real or fake.

For example, researchers at the University of Luxembourg said they presented AI with a series of images with either a real or a fake tag on them so that the model gradually learned to recognise patterns in fake images.

“Our research found that ... we could focus on teaching them to look for real data only,” researcher Enjie Ghorbel said. “If the data examined doesn’t align with the patterns of real data, it means that it’s fake".
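The binary-classification setup described above can be sketched in miniature. The toy example below is purely illustrative (the features, data and model are invented, not the Luxembourg team's actual system): a tiny logistic-regression detector learns to separate hand-made "real" and "fake" feature vectors.

```python
import math
import random

def train_detector(samples, labels, lr=0.5, epochs=200):
    """Train a minimal logistic-regression real-vs-fake classifier.

    samples: feature vectors (stand-ins for image statistics);
    labels:  1 for real, 0 for fake.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(real)
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Return the model's probability that x is real."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: pretend fakes show more high-frequency noise
# (feature 0) and less texture consistency (feature 1) than real images.
random.seed(0)
real = [[random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)] for _ in range(50)]
fake = [[random.gauss(0.7, 0.05), random.gauss(0.3, 0.05)] for _ in range(50)]
w, b = train_detector(real + fake, [1] * 50 + [0] * 50)

print(predict(w, b, [0.2, 0.8]) > 0.5)  # real-looking features -> True
print(predict(w, b, [0.7, 0.3]) > 0.5)  # fake-looking features -> False
```

Real detectors work the same way in principle, only with deep networks and millions of labelled media samples rather than two hand-picked features.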



Another solution proposed by Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security, is a system that analyses millions of data points in any person’s speech to quickly identify irregularities.

The system can be used during job interviews or other video conferences to detect if the person is using voice cloning software, for instance.
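A system of the kind Balasubramaniyan describes can be caricatured as anomaly scoring over speech features. The sketch below is hypothetical throughout (Pindrop's actual system is proprietary and far more sophisticated; the pitch values are invented): it flags a clip whose per-frame features drift too far from a speaker's enrolled baseline.

```python
import statistics

def speech_anomaly_score(enrolled_frames, test_frames):
    """Score how far a test clip drifts from a speaker's baseline.

    Both arguments are lists of per-frame feature values (here, a toy
    stand-in such as pitch in Hz). Returns the mean absolute z-score of
    the test frames against the enrolled distribution; higher means
    more irregular.
    """
    mu = statistics.fmean(enrolled_frames)
    sigma = statistics.stdev(enrolled_frames)
    return statistics.fmean(abs(f - mu) / sigma for f in test_frames)

# Enrolled baseline: pitch values (Hz) from genuine recordings.
baseline = [118, 121, 119, 122, 120, 117, 123, 119, 121, 120]

genuine = [119, 120, 122, 118, 121]   # consistent with the baseline
cloned = [134, 133, 135, 132, 134]    # shifted, as a crude clone might be

THRESHOLD = 3.0
print(speech_anomaly_score(baseline, genuine) < THRESHOLD)  # True: accept
print(speech_anomaly_score(baseline, cloned) > THRESHOLD)   # True: flag
```

Production systems compare thousands of such features per second of audio rather than a single pitch track, which is what lets them catch voice-cloning artifacts a human listener misses.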

Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, Balasubramaniyan said.

“You can take the defeatist view and say we’re going to be subservient to disinformation,” he said. “But that’s not going to happen".

The EU AI Act, which comes into force on August 1, requires that all AI-generated content, including deepfakes, be labelled so that users know when they have come across fake content online.

How to survive the explosion of AI slop




PNAS Nexus

Image: A deepfake in which the author inserted his own face (source in upper left) into an AI-generated image of an inmate in an orange jumpsuit. Credit: AI image created by Hany Farid.

In a Perspective, Hany Farid highlights the risk of manipulated and fraudulent images and videos, known as deepfakes, and explores interventions that could mitigate the harms deepfakes can cause. Farid explains that visually discriminating the real from the fake has become increasingly difficult and summarizes his research on digital forensic techniques, used to determine whether images and videos have been manipulated.

Farid celebrates the positive uses of generative AI, including helping researchers, democratizing content creation, and, in some cases, literally giving voice to those whose voice has been silenced by disability. But he warns against harmful uses of the technology, including non-consensual intimate imagery, child sexual abuse imagery, fraud, and disinformation. In addition, the existence of deepfake technology means that malicious actors can cast doubt on legitimate images by simply claiming the images are made with AI.

So, what is to be done? Farid highlights a range of interventions to mitigate such harms, including legal requirements to mark AI content with metadata and imperceptible watermarks, limits on what prompts should be allowed by services, and systems to link user identities to created content. In addition, social media content moderators should ban harmful images and videos. Furthermore, Farid calls for digital media literacy to be part of the standard educational curriculum.

Farid summarizes the authentication techniques that can be used by experts to sort the real from the synthetic, and explores the policy landscape around harmful content. Finally, Farid asks researchers to stop and question if their research output can be misused and if so, whether to take steps to prevent misuse or even abandon the project altogether. Just because something can be created does not mean it must be created.

Researchers create ‘virtual scientists’ to solve complex biological problems





Stanford Medicine





There may be a new artificial intelligence-driven tool to turbocharge scientific discovery: virtual labs.

Modeled after a well-established Stanford School of Medicine research group, the virtual lab is complete with an AI principal investigator and seasoned scientists.

“Good science happens when we have deep, interdisciplinary collaborations where people from different backgrounds work together, and often that’s one of the main bottlenecks and challenging parts of research,” said James Zou, PhD, associate professor of biomedical data science who led a study detailing the development of the virtual lab. “In parallel, we’ve seen this tremendous advance in AI agents, which, in a nutshell, are AI systems based on language models that are able to take more proactive actions.”

People often think of large language models, the type of AI harnessed in this study, as simple question-and-answer bots. “But these are systems that can retrieve data, use different tools, and communicate with each other and with us through human language,” Zou said. (The collaboration shown through these AI models is an example of agentic or agential AI, a structure of AI systems that work together to solve complex problems.)

The leap in capability gave Zou the idea to start training these models to mimic top-tier scientists in the same way that they think critically about a problem, research certain questions, pose different solutions based on a given area of expertise and bounce ideas off one another to develop a hypothesis worth testing. “There’s no shortage of challenges for the world’s scientists to solve,” said Zou. “The virtual lab could help expedite the development of solutions for a variety of problems.”

Already, Zou’s team has been able to demonstrate the AI lab’s potential after tasking the “team” with devising a better way to create a vaccine for SARS-CoV-2, the virus that causes COVID-19. And it took the AI lab only a few days.

A paper describing the findings of the study will be published July 29 in Nature. Zou and John Pak, PhD, a scientist at Chan Zuckerberg Biohub, are the senior authors of the paper. Kyle Swanson, a computer science graduate student at Stanford University, is the lead author.

Running a virtual lab

The virtual lab begins a research project just like any other human lab — with a problem to solve, presented by the lab’s leader. The human researcher gives the AI principal investigator, or AI PI, a scientific challenge, and the AI PI takes it from there.

“It’s the AI PI’s job to figure out the other agents and expertise needed to tackle the project,” Zou said. For the SARS-CoV-2 project, for instance, the PI agent created an immunology agent, a computational biology agent and a machine learning agent. And, in every project, no matter the topic, there’s one agent that assumes the role of critic. Its job is to poke holes, caution against common pitfalls and provide constructive criticism to other agents.

Zou and his team equipped the virtual scientists with tools and software systems, such as the protein modeling AI system AlphaFold, to better stimulate creative “thinking” skills. The agents even created their own wish list. “They would ask for access to certain tools, and we’d build it into the model to let them use it,” Zou said.

As research labs go, the virtual team runs a swift operation. Just like Zou’s research group, the virtual lab has regular meetings during which agents generate ideas and engage in a conversational back-and-forth. They also have one-on-one meetings, allowing lab members to meet with the PI agent individually to discuss ideas.

But unlike human meetings, these virtual gatherings take a few seconds or minutes. On top of that, AI scientists don’t get tired, and they don’t need snacks or bathroom breaks, so multiple meetings run in parallel.

“By the time I’ve had my morning coffee, they’ve already had hundreds of research discussions,” Zou said during the RAISE Health Symposium, during which he presented on this work.

Moreover, the virtual lab is an independent operation. Aside from the initial prompt, the main guideline consistently given to the AI lab members is budget-related, barring any extravagant or outlandish ideas that aren’t feasible to validate in the physical lab. Not one prone to micromanagement — in the real or virtual world — Zou estimates that he or his lab members intervene about 1% of the time.

“I don’t want to tell the AI scientists exactly how they should do their work. That really limits their creativity,” Zou said. “I want them to come up with new solutions and ideas that are beyond what I would think about.”

But that doesn’t mean they’re not keeping a close eye on what’s going on — each meeting, exchange and interaction in the virtual lab is captured via a transcript, allowing human researchers to track progress and redirect the project if needed.

SARS-CoV-2 and beyond

Zou’s team put the virtual lab to the test by asking it to devise a new basis for a vaccine against recent COVID-19 variants. Instead of opting for the tried-and-true antibody (a molecule that recognizes and attaches to a foreign substance in the body), the AI team opted for a more unorthodox approach: nanobodies, a fragment of an antibody that’s smaller and simpler.

“From the beginning of their meetings the AI scientists decided that nanobodies would be a more promising strategy than antibodies — and they provided explanations. They said nanobodies are typically much smaller than antibodies, so that makes the machine learning scientist’s job much easier, because when you computationally model proteins, working with smaller molecules means you can have more confidence in modeling and designing them,” Zou said.

So far, it seems like the AI team is onto something. Pak’s team took the nanobody structural designs from the AI researchers and created them in his real-world lab. Not only did they find that the nanobody was experimentally feasible and stable, they also tested its ability to bind to one of the new SARS-CoV-2 variants — a key factor in determining the effectiveness of a new vaccine — and saw that it clung tightly to the virus, more so than existing antibodies designed in the lab.

They also measured off-target effects, or whether the nanobody errantly binds to something other than the targeted virus, and found it didn’t stray from the COVID-19 spike protein. “The other thing that’s promising about these nanobodies is that, in addition to binding well to the recent COVID strain, they’re also good at binding to the original strain from Wuhan from five years ago,” Zou said, referring to the nanobody’s potential to ground a broadly effective vaccine. Now, Zou and his team are analyzing the nanobody’s ability to help create a new vaccine. And as they do, they’re feeding the experimental data back to the AI lab to further hone the molecular designs.

The research team is eager to apply the virtual lab to other scientific questions, and they’ve recently developed agents that act as sophisticated data analysts that can reassess previously published papers.

“The datasets that we collect in biology and medicine are very complex, and we’re just scratching the surface when we analyze those data,” Zou said. “Often the AI agents are able to come up with new findings beyond what the previous human researchers published on. I think that’s really exciting.”

This study was supported by the Knight-Hennessy Scholarship and the Stanford Bio-X Fellowship.

Stanford’s Human Centered AI Institute and Department of Biomedical Data Science also supported the work.

# # #

 

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu.

 

