AI web browser assistants raise serious privacy concerns
University College London
Popular generative AI web browser assistants are collecting and sharing sensitive user data, such as medical records and social security numbers, without adequate safeguards, finds a new study led by researchers from UCL and Mediterranea University of Reggio Calabria.
The study, which will be presented and published as part of the USENIX Security Symposium, is the first large-scale analysis of generative AI browser assistants and privacy. It uncovered widespread tracking, profiling, and personalisation practices that pose serious privacy concerns, with the authors calling for greater transparency and user control over data collection and sharing practices.
The researchers analysed nine of the most popular generative AI browser extensions, such as ChatGPT for Google, Merlin, and Copilot (not to be confused with the Microsoft app of the same name). These tools, which must be downloaded and installed before use, are designed to enhance web browsing with AI-powered features like summarisation and search assistance, but were found to collect extensive personal data from users’ web activity.
Analysis revealed that several assistants transmitted full webpage content – including any information visible on screen – to their servers. One assistant, Merlin, even captured form inputs such as online banking details or health data.
Extensions like Sider and TinaMind shared user questions and information that could identify them (such as their IP address) with platforms like Google Analytics, enabling potential cross-site tracking and ad targeting.
ChatGPT for Google, Copilot, Monica, and Sider demonstrated the ability to infer user attributes such as age, gender, income, and interests, and used this information to personalise responses, even across different browsing sessions.
Only one assistant, Perplexity, did not show any evidence of profiling or personalisation.
Dr Anna Maria Mandalari, senior author of the study from UCL Electronic & Electrical Engineering, said: “Though many people are aware that search engines and social media platforms collect information about them for targeted advertising, these AI browser assistants operate with unprecedented access to users’ online behaviour in areas of their online life that should remain private. While they offer convenience, our findings show they often do so at the cost of user privacy, without transparency or consent and sometimes in breach of privacy legislation or the company’s own terms of service.
“This data collection and sharing is not trivial. Besides the selling or sharing of data with third parties, in a world where massive data hacks are frequent, there’s no way of knowing what’s happening with your browsing data once it has been gathered.”
For the study, the researchers simulated real-world browsing scenarios by creating the persona of a ‘rich, millennial male from California’, which they used to interact with the browser assistants while completing common online tasks.
This included activities in the public (logged-out) space, such as reading online news, shopping on Amazon, and watching YouTube videos.
It also included activities in the private (logged-in) space, such as accessing a university health portal, logging into a dating service, and accessing pornography. The researchers assumed that users would not want this activity tracked, given its personal and sensitive nature.
During the simulation, the researchers intercepted and decrypted traffic between the browser assistants, their servers, and third-party trackers, allowing them to analyse in real time what data was flowing in and out. They also tested whether assistants could infer and remember user characteristics from browsing behaviour, by asking them to summarise webpages and then posing follow-up questions, such as ‘What was the purpose of the current medical visit?’ after a visit to an online health portal, to see whether they had retained personal data.
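The article does not name the researchers’ exact toolchain, but traffic interception of this kind is commonly done with a man-in-the-middle proxy such as the open-source mitmproxy. Below is a minimal sketch of an audit addon in that style; the sensitive-keyword list and analytics host names are illustrative assumptions, not the study’s actual detection rules.

```python
# Minimal sketch of a traffic-audit addon for mitmproxy.
# Run with: mitmdump -s audit_addon.py
# The keyword list and host set below are illustrative assumptions.
from mitmproxy import http

SENSITIVE_KEYWORDS = [b"ssn", b"diagnosis", b"account_number"]  # hypothetical
ANALYTICS_HOSTS = {"www.google-analytics.com", "analytics.google.com"}

class AssistantAudit:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        body = flow.request.raw_content or b""

        # Flag outbound requests to known third-party analytics endpoints.
        if host in ANALYTICS_HOSTS:
            print(f"[third-party] {flow.request.method} {flow.request.pretty_url}")

        # Flag request bodies that appear to carry sensitive page content.
        if any(kw in body.lower() for kw in SENSITIVE_KEYWORDS):
            print(f"[sensitive?] {len(body)} bytes sent to {host}")

addons = [AssistantAudit()]
```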
The experiments revealed that some assistants, including Merlin and Sider, continued recording activity even after the user switched to the private space, where they are supposed to stop.
The authors say the study highlights the urgent need for regulatory oversight of AI browser assistants in order to protect users’ personal data. Some assistants were found to violate US data protection laws such as the Health Insurance Portability and Accountability Act (HIPAA) and the Family Educational Rights and Privacy Act (FERPA) by collecting protected health and educational information.
The study was conducted in the US, so compliance with UK/EU data protection laws such as GDPR was not assessed, but the authors say the practices would likely violate those laws as well, given that privacy regulations in the UK and EU are more stringent.
The authors recommend that developers adopt privacy-by-design principles, such as local processing or explicit user consent for data collection.
Dr Aurelio Canino, an author of the study from UCL Electronic & Electrical Engineering and Mediterranea University of Reggio Calabria, said: “As generative AI becomes more embedded in our digital lives, we must ensure that privacy is not sacrificed for convenience. Our work lays the foundation for future regulation and transparency in this rapidly evolving space.”
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Big Help or Big Brother? Auditing Tracking, Profiling, and Personalization in Generative AI Assistants
Article Publication Date
13-Aug-2025
Now you see me, now you don’t: how subtle ‘sponsored content’ on social media tricks us into viewing ads
Scientists find that people mostly avoid social media ads when they see them, but many ads blend in seamlessly
Frontiers
How many ads do you see on social media? It might be more than you realize. Scientists studying how ads work on Instagram-style social media have found that people are not as good at spotting them as they think. If people recognized ads, they usually ignored them, but some, designed to blend in with your friends’ posts, flew under the radar.
“We wanted to understand how ads are really experienced in daily scrolling — beyond what people say they notice, to what they actually process,” said Maike Hübner, PhD candidate at the University of Twente, corresponding author of the article in Frontiers in Psychology. “It’s not that people are worse at spotting ads. It’s that platforms have made ads better at blending in. We scroll on autopilot, and that’s when ads slip through. We may even engage with ads on purpose, because they’re designed to reflect the trends or products our friends are talking about and of course we want to keep up. That’s what makes them especially hard to resist.”
The scientists wanted to test how much time people spent looking at sponsored versus organic posts, how they looked at different areas of these different posts, and how they behaved after realizing they were looking at sponsored content. They randomly assigned 152 participants, all of whom were regular Instagram users, to one of three mocked-up social media feeds, each of which was made up of 29 posts — eight ads and 21 organic posts.
They were asked to imagine that the feed was their own and to scroll through it as they normally would. Using eye-tracking software, the scientists measured fixations (the number of times a participant’s gaze stopped on different features of a post) and dwell time (how long those fixations lasted). A low dwell time suggests that someone merely noticed a feature, while a high dwell time might indicate they were paying attention to it. After each session, the scientists interviewed the participants about their experience.
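As a concrete illustration of these two measures, here is a minimal sketch of how fixation counts and dwell time per post element could be aggregated from eye-tracker output. The record format and the area-of-interest labels are assumptions for illustration, not the study’s actual schema.

```python
# Aggregate fixation counts and total dwell time per area of interest (AOI).
# The sample records and AOI labels below are hypothetical.
from collections import defaultdict

# Each record: (area_of_interest, fixation_duration_ms) for one fixation.
fixations = [
    ("disclosure_label", 120),
    ("call_to_action", 340),
    ("call_to_action", 280),
    ("image", 510),
]

counts = defaultdict(int)    # number of fixations per element
dwell_ms = defaultdict(int)  # total dwell time per element, in milliseconds

for aoi, duration_ms in fixations:
    counts[aoi] += 1
    dwell_ms[aoi] += duration_ms

for aoi in counts:
    print(f"{aoi}: {counts[aoi]} fixations, {dwell_ms[aoi]} ms dwell time")
```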
Although people did notice disclosures when they were visible, the eye-tracking data suggested that participants paid more attention to calls to action (like a link to sign up for something), which could indicate that this is how they recognize ads. Participants were also quick to recognize an ad by the profile name or verification badge of a brand’s official account, or by glossy visuals, which prompted expressions of distrust.
“People picked up on design details like logos, polished images, or 'shop now' buttons before they noticed an actual disclosure,” said Hübner. “On brand posts, that label is right under the username at the top, while on influencer content or reels, it might be hidden in a hashtag or buried in the ‘read more’ section.”
Although the scientists found that the ads often went unnoticed, if people realized that the content wasn’t organic, many of them stopped engaging with the post. Dwell time dropped immediately.
#ad
This was less likely to happen with ads that blended in better, with less polished visuals and a tone and format more typical of organic content. If ad cues like disclosures or call-to-action buttons weren’t noticed right away, these ads got similar levels of engagement to organic posts.
“Many participants were shocked to learn how many ads they had missed. Some felt tricked, others didn’t mind — and that last group might be the most worrying,” said Hübner. “When we stop noticing or caring that something is an ad, the boundary between persuasion and information becomes very thin.”
The scientists say these findings show that transparency goes well beyond just labelling ads. Understanding how people really process ads should lead to a rethink of platform design and regulation to make sure that people know when they’re looking at advertising.
However, this was a lab-based study with simulated feeds, and it’s possible that studies on different cultures, age groups, or types of social media might get different results. It’s also possible that ads are even harder to recognize under real-life conditions.
“Even in a neutral, non-personalized feed, participants struggled to tell ads apart from regular content,” Hübner pointed out. “In their own feeds, which are shaped around their interests, habits, and social circles, it might be even harder to spot ads, because they feel more familiar and trustworthy.”
Journal
Frontiers in Psychology
Method of Research
Experimental study
Subject of Research
People
Article Title
Blending In or Standing Out? The Disclosure Dilemma of Ad Cues of Social Media Native Advertising
Article Publication Date
13-Aug-2025