Wednesday, August 13, 2025

AI Chatbots can be exploited to extract more personal information




King's College London




AI Chatbots that provide human-like interactions are used by millions of people every day, however new research has revealed that they can be easily manipulated to encourage users to reveal even more personal information.

Intentionally malicious AI chatbots can influence users to reveal up to 12.5 times more of their personal information, a new study by King’s College London has found.

For the first time, the research shows how conversational AI (CAIs) programmed to deliberately extract data can successfully encourage users to reveal private information using known prompt techniques and psychological tools.

The study tested three types of malicious AIs that used different strategies (direct, user-benefit and reciprocal) to encourage disclosure of personal information from users. These were built using ‘off the shelf’ large language models, including Mistral and two different versions of Llama.

The researchers then asked 502 people to test the models, only telling them the goal of the study afterwards.

They found that the CAIs using reciprocal strategies to extract information emerged as the most effective, with users having minimal awareness of the privacy risks. This strategy reflects on users' inputs by offering empathetic responses and emotional support, sharing relatable stories from others' experiences, acknowledging and validating user feelings, and being non-judgmental while assuring confidentiality.

These findings show the serious risk of bad actors, like scammers, gathering large amounts of personal information from people — without them knowing how or where it might be used.

LLM-based CAIs are being used across a variety of sectors, from customer service to healthcare, to provide human-like interactions through text or voice. 

However, previous research shows these types of models don’t keep information secure, a limitation rooted in their architecture and training methods. LLMs typically require extensive training data sets, which often leads to personally identifiable information being memorised by the models.

The researchers are keen to emphasise that manipulating these models is not a difficult process. Many companies allow access to the base models underpinning their CAIs and people can easily adjust them without much programming knowledge or experience.

Dr Xiao Zhan, a Postdoctoral Researcher in the Department of Informatics at King’s College London, said: “AI chatbots are widespread in many different sectors as they can provide natural and engaging interactions.

“We already know these models aren’t good at protecting information. Our study shows that manipulated AI Chatbots could pose an even bigger risk to people’s privacy — and unfortunately, it’s surprisingly easy to take advantage of.”

Dr William Seymour, a Lecturer in Cybersecurity at King’s College London, said: “These AI chatbots are still relatively novel, which can make people less aware that there might be an ulterior motive to an interaction.

“Our study shows the huge gap between users’ awareness of the privacy risks and how they then share information. More needs to be done to help people spot the signs that there might be more to an online conversation than first seems. Regulators and platform providers can also help by doing early audits, being more transparent, and putting tighter rules in place to stop covert data collection.”

The study is being presented for the first time at the 34th USENIX security symposium in Seattle.



Adoption of AI-scribes by doctors raises ethical questions



University of Otago
Angela Ballantyne 

image: 

Professor Angela Ballantyne

view more 

Credit: University of Otago






Many New Zealand GPs have taken up the use of AI scribes to transcribe patient notes during consultations despite ongoing challenges with their legal and ethical oversight, data security, patient consent, and the impact on the doctor-patient relationship, a study led by the University of Otago, Wellington – Ōtākou Whakaihu Waka, Pōneke has found.

The researchers surveyed 197 health providers working in primary care in February and March of 2024, providing a snapshot in time of the use of AI-scribes in clinical practice. Most of the respondents were GPs but others included nurses, nurse practitioners, rural emergency care providers and practice managers. Their early experiences with AI-scribes was mixed – with users expressing both enthusiasm and optimism, along with concerns and frustrations.

Forty per cent of those surveyed reported using AI scribes to take patient notes. Only 66 per cent had read the terms and conditions on the use of the software, and 59 per cent reported seeking patient consent.

Lead researcher Professor Angela Ballantyne, a bioethicist in the Department of Primary Health Care and General Practice, says AI transcription services are being rapidly taken up by primary care practices, even though national regulations and guidelines are still being developed.

Most of those surveyed who used AI-scribes found them helpful, or very helpful, with 47 per cent estimating that using them in every consultation could save between 30 minutes and two hours a day. A significant minority however said the software did not save time overall because it took so long to edit and correct AI-generated notes.

Health professionals who responded to the survey mentioned concerns about the accuracy, completeness and conciseness of the patient notes produced by AI-scribes.

One doctor said: “(It) missed some critical negative findings. This meant I didn’t trust it.” Another commented that they’d stopped using AI transcriptions because the ‘hallucination rate’ was quite high, and often quite subtle.

Others expressed concern about the inability of AI-scribes to understand New Zealand accents or vocabulary and te reo Māori. One mentioned pausing recordings if they needed to discuss information which identified the patient such as a name or a date of birth.

Over half of those surveyed said using an AI-scribe changed the dynamic of consultations with patients, as they needed to verbalise physical examination findings and their thought processes to allow the transcription tool to capture information.

One of the GPs surveyed commented: “Today someone said, ‘I’ve got pain here’, and pointed to the area, and so I said out loud ‘oh, pain in the right upper quadrant?’”

Professor Ballantyne says there is a need to track and evaluate the impact of AI tools on clinical practice and patient interactions.
Those using an AI-scribe felt it enabled them to focus more on their patients and build better engagement and rapport through more eye contact and active listening.

There was concern among those surveyed about whether the use of an AI-scribe complied with New Zealand’s ethical and legal frameworks.

Professor Ballantyne says health practitioners have a professional and legal responsibility to ensure their clinical notes are accurate, whether or not they have used AI transcription tools.

“They need to be vigilant about checking patient notes for accuracy. However, as many survey respondents noted, carefully checking each AI-generated clinical note eats into, and sometimes negates any time savings.”

Professor Ballantyne says it is vital that the benefits which AI-scribes can deliver are balanced against patient rights and the need to ensure data security.

“Most AI-scribes rely on international cloud-based platforms (often privately owned and controlled), for processing and storing data, which raises questions about where data is stored, who has access to it, and how it can be protected from cyber threats.

“There are also Aotearoa-specific data governance issues that need to be recognised and resolved, particularly around Māori data sovereignty.”

In July, the National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG) at Health New Zealand – Te Whatu Ora endorsed two ambient AI-scribe tools, Heidi Health and iMedX, for use by its clinicians in Aotearoa. NAIAEAG considers privacy, security, ethical and legal issues.

Professor Ballantyne says to the extent that AI tools are novel, it cannot be assumed that patients consent to their use.

“Patients should be given the right to opt out of the use of AI and still access care, and adequate training and guidelines must be put in place for health providers.”

The Medical Council of New Zealand is expected to release guidance about the use of AI in health later this year, which is likely to require patients give consent to the use of AI transcription tools.

Professor Ballantyne says AI tools are improving over time, which may ameliorate some of the ethical concerns.

“Coupled with appropriate training, good governance and patient consent, the future of AI scribes holds much promise.”

The research paper, ‘Using AI scribes in New Zealand primary care consultations: an exploratory survey’ is published in the Journal of Primary Health Care and can be read here: https://www.publish.csiro.au/HC/HC25079

No comments: