Privacy dilemmas and opportunities in large language models
Image: LLM Process and Privacy Issues. Credit: Higher Education Press.
A growing number of cases indicates that large language models (LLMs) bring transformative advancements while raising privacy concerns. Despite promising recent surveys in the literature, there is still no comprehensive analysis dedicated to text privacy specifically for LLMs.
To address this gap, a research team led by Jie WU published new research on 15 October 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team conducted an in-depth investigation into privacy issues within LLMs, providing a detailed analysis of five privacy issues and their solutions in LLM training and invocation. The team also examined three privacy-centric research focuses in LLM applications that previous surveys had not covered. Based on this investigation, the study discusses further research directions and offers insights into LLM-native security mechanisms, concluding that LLM privacy research is still in a technical exploration phase and remains some distance from practical application.
Future work will focus on monitoring new research and continually refining the survey. The team hopes the paper provides researchers and practitioners with a comprehensive understanding that helps them address the privacy challenges LLMs may encounter in real-world applications.
Journal
Frontiers of Computer Science
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Privacy dilemmas and opportunities in large language models: a brief review
Online tracking and privacy on hospital websites
Researchers find that tracking pixels (small pieces of embedded code that can transmit user data to third parties) significantly increase data breach risk on hospital websites. Hilal Atasoy and colleagues analyzed 12 years of archived website data from 1,201 large US hospitals between 2012 and 2023, examining the adoption of tracking pixels and their relationship to data breaches. The authors found pixel tracking in 66% of hospital-year observations, despite stringent privacy regulations. Hospitals using third-party pixels experienced at least a 1.4 percentage point increase in breach probability, representing a 46% relative increase over the 3% baseline breach rate. Third-party pixels, which transmit patient data to vendors like Meta and Google, significantly increased breach risk, while first-party pixels that keep data within the hospital showed no significant relationship with breaches. Physical breaches caused by misplaced documents or devices showed no relationship with pixel use, supporting the digital transmission mechanism. According to the authors, the findings reveal a critical regulatory gap in healthcare privacy protections, as tracking pixels operate outside traditional Health Insurance Portability and Accountability Act safeguards. The authors recommend hospitals strengthen data governance policies to protect patient information.
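For readers unfamiliar with the mechanism, the TypeScript sketch below (not code from the study) shows roughly how a generic third-party tracking pixel works: a tiny image request carries the current page URL and referrer to an external collector, where vendor cookies can tie the visit to a user profile. The endpoint tracker.example.com and the parameter names are hypothetical.

// Illustrative sketch only: a generic third-party tracking pixel, not from the study.
// "https://tracker.example.com/pixel" is a hypothetical third-party collector.
function fireTrackingPixel(): void {
  const params = new URLSearchParams({
    // On a hospital site, the page URL alone can reveal sensitive context,
    // e.g. an appointment-booking or condition-specific page.
    page: window.location.href,
    referrer: document.referrer,
    ts: Date.now().toString(),
  });

  // A 1x1 image request sends the data to the third party's server;
  // the browser attaches any vendor cookies, linking the visit to a user profile.
  const pixel = new Image(1, 1);
  pixel.src = `https://tracker.example.com/pixel?${params.toString()}`;
}

fireTrackingPixel();

A first-party pixel, by contrast, would send the same request to the hospital's own domain, which is why the study found no significant breach association for that configuration.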
Journal
PNAS Nexus
Article Title
Beyond the click: Pixel tracking technologies and patient data security in hospitals
Article Publication Date
9-Dec-2025