A.I.
Hiding in plain sight
Generative AI used to replace confidential information in images with similar visuals to protect image privacy
Image privacy could be protected with generative artificial intelligence. Researchers from Japan, China and Finland have created a system that replaces parts of images which might threaten confidentiality with visually similar but AI-generated alternatives. In tests of the system, named “generative content replacement,” 60% of viewers couldn’t tell which images had been altered. The researchers intend for the system to provide a more visually cohesive option for image censoring, one that preserves the narrative of the image while protecting privacy. The research was presented at the Association for Computing Machinery’s CHI Conference on Human Factors in Computing Systems, held in Honolulu, Hawaii, in the U.S., in May 2024.
With just a few text prompts, generative AI can offer a quick fix for a tricky school essay, a new business strategy or endless meme fodder. The advent of generative AI into daily life has been swift, and society is still grappling with the potential scale of its role and influence. Fears over its impact on future job security, online safety and creative originality have led to strikes from Hollywood writers, court cases over faked photos and heated discussions about authenticity.
However, a team of researchers has proposed using a sometimes controversial feature of generative AI – its ability to manipulate images – as a way to solve privacy issues.
“We found that the existing image privacy protection techniques are not necessarily able to hide information while maintaining image aesthetics. Resulting images can sometimes appear unnatural or jarring. We considered this a demotivating factor for people who might otherwise consider applying privacy protection,” explained Associate Professor Koji Yatani from the Graduate School of Engineering at the University of Tokyo. “So, we decided to explore how we can achieve both — that is, robust privacy protection and image usability — at the same time by incorporating the latest generative AI technology.”
The researchers created a computer system which they named generative content replacement (GCR). This tool identifies what might constitute a privacy threat and automatically replaces it with a realistic but artificially created substitute. For example, personal information on a ticket stub could be replaced with illegible letters, or a private building exchanged for a fake building or other landscape features.
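To make the idea concrete, a pipeline like this can be sketched from off-the-shelf components: a detector (or the user) marks the privacy-sensitive region, and a generative inpainting model synthesizes a plausible substitute. The following Python sketch is illustrative only, not the authors' implementation; the model checkpoint, file names and prompt are assumptions.

```python
# Minimal sketch of a GCR-style pipeline built from off-the-shelf parts.
# NOT the authors' implementation; model choice, files and prompt are assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a publicly available inpainting model (assumes a CUDA GPU is present).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.png").convert("RGB").resize((512, 512))
# In a full system the mask would come from an automatic privacy-threat
# detector; here we assume a precomputed mask (white = region to replace).
mask = Image.open("privacy_mask.png").convert("RGB").resize((512, 512))

# Ask the model to synthesize a visually similar but fictitious substitute.
result = pipe(
    prompt="a generic building facade, photorealistic",
    image=image,
    mask_image=mask,
).images[0]
result.save("photo_gcr.png")
```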
“There are a number of commonly used image protection methods, such as blurring, color filling or just removing the affected part of the image. Compared to these, our results show that generative content replacement can better maintain the story of the original images and achieve higher visual harmony,” said Yatani. “We found that participants couldn’t detect GCR in 60% of images.”
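The baselines Yatani mentions are easy to reproduce for comparison. Below is a minimal OpenCV sketch (ours, not the study's code; the bounding box is an assumed placeholder):

```python
# Conventional image-privacy baselines for comparison: blurring and
# color filling. A minimal sketch, not the study's code.
import cv2

img = cv2.imread("photo.png")
x, y, w, h = 100, 80, 160, 120  # assumed bounding box of the sensitive region

# Blurring: heavy Gaussian blur over the region.
blurred = img.copy()
blurred[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)
cv2.imwrite("photo_blur.png", blurred)

# Color filling: paint the region a flat color.
filled = img.copy()
cv2.rectangle(filled, (x, y), (x + w, y + h), (0, 0, 0), thickness=-1)
cv2.imwrite("photo_fill.png", filled)
```

Both baselines obliterate the region's content, which is precisely what can make the result look jarring; GCR instead fills the region with plausible new content.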
For now, the GCR system requires substantial computational resources, so it won’t be available on personal devices just yet. The tested system was fully automatic, but the team has since developed a new interface that lets users customize images, giving them more control over the final outcome.
Although some may be concerned about the risks of this type of realistic image alteration, where the lines between original and altered imagery become more ambiguous, the team is positive about its advantages. “For public users, we believe that the greatest benefit of this research is providing a new option for image privacy protection,” said Yatani. “GCR offers a novel method for protecting against privacy threats, while maintaining visual coherence for storytelling purposes and enabling people to more safely share their content.”
#####
Paper Title
Anran Xu, Shitao Fang, Huan Yang, Simo Hosio, and Koji Yatani. 2024. Examining Human Perception of Generative Content Replacement in Image Privacy Protection. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24). ACM, New York, NY, USA, 22 pages. 14 May 2024. https://dl.acm.org/doi/10.1145/3613904.3642103
Useful Links:
Graduate School of Engineering: https://www.t.u-tokyo.ac.jp/en/soe
Interactive Intelligent Systems Laboratory: https://iis-lab.org/
Funding:
This research is part of the results of Microsoft Research Asia CORE-D program as well as Value Exchange Engineering, a joint research project between R4D, Mercari Inc., and the RIISE.
Image caption: Examples of popular methods for image content replacement and protection (outlined here by red boxes), and how they compare to GCR in the far-right column.
About the University of Tokyo
The University of Tokyo is Japan’s leading university and one of the world’s top research universities. The vast research output of some 6,000 researchers is published in the world’s top journals across the arts and sciences. Our vibrant student body of around 15,000 undergraduate and 15,000 graduate students includes over 4,000 international students. Find out more at www.u-tokyo.ac.jp/en/ or follow us on X at @UTokyo_News_en.
METHOD OF RESEARCH
Imaging analysis
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Examining Human Perception of Generative Content Replacement in Image Privacy Protection
New UN research reveals impact of AI and cybersecurity on women, peace and security in South-East Asia
UNITED NATIONS UNIVERSITY
Systemic issues can put women’s security at risk when artificial intelligence (AI) is adopted, and gender biases across widely used AI systems pose a significant obstacle to the positive use of AI in the context of peace and security in South-East Asia.
Moreover, women human rights defenders (WHRDs) and women’s Civil Society Organisations (WCSOs) in the region are at high risk of experiencing cyber threats and, while largely aware of these risks, are not necessarily able to prepare for, or actively recover from, cyber-attacks.
These are among the key findings of groundbreaking research released today by UN Women and the United Nations University Institute in Macau (UNU Macau) examining the connections between AI, digital security and the women, peace and security (WPS) agenda in South-East Asia.
The research was made possible with support from the Government of Australia, under the Cyber and Critical Tech Cooperation Program (CCTCP) of the Department of Foreign Affairs and Trade (DFAT), and the Government of the Republic of Korea through the UN Women initiative, Women, Peace and Cybersecurity: Promoting Women, Peace and Security in the Digital World.
With AI projected to add USD 1 trillion to the gross domestic product of South-East Asian countries by 2030, understanding the impact of these technologies on the WPS agenda is critical to helping these countries regulate the technologies and mitigate their risks.
The report, Artificial Intelligence and the Women, Peace and Security Agenda in South-East Asia, examines the opportunities and risks of AI from this unique perspective by focusing on four types of gender biases in AI – discrimination, stereotyping, exclusion, and insecurity – which need to be addressed before the region can fully benefit from new technological developments.
This research examines the relationship between AI and WPS according to three types of AI and their applications: AI for peace, neutral AI, and AI for conflict.
This report notes that across these categories, there are favourable and unfavourable effects of AI for gender-responsive peace and women’s agency in peace efforts.
While using AI for peace purposes can have multiple benefits, such as improving the inclusivity and effectiveness of conflict prevention and helping track evidence of human rights breaches, these tools are used unequally between genders, and pervasive gender biases leave women less likely to benefit from them.
The report also highlights risks related to the use of these technologies for military purposes.
This research identifies two dimensions to improving the dynamics of AI and the WPS agenda in the region: mitigating the risks of AI systems to advancing the WPS agenda, especially on social media, but also on other tools, such as chatbots and mobile applications; and fostering the development of AI tools built explicitly to support gender-responsive peace in line with WPS commitments.
The second report, Cybersecurity Threats, Vulnerabilities and Resilience among Women Human Rights Defenders and Civil Society in South-East Asia, explores cybersecurity risks and vulnerabilities in this context with the goal of promoting cyber-resilience and the human and digital rights of women in all their diversity.
While there is increasing awareness of the risks women and girls face in cyberspace, there is little understanding of the impacts of gender on cybersecurity, or of the processes and practices used to protect digital systems and networks from cyber risks and their harms.
This work differs from previous cybersecurity research in its focus on human-centric rather than techno-centric cybersecurity: it emphasises human factors over technical skills and treats gender as central to cybersecurity.
Furthermore, cyber threats are understood to be gendered in nature, whereby WCSOs and WHRDs are specifically targeted due to the focus of their work and are likely to be attacked with misogynistic and sexualised harassment.
The results highlight that digital technologies are central to the work of WCSOs and WHRDs, yet WCSOs reported higher threat perceptions and more threat experiences than CSOs that do not work on gender and women’s rights. These threats carry disproportionate risks of disrupting their work, damaging their reputations and even causing harm or injury, all of which contribute to marginalising women’s voices.
The largest differences of experienced threats between the groups were for online harassment, trolling (deliberately provoking others online to incite reactions) and doxxing (when private or identifying information is distributed about someone online without their permission).
This report’s recommendations include fostering inclusive and collaborative approaches in cybersecurity policy development and engagement, and building the knowledge of civil society, government, private-sector actors and other decision makers to develop appropriate means of prevention and response to cyberattacks and their disproportionate impacts on WCSOs and WHRDs.
Specific attention should be given to at-risk individuals and organizations, such as women’s groups operating in politically volatile and conflict and crisis-affected contexts and situations where civic space is shrinking.
The launch took place during Gen-Forum 2024: Young Leaders for Women, Peace and Security in Asia and the Pacific, a UN Women youth conference that commenced today in Bangkok, Thailand.
UNU Macau and UN Women aim for this research, conducted over 12 months, to contribute to the global discourse on ethics and norms surrounding AI and digital governance at large.
Next, training materials based on the research findings and consultations with women’s rights advocates in the region will be rolled out, initially in Thailand and Vietnam, with e-learning modules and training handbooks to be publicly available in English, Thai and Vietnamese for interested stakeholders from mid-2024.
Download full reports and research summaries
- Artificial Intelligence and the Women, Peace and Security Agenda in South-East Asia [Full Report] [Research Summary]
- Cybersecurity Threats, Vulnerabilities and Resilience among Women Human Rights Defenders and Civil Society in South-East Asia [Full Report] [Research Summary]
USTC reveals how to effectively utilize large language models
UNIVERSITY OF SCIENCE AND TECHNOLOGY OF CHINA
Nowadays, large language models (LLMs) are applied in a wide range of settings, from writing to solving complex problems. However, how to interact with them effectively and explore their full potential has received little attention.
Recently, researcher LIN Zhicheng from the Department of Psychology at the University of Science and Technology of China (USTC) proposed practical strategies and guidelines to help users better understand and use LLMs. He emphasized that well-crafted prompts can enhance the accuracy and relevance of responses, preventing the poor performance that results from low-quality instructions. The commentary was published in Nature Human Behaviour on 4 March.
LLMs are deep neural networks trained on vast amounts of text, with self-attention as their distinctive architectural feature. Because they process human language directly, they are unusually user-friendly. Effective engagement with an LLM improves the accuracy and relevance of its outputs, while, conversely, poorly structured prompts lead to inadequate answers. Though interacting with LLMs seems simple, LIN pointed out that designing effective prompts for them is challenging.
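To make “self-attention” concrete: it is the operation by which each token’s representation becomes a weighted mixture of all tokens’ representations. A toy NumPy sketch (illustrative only, not from the commentary):

```python
# Toy scaled dot-product self-attention (illustrative only, not from the
# commentary): each token's output is a weighted mix of all tokens' values.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over tokens
    return weights @ V                             # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                        # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)         # (4, 8)
```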
The commentary highlighted the importance of "prompt engineering", a technique for improving LLM outputs through careful design of the input. LIN proposed a series of strategies, including giving explicit instructions, adding relevant context and asking for multiple options. These methods can help elicit better answers and reduce the compounding effect of errors.
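The flavor of these strategies is easy to demonstrate. The sketch below (our illustration, not from the commentary) contrasts a vague prompt with one that gives explicit instructions and relevant context; it assumes the OpenAI Python client and a placeholder model name, but any chat-capable LLM API would do:

```python
# Contrast a vague prompt with one applying the commentary's advice
# (explicit instructions, relevant context). Illustrative example only;
# assumes the OpenAI Python client and an assumed model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Tell me about my data."

engineered = (
    "You are a statistics tutor. I have reaction-time data from a "
    "within-subjects experiment (2 conditions, 30 participants). "
    "List three appropriate significance tests, state each test's "
    "assumptions in one sentence, and recommend one. Under 150 words."
)

for prompt in (vague, engineered):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```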
The commentary serves as a practical guide to interacting with LLMs, helping users obtain better outcomes and adding to our understanding of the models’ potential. Its strategies offer a valuable reference for users seeking more efficient interaction with LLMs.
JOURNAL
Nature Human Behaviour
ARTICLE TITLE
How to write effective prompts for large language models
Korea and NYU establish global AI frontier lab
Brooklyn-based joint research will be led by prize-winning AI researchers Yann LeCun and Kyunghyun Cho
NEW YORK UNIVERSITY
Republic of Korea Minister of Science and ICT Lee Jong-ho and New York University President Linda G. Mills today announced the establishment of the Global AI Frontier Lab. The Lab, which will be based in NYU facilities in Brooklyn and draw top AI researchers from the U.S., Korea, and around the world, is the latest advance of the joint research effort launched in 2023. Hong Jin-bae, president of the Institute of Information and Communication Technology Planning and Evaluation (IITP), and NYU signed a Memorandum of Agreement to establish the Global AI Frontier Lab and to outline its structure and operating guidelines.
The Global AI Frontier Lab will be led by two AI scholars from NYU’s esteemed Courant Institute of Mathematical Sciences and Center for Data Science: Yann LeCun, a Turing Award-winning professor at NYU and Meta’s chief AI scientist, and Kyunghyun Cho, winner of the Samsung Ho-Am Award for Engineering, senior director of Frontier Research at Genentech, and a graduate of KAIST.
An open call for Korean researchers who wish to participate in the Global AI Frontier Lab and conduct world-class joint research with NYU was issued earlier this week by IITP in Korea (more details may be found on the IITP website). The Memorandum of Agreement will provide support for Korean researchers to participate in the joint research, as well as detail the structure of the research project and delineate priority-setting for specific joint research undertakings. The new lab is expected to be established in almost 13,000 sq. ft. of space in 1 MetroTech Center, adjacent to NYU’s Tandon School of Engineering and NYU’s 370 Jay Street technology and multimedia center.
Minister Lee Jong-ho said, “The Global AI Frontier Lab is the first step in a new international joint research paradigm and will serve as a stepping stone for Korea’s AI G3 leap and global solidarity and expansion,” adding, “We will provide policy support so that researchers can come together and actively contribute to AI innovation and sustainable AI development.”
NYU President Mills said, “These are important steps forward in ensuring the success of this joint research effort—an agreement on the Global AI Frontier Lab’s structure, world-class AI scholars in place as leadership, and a location in Downtown Brooklyn. Altogether, an outstanding combination. This project builds on NYU’s assets as an unrivaled global institution and on a foremost area of scholarly strength in science and technology for NYU. We and our Korean partners are very pleased with the development of this project; I am confident that this global partnership, steeped in scholarly excellence, will make a transformative contribution to the field of artificial intelligence.”
# # #
Will generative AI change the way universities communicate?
A new study in JCOM monitors changes in university communication within the German landscape
Since the launch of ChatGPT in November 2022, we've been abuzz with talk of artificial intelligence: is it an unprecedented opportunity, or will it rob everyone of jobs and creativity? As we debate on social media (and perhaps use ChatGPT almost daily), generative AI has also entered the arena of university communication. These tools, based on large language models optimized for interactive communication, can support, expand, and innovate university communication offerings. The study's author, Henke, analyzed the German landscape about six months after ChatGPT's launch: "The research was conducted about a year ago, when enthusiasm was high, but it was still early for people to understand the potential of the medium," he explains.
This early monitoring showed that usage was already widespread at that point. Henke distributed a questionnaire to the press and communication offices of all the country's universities, receiving 101 responses, about a third of the total. Practically all respondents declared that they make some use of generative AI.
Translations, text corrections and text generation are the main uses Henke recorded. The other functions suggested in the questionnaire (image creation, slide production, or document analysis) remain marginal. "What we observe in this initial work is that, as far as communication is concerned, universities adopt artificial intelligence mainly to increase process efficiency, for example to speed processes up and do more in less time," explains Henke.
What also emerges, especially in some open answers, is a certain caution and growing awareness towards ethical aspects. An example is data protection. "For instance, one wonders whether it is wise, or right, to feed these intelligences—owned by private companies—with university data. The issue of privacy is also important," comments the researcher. In this sense, "more and more universities in Germany are releasing their own instances of generative AI chatbots, on dedicated servers", precisely to try to maintain control over these delicate aspects.
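In practice, such a self-hosted instance usually means serving an open-weight model on institutional hardware so that queries never leave the university's servers. A minimal sketch (our illustration; the model name is an assumed placeholder) using the Hugging Face transformers library:

```python
# Minimal sketch of a self-hosted chatbot: an open-weight model served
# locally, so user queries never leave the institution's servers.
# Illustrative only; the model name is an assumption.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-1.5B-Instruct",  # any local open-weight chat model
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You answer questions about our university."},
    {"role": "user", "content": "Draft a short headline about our new AI lab."},
]
# The pipeline returns the full chat; the last entry is the model's reply.
print(chat(messages, max_new_tokens=100)[0]["generated_text"][-1]["content"])
```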
"There is not only a technological shift underway but also a cultural one," adds Henke. "Usually the early adopters tend to be younger and fresher in the profession, more open to change." The problem highlighted, however, is that there is no policy that works for everyone. Many are also worried by the possibility that these technologies could replace jobs. "You need the social aspect of technology adoption to be taken seriously," Henke recommends.
Henke, who is now working on a new survey to assess the situation a year after the first, expects to observe further evolution: "I know that the use of generative AI tools is bound to increase," he says. "Last year people were experimenting, but in the comments they also explained that sometimes they were not satisfied with the results. It was probably a matter of competence. They didn't know, for example, how to write an effective prompt for their goals. Probably this aspect will have improved by now. We now have to turn our attention to a more strategic and integrated AI approach," especially in light of the continuous updates and advancements of these tools (just a few days ago, GPT-4o was launched, sparking new controversies regarding safety, even among the staff of OpenAI, the company behind ChatGPT).
Henke believes it is important that universities learn to use these new instruments without calling into question the work they have done so far and the future goals they have already planned. "Communication is about building relationships and trust. In particular, one of the main purposes of science communication (of which university communication is a particular case) is to build trust and relationships between the public and scientific research. If you compromise these relationships using 'automated' press releases or mainly use bots to talk to the public, the latter will end up losing interest or, worse, start having doubts about the institution itself. It's important that humans remain part of the process. Artificial intelligence should enhance communication, not replace it," concludes Henke.
JOURNAL
Journal of Science Communication
METHOD OF RESEARCH
Observational study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Communication Strategies and Perspectives on Generative AI Tools
ARTICLE PUBLICATION DATE
27-May-2024