Thursday, August 21, 2025

Q&A: What can AI developers learn from climate activists?

University of Washington

Generative artificial intelligence systems require a lot of energy, but many AI developers are hoping the technology can ultimately be a boon for the climate — possibly leading to a more efficient power grid, for instance. 

But the goals of those developing AI systems for the climate and those working on the front lines of climate advocacy don’t necessarily align. To compare the two groups, University of Washington researchers interviewed nine people who are developing AI for sustainability — ranging from a graduate student to a startup founder — and 10 climate advocates, including a grassroots activist and an environmental nonprofit employee. They found that while developers and advocates all cared about the climate movement, their specific values and perceptions varied widely, especially on topics like ethics. 

The team presented its findings July 8 at the Designing Interactive Systems Conference in Funchal, Portugal.

UW News spoke with lead author Amelia Lee Doğan, a UW doctoral student in the Information School, and senior author Lindah Kotut, a UW assistant professor in the Information School, about the study’s findings. 

There’s a lot of concern around the climate impact of training AI and running the models, but also some potential for AI to help. How did you find the views of the different groups you interviewed diverging on those perspectives?

Amelia Lee Doğan: Most of the advocates saw AI as potentially helpful in very limited use cases. These include automating existing tasks and connecting community members with the natural world, such as by creating personas for natural entities like rivers or trees, or building urban farming diagnostic tools. There were also some more traditional climate science applications. But a handful of people were concerned not only about the climate impacts of AI but also about issues such as labor conditions. AI is not going to fix issues that stem from policy. In these cases, climate change isn’t necessarily the root problem; it’s larger injustices.

What did you learn from this research that surprised you?

ALD: We got a lot of interesting ideas from advocates and wrote some of those up in a fictional format for another paper that came out in January. It's what we call a design fiction. You imagine a technological future and analyze it from an academic point of view. Some ideas were really original — such as, what if a river could speak to you in English as well as communicate through things like water gurgling sounds or music?

Lindah Kotut: When we presented this at a conference, many people were surprised that a lot of the developers don't know what the grassroots activists are doing. But the activists kind of know what the developers are doing. If that were flipped, I feel that some of the discussions on the implications of AI on the environment would be way more advanced.

ALD: Many of the developers in research or nonprofit spaces had a lot less contact with advocates compared to those in business. I asked developers a very simple question: Did you talk to anybody in the community you’re trying to impact? Many said no. The advocates said they’d love it if some of these developers showed up to protests or meetings.

Did people have recommendations for solutions to this lack of communication?

ALD: We propose talking to people as a very first step. Advocates would love to be approached by developers about their wants and needs, with the understanding that they're deeply resource constrained. For example, one of the advocates we talked to had been using a non-AI tool developed by the government. The government stopped maintaining the tool, which impeded her workflow. So she was interested in the creation and maintenance of new software tools.

LK: There are constraints on both sides. Some developers are working on environmental justice issues as a side project, and that limits the amount of time they can spend on it. The advocates are first accountable to people and the environment, and they’re constrained by policy, so they’re thinking about how to provide support through policy changes that will shape technology. Developers, meanwhile, can be working in a space where corporate interests run against environmental interests. We don’t have a solution to that conflict.

What were advocates optimistic about, as far as AI technology?

ALD: A lot of advocates’ work is data intensive and could benefit from automation — looking through government databases of PDFs that are not scanned at high resolution, for instance. The advocates are also excited about scientific advances.

What do you want the public to know about this research?

ALD: Climate change is now, and we can't necessarily wait for the promises of AI that we're not sure are coming. We already know that the most effective solutions to the climate crisis are policy solutions: cutting fossil fuels, protecting our land and waters.

We found that the social issues plaguing technology development also play out in the development of climate tech: a lot of power issues. Developers don’t always have the freedom to shape the big-picture vision for a project. That also extends to developers working on projects for social good, which still fall into many of the pitfalls that entrap people in the tech industry.

LK: Anyone who can should amplify what the grassroots climate communities are doing. That gets the changes that they're advocating for out there in the voice that they want. Also, in tech, we're so fast. Move fast and break things has been this credo. But one of the best foils to not moving fast and breaking things is to listen to the people you’re working to support. Tap into the local climate organizations and listen to them. Supporting them is an extra step, but sometimes just listening is the best way. These organizations understand the concerns of the communities being directly affected.

 

Hongjin Lin, a doctoral student at Harvard University, is a co-author on the study. This research was funded in part by the National Science Foundation and the University of Washington’s Graduate School’s Office of Graduate Student Equity & Excellence.

For more information, contact Doğan at dogan@uw.edu and Kotut at kotut@uw.edu.

UC Davis study reveals alarming browser tracking by GenAI assistants

University of California - Davis

A new study led by computer scientists at the University of California, Davis, reveals that generative AI browser assistants collect and share sensitive data without users’ knowledge. Stronger safeguards, transparency and awareness are needed to protect user privacy online, the researchers said. 

A new breed of generative AI, or GenAI, browser extensions acts as your personal assistant as you surf the web, making browsing easier and more personalized. They can summarize web pages, answer questions, translate text and take notes.

But in a new paper, “Big Help or Big Brother? Auditing Tracking, Profiling and Personalization in Generative AI Assistants,” UC Davis computer scientists reveal that while extremely helpful, these assistants can pose a significant threat to user privacy. The work was presented Aug. 13 at the 2025 USENIX Security Symposium. 

How much does GenAI know about you? 

Yash Vekaria, a computer science graduate student in Professor Zubair Shafiq’s lab, led the investigation of nine popular search-based GenAI browser assistants: Monica, Sider, ChatGPT for Google, Merlin, MaxAI, Perplexity, HARPA.AI, TinaMind and Copilot. 

Vekaria and his team ran experiments on implicit and explicit data collection and used a prompting framework to test for profiling and personalization. They found that GenAI browser assistants often collect personal and sensitive information and share it with both first-party servers and third-party trackers (e.g., Google Analytics), revealing a need for safeguards on this new technology, including on the user side.

“These assistants have been created as normal browser extensions, and there is no strict vetting process for putting these up on extension stores,” Vekaria said. “Users should always be aware of the risks that these assistants pose, and transparency initiatives can help users make more informed decisions.” 

When private information doesn’t stay private

To study implicit data collection, Vekaria and his team visited both public online spaces, which do not require authentication, and private ones, such as personal health websites. They asked the GenAI browser assistants questions to see how much data, and what kind, each one was collecting.

The team observed that, irrespective of the question, some of the extensions were collecting significantly more data than others, including the full HTML of the page and all the textual content, including medical history and patient diagnoses. 

One noteworthy (and egregious) finding was that one GenAI browser extension, Merlin, collected form inputs as well. While filling out a form on the IRS website, Vekaria was shocked to find that Merlin had exfiltrated the Social Security number provided in the form field. HARPA.AI also collected everything from the page.
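Audits of this kind are commonly done by planting a unique "canary" value in a form field and then searching the network traffic an extension generates for that value. The sketch below illustrates the idea only; the URLs, payloads, and function names are hypothetical and are not the paper's actual tooling.

```python
# Canary-based exfiltration check: plant a unique token in a form,
# then scan the request bodies an extension sends for that token.
import json
import uuid

def make_canary() -> str:
    """A unique, searchable token standing in for sensitive input (e.g., an SSN)."""
    return f"CANARY-{uuid.uuid4().hex}"

def find_exfiltration(canary: str, captured_requests: list[dict]) -> list[str]:
    """Return the URLs of captured requests whose bodies contain the canary."""
    leaks = []
    for req in captured_requests:
        # Serialize the body so the canary is found regardless of nesting.
        body = json.dumps(req.get("body", ""))
        if canary in body:
            leaks.append(req["url"])
    return leaks

canary = make_canary()
# Hypothetical traffic captured while filling out a form with the canary value.
traffic = [
    {"url": "https://assistant.example/collect", "body": {"page_text": f"SSN: {canary}"}},
    {"url": "https://analytics.example/hit", "body": {"event": "page_view"}},
]
print(find_exfiltration(canary, traffic))  # only the first URL leaks the form input
```

Because the canary is unique per run, any request body containing it must have been copied from the page, which is how form-field exfiltration like Merlin's can be detected without reverse-engineering the extension.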

Building a profile the GenAI way

Next, the team looked at explicit data collection: whether the GenAI browser assistants were remembering information for profiling. Using a prompting framework, the researchers adopted the persona of a rich millennial male from Southern California with an interest in equestrian activities.

Vekaria’s team visited webpages that supported — or leaked — certain characteristics of the persona in three different scenarios: actively searching for something, passively browsing pages and requesting a webpage summary. In these scenarios, after leaking the information, they asked the GenAI browser assistant to act as an intelligent investigator and answer yes or no questions.

“For example, if we are leaking the attribute for wealth, we would go to old vintage car pages, which have cars worth hundreds of thousands of dollars listed, to show that we are rich,” Vekaria said. “We browse about 10 pages, and then ask the test prompt, ‘Am I rich?’” 
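The leak-then-probe loop described above can be sketched in a few lines. The attributes, page URLs, and prompts below are illustrative stand-ins, not the study's exact materials; `visit_page` and `ask_assistant` are hypothetical hooks for whatever drives the browser and queries the assistant.

```python
# Sketch of the persona-leak audit loop: browse pages that signal an
# attribute, then ask the assistant a yes/no test prompt about it.
PERSONA_TESTS = {
    "wealth": {
        "leak_pages": [f"https://vintage-cars.example/listing/{i}" for i in range(10)],
        "test_prompt": "Am I rich?",
    },
    "location": {
        "leak_pages": [f"https://socal-events.example/page/{i}" for i in range(10)],
        "test_prompt": "Do I live in Southern California?",
    },
}

def run_audit(visit_page, ask_assistant) -> dict:
    """Leak each attribute through browsing, then record the assistant's answer.

    visit_page(url) simulates browsing; ask_assistant(prompt) returns "yes"/"no".
    A "yes" answer suggests the assistant profiled the leaked attribute.
    """
    results = {}
    for attribute, test in PERSONA_TESTS.items():
        for url in test["leak_pages"]:
            visit_page(url)
        answer = ask_assistant(test["test_prompt"])
        results[attribute] = (answer.strip().lower() == "yes")
    return results

# A stand-in assistant that "remembers" everything it has seen:
seen = []
profiled = run_audit(seen.append, lambda prompt: "yes" if seen else "no")
print(profiled)  # {'wealth': True, 'location': True}
```

Running the same loop against assistants that retain nothing would return False for every attribute, which is how the study could separate profilers like Monica and Sider from non-profilers like TinaMind and Perplexity.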

Beyond the browser window

Much like the collection of implicit information, some of the GenAI browser assistants, like Monica and Sider, collected explicit information and performed personalization in and out of context. HARPA.AI performed in-context profiling and personalization, but not out of context. Meanwhile, TinaMind and Perplexity did not profile or personalize for any attributes. 

Vekaria points to a particularly interesting — and potentially concerning — finding. Certain assistants were not just sharing information with their own servers but also with third-party servers. For instance, Merlin and TinaMind were sharing information with Google Analytics servers, and Merlin was also sharing users’ raw queries. 

“This is bad because now the raw query can be used to track and target specific ads to the user by creating a profile on Google Analytics, and be integrated or linked with Google’s cookies,” Vekaria said. 

Users beware

The researchers posit that addressing these risks is not up to any single entity; it will require effort across the GenAI ecosystem. Ultimately, users need to be aware of the risks so they can make informed decisions when using these assistants. Vekaria’s recommendation is to be informed and proceed with caution.

“Users should understand that any information they provide to these GenAI browser assistants can and will be stored by these assistants for future conversations or in their memory,” Vekaria said. “When they are using assistants in a private space, their information is being collected.”

SLAS Technology unveils AI-powered diagnostics & future lab tech

Highlights include 99.9% accurate monkeypox AI, multi-camera zebrafish assays, and infection-proof titanium implants, showcasing tech-driven leaps in biomedicine and diagnostics

SLAS (Society for Laboratory Automation and Screening)

Oak Brook, IL – Volume 33 of SLAS Technology includes one literature highlights column, eight original research articles and four Special Issue (SI) features.

Literature Highlights

Original Research

Special Issues

  • High-throughput mass spectrometry in drug discovery
    This SI features innovative research on high-throughput mass spectrometry technologies that overcome traditional LC-MS bottlenecks, enabling ultrafast, label-free screening for hit identification, covalent drug discovery and compound library validation.
  • Bio-inspired computing and machine learning analytics for a future-oriented mental well-being
    The SI proposes bio-inspired computing and machine learning analytics for mental well-being in the field of life sciences innovation. Featured research reinforces the goal of revolutionizing the delivery of biological services through a medical assistive environment and facilitating the independent living of patients.
  • NexusXp: The Connected Lab
    SLAS Technology explores the Lab of the Future with the SI “NexusXp: The Connected Lab” in the field of lab automation. Research articles within this edition aim to explore cutting-edge advancements, innovative technologies and visionary concepts shaping the future of laboratories.
  • Biomedical Imaging: New Frontiers in Molecular and Cellular Visualization
    This SI highlights emerging solutions, such as integrating AI and quantum imaging, which promise to enhance resolution, sensitivity and data processing capabilities significantly—bringing together contributions that showcase the transformative advancements in biomedical imaging technologies reshaping clinical practice and biomedical research.

 

This issue of SLAS Technology is available at https://www.slas-technology.org/issue/S2472-6303(25)X0004-2

*****

SLAS Technology reveals how scientists adapt technological advancements for life sciences exploration and experimentation in biomedical research and development. The journal emphasizes scientific and technical advances that enable and improve:

  • Life sciences research and development
  • Drug delivery
  • Diagnostics
  • Biomedical and molecular imaging
  • Personalized and precision medicine

SLAS (Society for Laboratory Automation and Screening) is an international professional society of academic, industry and government life sciences researchers and the developers and providers of laboratory automation technology. The SLAS mission is to bring together researchers in academia, industry and government to advance life sciences discovery and technology via education, knowledge exchange and global community building.

SLAS Technology: Translating Life Sciences Innovation, 2024 Impact Factor 3.7. Editor-in-Chief Edward Kai-Hua Chow, PhD, KYAN Technologies, Los Angeles, CA (USA).

 

###
