It’s possible that I shall make an ass of myself. But in that case one can always get out of it with a little dialectic. I have, of course, so worded my proposition as to be right either way (K.Marx, Letter to F.Engels on the Indian Mutiny)
Thursday, April 10, 2025
New AI tool makes sense of public opinion data in minutes, not months
DECOTA transforms open-ended survey responses into clear themes — helping policymakers make better use of underutilized public feedback
AI tool DECOTA analyses free-text data rapidly, affordably, and with human-like accuracy
Free-text data is rich in insight, but is often underused due to the time and cost of analysing it manually
Research team at the University of Bath say DECOTA could help ensure more public voices are included in policy decisions
A powerful new AI tool, published today, offers a fast, low-cost way to understand public attitudes – by automatically identifying common themes in open-ended responses to surveys and policy consultations.
DECOTA – the Deep Computational Text Analyser – is the first open-access method for analysing free-text responses to surveys and consultations at scale. Detailed in a research paper published in Psychological Methods today (Monday 7 April), the tool delivers insights around 380 times faster and over 1,900 times more cheaply than human analysis, while achieving 92% agreement with human-coded results.
It uses fine-tuned large language models to identify key themes and sub-themes in open-ended responses – where people share their views in their own words. While rich in insight, this type of qualitative data is notoriously time-consuming to analyse – meaning it often goes unused.
Developed by a multidisciplinary team at the University of Bath – led by recent PhD graduates Dr Lois Player and Dr Ryan Hughes, with support from Professor Lorraine Whitmarsh – the tool is designed to help governments and organisations better understand the people they serve.
The tool was initially developed to better understand opinions about climate policies, but it can be applied in a wide range of contexts. It has already garnered interest from four UK government bodies, as well as academic institutions and global think tanks.
Dr Lois Player, who completed her PhD in Behavioural Science within Bath’s IAAPS Doctoral Training Centre, explains: “When thousands of people respond to surveys or consultations, it’s often impossible to analyse all that free-text data by hand. DECOTA makes it possible to summarise which themes are most common in large populations – in a way that simply wouldn’t be feasible otherwise.”
Detailed, human-like accuracy
DECOTA is grounded in a well-established qualitative analysis technique known as thematic analysis, which sees researchers manually group free-text data into common themes. Mirroring this, DECOTA uses a six-step approach involving two fine-tuned large language models and a clustering approach to identify the themes and sub-themes underlying the data.
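The published pipeline is openly available on the Open Science Framework, but as a rough, hypothetical sketch of the general pattern described here (embed responses, cluster them, then ask an LLM to name each cluster), something like the following could be assembled. The model name, cluster count, and placeholder labelling function are illustrative assumptions, not DECOTA's actual code.

```python
# Illustrative sketch only, not DECOTA's published pipeline.
# Assumes the sentence-transformers and scikit-learn packages;
# the LLM labelling step is left as a placeholder.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "Public transport should be cheaper than driving.",
    "I worry heat pumps are too expensive to install.",
    "Buses in rural areas are far too infrequent.",
]

# 1. Embed each free-text response as a vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# 2. Cluster the embeddings so similar responses group together.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)

# 3. Hand each cluster's responses to an LLM and ask for a short theme label
#    (placeholder; DECOTA uses two fine-tuned LLMs for this and related steps).
def label_cluster_with_llm(texts):
    prompt = "Summarise the common theme in these responses:\n" + "\n".join(texts)
    return prompt  # replace with an actual LLM call

for k in range(2):
    members = [r for r, c in zip(responses, kmeans.labels_) if c == k]
    print(f"Cluster {k}:", label_cluster_with_llm(members)[:80], "...")
```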
The team compared DECOTA’s performance to human analysts on four example datasets. DECOTA detected 92% of the sub-themes found by analysts, and 90% of the broader themes. Remarkably, DECOTA generated insights in just 10 minutes, compared to an average of 63 hours for the human analysts – around 380 times faster.
These time savings have huge cost implications – with DECOTA analysing responses from around 1,000 participants for just $0.82, compared to approximately $1,575 using a human research assistant paid $25 per hour. DECOTA is even 240 times faster and 1,220 times cheaper than existing state-of-the-art computational methods, such as topic modelling.
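Taking the reported figures at face value, the headline ratios are easy to reproduce; this is only a back-of-envelope check of the arithmetic, not code from the paper.

```python
# Back-of-envelope check of the reported ratios (figures from the press release).
human_hours = 63                 # average human analysis time per dataset
decota_minutes = 10              # DECOTA analysis time
human_cost = 25 * human_hours    # research assistant at $25/hour -> $1,575
decota_cost = 0.82               # reported cost for ~1,000 responses

print(human_hours * 60 / decota_minutes)   # ~378x faster (reported as ~380x)
print(human_cost / decota_cost)            # ~1,921x cheaper (reported as >1,900x)
```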
“Importantly, DECOTA is not designed to replace human thematic analysis, but rather complement it,” explains Dr Player. “We want it to unlock the huge volumes of data going unanalysed, allowing more voices to be heard in policy and decision-making settings, and freeing up valuable researcher time for deeper, more interpretative work.”
Going beyond thematic analysis, the tool also determines which demographic groups are more likely to mention certain themes. For example, it can ascertain if women are more likely than men to mention a specific issue, or whether younger people are more likely than older people to highlight certain themes. It also draws out representative quotes for each sub-theme, aiding interpretation of results.
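The press release does not say which statistical test underpins these demographic comparisons, but a minimal, hypothetical version of that kind of theme-by-group analysis might be a contingency-table test. The data and the choice of a chi-square test below are illustrative assumptions, not DECOTA's documented method.

```python
# Hypothetical theme-by-demographic comparison on made-up data.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "gender": ["woman"] * 120 + ["man"] * 120,
    "mentions_cost_of_living": [1] * 70 + [0] * 50 + [1] * 45 + [0] * 75,
})

# Cross-tabulate who mentions the theme, then test for an association.
table = pd.crosstab(df["gender"], df["mentions_cost_of_living"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```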
Transparency built-in
Dr Ryan Hughes, whose PhD focused on Mechatronics and Data Science, adds: “DECOTA doesn’t just summarise data. It also provides depth, showing who said what, and how often. It’s also transparent by design. It doesn’t hide how it processes data: researchers can inspect and edit each stage of the pipeline, and all the code is openly available on the Open Science Framework.”
Professor Lorraine Whitmarsh says: “DECOTA offers a huge leap forward in the analysis of open-ended questionnaire data. Applying machine learning to analyse large volumes of text will save time and money for researchers and policymakers wanting to understand public attitudes, allowing for a stronger role of public engagement in policy design.”
Openly accessible online, the tool is detailed in the research paper The Use of Large Language Models for Qualitative Research: the Deep Computational Text Analyser (DECOTA), published today in the journal Psychological Methods (DOI: 10.1037/met0000753).
The team say that DECOTA will continue to be developed over time, with plans for a user-friendly web application, accessible to those unfamiliar with code.
Parties interested in receiving updates about DECOTA or participating in the initial rollout can express their interest via a contact form at: https://tinyurl.com/DECOTAform
The University of Bath is one of the UK's leading universities, with a reputation for high-impact research, excellence in education, student experience and graduate prospects.
We are ranked in the top 10 of all of the UK’s major university guides. We are also ranked among the world’s top 10% of universities, placing 150th in the QS World University Rankings 2025. Bath was rated in the world’s top 10 universities for sport in the QS World University Rankings by Subject 2024.
Research from Bath is helping to change the world for the better. Across the University’s three Faculties and School of Management, our research is making an impact in society, leading to low-carbon living, positive digital futures, and improved health and wellbeing. Find out all about our Research with Impact: https://www.bath.ac.uk/campaigns/research-with-impact/
CAMBRIDGE, MA – Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate.
MIT researchers recently developed a framework, based on a new privacy metric called PAC Privacy, that could maintain the performance of an AI model while ensuring sensitive data, such as medical images or financial records, remain safe from attackers. Now, they’ve taken this work a step further by making their technique more computationally efficient, improving the tradeoff between accuracy and privacy, and creating a formal template that can be used to privatize virtually any algorithm without needing access to that algorithm’s inner workings.
The team utilized their new version of PAC Privacy to privatize several classic algorithms for data analysis and machine-learning tasks.
They also demonstrated that more “stable” algorithms are easier to privatize with their method. A stable algorithm’s predictions remain consistent even when its training data are slightly modified. Greater stability helps an algorithm make more accurate predictions on previously unseen data.
The researchers say the increased efficiency of the new PAC Privacy framework, and the four-step template one can follow to implement it, would make the technique easier to deploy in real-world situations.
“We tend to consider robustness and privacy as unrelated to, or perhaps even in conflict with, constructing a high-performance algorithm. First, we make a working algorithm, then we make it robust, and then private. We’ve shown that is not always the right framing. If you make your algorithm perform better in a variety of settings, you can essentially get privacy for free,” says Mayuri Sridhar, an MIT graduate student and lead author of a paper on this privacy framework.
She is joined in the paper by Hanshen Xiao PhD ’24, who will start as an assistant professor at Purdue University in the fall; and senior author Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering. The research will be presented at the IEEE Symposium on Security and Privacy.
Estimating noise
To protect sensitive data that were used to train an AI model, engineers often add noise, or generic randomness, to the model so it becomes harder for an adversary to guess the original training data. This noise reduces a model’s accuracy, so the less noise one can add, the better.
PAC Privacy automatically estimates the smallest amount of noise one needs to add to an algorithm to achieve a desired level of privacy.
The original PAC Privacy algorithm runs a user’s AI model many times on different samples of a dataset. It measures the variance as well as correlations among these many outputs and uses this information to estimate how much noise needs to be added to protect the data.
This new variant of PAC Privacy works the same way but does not need to represent the entire matrix of data correlations across the outputs; it just needs the output variances.
“Because the thing you are estimating is much, much smaller than the entire covariance matrix, you can do it much, much faster,” Sridhar explains. This means that one can scale up to much larger datasets.
Adding noise can hurt the utility of the results, and it is important to minimize utility loss. Due to computational cost, the original PAC Privacy algorithm was limited to adding isotropic noise, which is added uniformly in all directions. Because the new variant estimates anisotropic noise, which is tailored to specific characteristics of the training data, a user could add less overall noise to achieve the same level of privacy, boosting the accuracy of the privatized algorithm.
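A toy sketch of the idea described above (subsample the data, rerun the algorithm, and scale per-coordinate Gaussian noise to the observed output variance) might look like the following. It is not the researchers' implementation and provides no formal privacy guarantee; the noise multiplier is an arbitrary placeholder.

```python
# Toy sketch of variance-scaled noise; NOT the MIT team's implementation.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=(10_000, 3))

def algorithm(x):
    # The algorithm being privatized: here, simply the column means.
    return x.mean(axis=0)

# Run the algorithm on many random subsamples and record the outputs.
outputs = np.array([
    algorithm(data[rng.choice(len(data), size=1_000, replace=False)])
    for _ in range(50)
])

# Per-coordinate variance of the outputs (the new variant needs only this,
# not the full covariance matrix across outputs).
per_coord_var = outputs.var(axis=0)

# Anisotropic noise: scale each coordinate's noise to its own variance.
noise_scale = np.sqrt(per_coord_var) * 3.0   # 3.0 is an arbitrary privacy knob here
private_output = algorithm(data) + rng.normal(0.0, noise_scale)
print(private_output)
```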
Privacy and stability
As she studied PAC Privacy, Sridhar theorized that more stable algorithms would be easier to privatize with this technique. She used the more efficient variant of PAC Privacy to test this theory on several classical algorithms.
Algorithms that are more stable have less variance in their outputs when their training data change slightly. PAC Privacy breaks a dataset into chunks, runs the algorithm on each chunk of data, and measures the variance among outputs. The greater the variance, the more noise must be added to privatize the algorithm.
Employing stability techniques to decrease the variance in an algorithm’s outputs would also reduce the amount of noise that needs to be added to privatize it, she explains.
“In the best cases, we can get these win-win scenarios,” she says.
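As a hypothetical illustration of that win-win, the snippet below compares a less stable estimator with a more stable one (ordinary least squares versus heavily regularized ridge regression, an example chosen for illustration rather than taken from the paper). The more stable model's outputs vary less across data chunks, so a variance-scaled scheme would need to add less noise to it.

```python
# Illustrative comparison only: a more stable estimator has lower output
# variance across data chunks, so a variance-scaled scheme adds less noise.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=5.0, size=2_000)

def chunk_variance(make_model, n_chunks=20):
    # Fit the model on each chunk and measure how much its outputs vary.
    coefs = []
    for chunk in np.array_split(np.arange(len(X)), n_chunks):
        coefs.append(make_model().fit(X[chunk], y[chunk]).coef_)
    return np.var(np.array(coefs), axis=0).mean()

print("OLS   output variance:", chunk_variance(LinearRegression))
print("Ridge output variance:", chunk_variance(lambda: Ridge(alpha=100.0)))
```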
The team showed that these privacy guarantees remained strong regardless of the algorithm they tested, and that the new variant of PAC Privacy required an order of magnitude fewer trials to estimate the noise. They also tested the method in attack simulations, demonstrating that its privacy guarantees could withstand state-of-the-art attacks.
“We want to explore how algorithms could be co-designed with PAC Privacy, so the algorithm is more stable, secure, and robust from the beginning,” Devadas says. The researchers also want to test their method with more complex algorithms and further explore the privacy-utility tradeoff.
“The question now is, when do these win-win situations happen, and how can we make them happen more often?” Sridhar says.
####
This research is supported, in part, by Cisco Systems, Capital One, the U.S. Department of Defense, and a MathWorks Fellowship.
AI surge to double data centre electricity demand by 2030: IEA
Big Tech companies have become mega users of power that will increase in the race to adopt artificial intelligence - Copyright AFP RONNY HARTMANN
Nathalie ALONSO
Electricity consumption by data centres will more than double by 2030, driven by artificial intelligence applications that will create new challenges for energy security and CO2 emission goals, the IEA said Thursday.
At the same time, AI can unlock opportunities to produce and consume electricity more efficiently, the International Energy Agency (IEA) said in its first report on the energy implications of AI.
Data centres represented about 1.5 percent of global electricity consumption in 2024, and their consumption has increased by around 12 percent annually over the past five years. Generative AI requires colossal computing power to process information accumulated in gigantic databases.
Together, the United States, Europe, and China currently account for about 85 percent of data centre consumption.
Big tech companies increasingly recognise their growing need for power. Google last year signed a deal to get electricity from small nuclear reactors to help power its part in the artificial intelligence race.
Microsoft is to use energy from a restarted reactor at Three Mile Island, the site of America’s worst nuclear accident, a partial meltdown in 1979. Amazon also signed an accord last year to use nuclear power for its data centres.
At the current rate, data centres will consume about three percent of global electricity by 2030, the report said.
According to the IEA, data centre electricity consumption will reach about 945 terawatt hours (TWh) by 2030.
“This is slightly more than Japan’s total electricity consumption today. AI is the most important driver of this growth, alongside growing demand for other digital services,” said the report.
One 100 megawatt data centre can use as much power as 100,000 households, the report said. But it highlighted that new data centres, already under construction, could use as much power as two million households.
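A rough sanity check of the household comparison follows; the per-household figure of about 9,000 kWh a year is an assumption made for illustration, not a number from the IEA report.

```python
# Rough sanity check of the "100 MW data centre ~ 100,000 households" comparison.
# The household consumption figure (~9,000 kWh/year) is an assumption.
datacentre_mw = 100
hours_per_year = 24 * 365
datacentre_kwh_per_year = datacentre_mw * 1_000 * hours_per_year  # ~876 million kWh

household_kwh_per_year = 9_000
print(datacentre_kwh_per_year / household_kwh_per_year)  # ~97,000 households
```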
The Paris-based energy policy advisory group said that “artificial intelligence has the potential to transform the energy sector in the coming decade, driving a surge in electricity demand from data centers worldwide, while also unlocking significant opportunities to cut costs, enhance competitiveness, and reduce emissions”.
Hoping to keep ahead of China in the field of artificial intelligence, US President Donald Trump has launched the creation of a “National Council for Energy Dominance” tasked with boosting electricity production.
Right now, coal provides about 30 percent of the energy needed to power data centres, but renewables and natural gas will increase their shares because of their lower costs and wider availability in key markets.
The growth of data centres will inevitably increase carbon emissions linked to electricity consumption, from 180 million tonnes of CO2 today to 300 million tonnes by 2035, the IEA said. That remains a minimal share of the 41.6 billion tonnes of global emissions estimated in 2024.
Diagnoses and treatment recommendations given by AI were more accurate than those of physicians
The study, conducted at the virtual urgent care clinic Cedars-Sinai Connect in LA, compared recommendations given in about 500 visits of adult patients with relatively common symptoms – respiratory, urinary, eye, vaginal and dental.
A new study led by Prof. Dan Zeltzer, a digital health expert from the Berglas School of Economics at Tel Aviv University, compared the quality of diagnostic and treatment recommendations made by artificial intelligence (AI) and physicians at Cedars-Sinai Connect, a virtual urgent care clinic in Los Angeles, operated in collaboration with Israeli startup K Health. The paper was published in Annals of Internal Medicine and presented at the annual conference of the American College of Physicians (ACP). This work was supported with funding by K Health.
Prof. Zeltzer explains: "Cedars-Sinai operates a virtual urgent care clinic offering telemedical consultations with physicians who specialize in family and emergency care. Recently, an AI system was integrated into the clinic—an algorithm based on machine learning that conducts initial intake through a dedicated chat, incorporates data from the patient’s medical record, and provides the attending physician with detailed diagnostic and treatment suggestions at the start of the visit—including prescriptions, tests, and referrals. After interacting with the algorithm, patients proceed to a video visit with a physician who ultimately determines the diagnosis and treatment. To ensure reliable AI recommendations, the algorithm—trained on medical records from millions of cases—only offers suggestions when its confidence level is high, giving no recommendation in about one out of five cases. In this study, we compared the quality of the AI system's recommendations with the physicians' actual decisions in the clinic."
The researchers examined a sample of 461 online clinic visits over one month during the summer of 2024. The study focused on adult patients with relatively common symptoms—respiratory, urinary, eye, vaginal and dental. In all visits reviewed, patients were initially assessed by the algorithm, which provided recommendations, and then treated by a physician in a video consultation. Afterwards, all recommendations—from both the algorithm and the physicians—were evaluated by a panel of four doctors with at least ten years of clinical experience, who rated each recommendation on a four-point scale: optimal, reasonable, inadequate, or potentially harmful. The evaluators assessed the recommendations based on the patients' medical histories, the information collected during the visit, and transcripts of the video consultations.
The compiled ratings led to interesting conclusions: AI recommendations were rated as optimal in 77% of cases, compared to only 67% of the physicians' decisions; at the other end of the scale, AI recommendations were rated as potentially harmful in a smaller portion of cases than physicians' decisions (2.8% of AI recommendations versus 4.6% of physicians' decisions). In 68% of the cases, the AI and the physician received the same score; in 21% of cases, the algorithm scored higher than the physician; and in 11% of cases, the physician's decision was considered better.
The explanations provided by the evaluators for the differences in ratings highlight several advantages of the AI system over human physicians: First, the AI more strictly adheres to medical association guidelines—for example, not prescribing antibiotics for a viral infection; second, AI more comprehensively identifies relevant information in the medical record—such as recurrent cases of a similar infection that may influence the appropriate course of treatment; and third, AI more precisely identifies symptoms that could indicate a more serious condition, such as eye pain reported by a contact lens wearer, which could signal an infection. Physicians, on the other hand, are more flexible than the algorithm and have an advantage in assessing the patient's real condition. For example, if a COVID-19 patient reports shortness of breath, a doctor may recognize it as a relatively mild respiratory congestion, whereas the AI, based solely on the patient's answers, might refer them unnecessarily to the emergency room.
Prof. Zeltzer concludes: "In this study, we found that AI, based on a targeted intake process, can provide diagnostic and treatment recommendations that are, in many cases, more accurate than those made by physicians. One limitation of the study is that we do not know which of the physicians reviewed the AI's recommendations in the available chart, or to what extent they relied on these recommendations. Thus, the study only measured the accuracy of the algorithm’s recommendations and not their impact on the physicians. The uniqueness of the study lies in the fact that it tested the algorithm in a real-world setting with actual cases, while most studies focus on examples from certification exams or textbooks. The relatively common conditions included in our study represent about two-thirds of the clinic's case volume, and thus the findings can be meaningful for assessing AI's readiness to serve as a decision-support tool in medical practice. We can envision a near future in which algorithms assist in an increasing portion of medical decisions, bringing certain data to the doctor's attention, and facilitating faster decisions with fewer human errors. Of course, many questions still remain about the best way to implement AI in the diagnostic and treatment process, as well as the optimal integration between human expertise and artificial intelligence in medicine."
Other authors involved in the study include Zehavi Kugler, MD; Lior Hayat, MD; Tamar Brufman, MD; Ran Ilan Ber, PhD; Keren Leibovich, PhD; Tom Beer, MSc; Ilan Frank, MSc; Caroline Goldzweig, MD, MSHS; and Joshua Pevnick, MD, MSHS.
A radiologist interpreting magnetic resonance imaging. Image by The Medical Futurist editors, from “The Future of Radiology and Artificial Intelligence”, The Medical Futurist (2017-06-29), CC 4.0.
As artificial intelligence (AI) rapidly integrates into health care, a new study by researchers at the Icahn School of Medicine at Mount Sinai reveals that generative AI models may recommend different treatments for the same medical condition based solely on a patient’s socioeconomic and demographic background.
Serving patients by wealth and social class
The findings highlight the importance of early detection and intervention to ensure that AI-driven care is safe, effective, and appropriate for all.
As part of their investigation, the researchers stress-tested nine large language models (LLMs) on 1,000 emergency department cases, each replicated with 32 different patient backgrounds, generating more than 1.7 million AI-generated medical recommendations. Despite identical clinical details, the AI models occasionally altered their decisions based on a patient’s socioeconomic and demographic profile, affecting key areas such as triage priority, diagnostic testing, treatment approach, and mental health evaluation.
A framework for AI assurance
“Our research provides a framework for AI assurance, helping developers and health care institutions design fair and reliable AI tools,” says co-senior author Eyal Klang, MD, Chief of Generative-AI in the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai.
Klang adds: “By identifying when AI shifts its recommendations based on background rather than medical need, we inform better model training, prompt design, and oversight. Our rigorous validation process tests AI outputs against clinical standards, incorporating expert feedback to refine performance. This proactive approach not only enhances trust in AI-driven care but also helps shape policies for better health care for all.”
Bias by income?
The study showed the tendency of some AI models to escalate care recommendations—particularly for mental health evaluations—based on patient demographics rather than medical necessity.
In addition, high-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. The scale of these inconsistencies underscores the need for stronger oversight, say the researchers.
The researchers caution that the review represents only a snapshot of AI behaviour and that future research will need to include assurance testing to evaluate how AI models perform in real-world clinical settings and whether different prompting techniques can reduce bias.
Hence, as AI becomes more integrated into clinical care, it is essential to thoroughly evaluate its safety, reliability, and fairness. By identifying where these models may introduce bias, scientists can work to refine their design, strengthen oversight, and build systems that ensure patients remain at the heart of safe, effective care.
The research paper appears in Nature Medicine and it is titled “Socio-Demographic Biases in Medical Decision-Making by Large Language Models: A Large-Scale Multi-Model Analysis.”
AI threats in software development revealed in new study from The University of Texas at San Antonio
An example of a large language model.
UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat to programmers that use AI to help write code.
Joe Spracklen, a UTSA doctoral student in computer science, led the study on how large language models (LLMs) frequently generate insecure code. His team’s paper has been accepted for publication at the USENIX Security Symposium 2025, a premier cybersecurity and privacy conference.
The multi-institutional collaboration featured three additional researchers from UTSA: doctoral student A.H.M. Nazmus Sakib, postdoctoral researcher Raveen Wijewickrama, and Associate Professor Dr. Murtuza Jadliwala, director of the SPriTELab (Security, Privacy, Trust, and Ethics in Computing Research Lab). Additional collaborators were Anindya Maita from the University of Oklahoma (a former UTSA postdoctoral researcher) and Bimal Viswanath from Virginia Tech.
Hallucinations in LLMs occur when the model produces content that is factually incorrect, nonsensical or completely unrelated to the input task. Most research to date has focused on hallucinations in classical natural language generation and prediction tasks such as machine translation, summarization and conversational AI.
The research team focused on the phenomenon of package hallucination, which occurs when an LLM generates or recommends the use of a third-party software library that does not actually exist.
What makes package hallucinations a fascinating area of research is how something so simple—a single, everyday command—can lead to serious security risks.
“It doesn’t take a convoluted set of circumstances or some obscure thing to happen,” Spracklen said. “It’s just typing in one command that most people who work in those programming languages type every day. That’s all it takes. It’s very direct and very simple.”
“It’s also ubiquitous,” he added. “You can do very little with your basic Python coding language. It would take you a long time to write the code yourself, so it is universal to rely on open-source software to extend the capabilities of your programming language to accomplish specific tasks.”
LLMs are becoming increasingly popular among developers, who use the AI models to assist in assembling programs. According to the study, up to 97% of software developers incorporate generative AI into their workflow, and 30% of code written today is AI-generated. Additionally, many popular programming languages, such as Python and JavaScript, rely on centralized package repositories – PyPI and npm, respectively. Because the repositories are often open source, bad actors can upload malicious code disguised as legitimate packages.
For years, attackers have employed various tricks to get users to install their malware. Package hallucinations are the latest tactic.
“So, let’s say I ask ChatGPT to help write some code for me and it writes it. Now, let’s say in the generated code it includes a link to some package, and I trust it and run the code, but the package does not exist, it’s some hallucinated package. An astute adversary/hacker could see this behavior (of the LLM) and realize that the LLM is telling people to use this non-existent package, this hallucinated package. The adversary can then just trivially create a new package with the same name as the hallucinated package (being recommended by the LLM) and inject some bad code in it. Now, next time the LLM recommends the same package in the generated code and an unsuspecting user executes the code, this malicious package is now downloaded and executed on the user’s machine,” Jadliwala explained.
The UTSA researchers evaluated the occurrence of package hallucinations across different programming languages, settings and parameters, exploring the likelihood of erroneous package recommendations and identifying root causes.
Across 30 different tests carried out by the UTSA researchers, 440,445 of 2.23 million code samples they generated in Python and JavaScript using LLM models referenced hallucinated packages. Of the LLMs researchers tested, “GPT-series models were found four times less likely to generate hallucinated packages compared to open-source models, with a 5.2% hallucination rate compared to 21.7%,” the study stated. Python code was less susceptible to hallucinations than JavaScript, researchers found.
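Taking those reported counts at face value, the overall rate works out to roughly one in five generated samples.

```python
# Overall hallucination rate implied by the reported counts.
hallucinated = 440_445
total_samples = 2_230_000   # "2.23 million" as reported
print(f"{hallucinated / total_samples:.1%}")   # roughly 19.8%
```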
These attacks often involve naming a malicious package to mimic a legitimate one, a tactic known as a package confusion attack. In a package hallucination attack, an unsuspecting LLM user would be recommended the package in their generated code and, trusting the LLM, would download the adversary-created malicious package, resulting in a compromise.
The insidious element of this vulnerability is that it exploits growing trust in LLMs. As they continue to get more proficient in coding tasks, users will be more likely to blindly trust their output and potentially fall victim to this attack.
“If you code a lot, it’s not hard to see how this happens. We talked to a lot of people and almost everyone says they’ve noticed a package hallucination happen to them while they’re coding, but they never considered how it could be used maliciously,” Spracklen explained. “You’re placing a lot of implicit trust on the package publisher that the code they’ve shared is legitimate and not malicious. But every time you download a package, you’re downloading potentially malicious code and giving it complete access to your machine.”
While cross-referencing generated packages with a master list may help mitigate hallucinations, UTSA researchers said the best solution is to address the foundation of LLMs during their own development. The team has disclosed its findings to model providers including OpenAI, Meta, DeepSeek and Mistral AI.
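A minimal sketch of that cross-referencing idea for Python packages might check each suggested name against the public PyPI index before installing anything; the helper below is hypothetical, and a vetted local allowlist would serve the same purpose.

```python
# Minimal sketch: verify suggested package names before installing them.
# Uses PyPI's public JSON endpoint; a vetted local allowlist is safer in practice.
import requests

def package_exists_on_pypi(name: str) -> bool:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

suggested_by_llm = ["numpy", "totally-made-up-pkg-xyz"]  # hypothetical LLM output
for pkg in suggested_by_llm:
    status = "found" if package_exists_on_pypi(pkg) else "NOT FOUND, possible hallucination"
    print(f"{pkg}: {status}")
```

Note that an existence check only flags names nobody has published yet; it cannot protect against an attacker who has already registered a hallucinated name with malicious code, which is precisely the attack described above.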
Article Title
We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
Meditation and critical thinking are the ‘key to meaningful AI use’
People should learn to meditate and hone their critical thinking skills as AI becomes more integrated into daily lives, an expert suggests.
Digital strategy expert Giulio Toscani has spoken with 150 AI experts across 50 countries to understand the challenges and opportunities around human interactions with artificial intelligence.
Toscani explains: “The growing allure of AI lies in its powerful ability to cater to our desires, providing us with instant gratification whenever and however we want it. This is particularly appealing in an increasingly digital and fast-paced world where people often face isolation and boredom. However, this very convenience and responsiveness of AI also pose a risk—its addictive potential.”
The author suggests that the appeal of AI and the lack of education around its risks mean that the majority of people may adopt new technologies without fully understanding or preparing for the possible negative consequences – including data privacy, online harassment, and misinformation.
As technology rapidly evolves and becomes more embedded in our daily lives, he suggests it is ‘essential to pause and reflect on how these interactions shape and are shaped by our actions’ to create ‘a cultural shift towards valuing long-term thinking over immediate gratification’.
Toscani explains: “This reflective approach helps us understand the broader implications of technology, beyond immediate functionality and efficiency. It allows us to consider how technology impacts our lives, relationships, and ethical considerations, guiding us toward more deliberate and informed decisions.”
Toscani, who has himself practiced the ancient meditative technique of Vipassana at six 10-day silent retreats, suggests meditation in particular is a vital tool for reflecting on technology use. By carving out time to deliberately consider technology use and its impact on our lives, he suggests, people can pave the way for more intentional and healthier long-term habits.
He argues that this skill, along with more critical thinking, will be vital in the new age of AI: “This growing practice encourages individuals to critically assess their technological choices, considering the broader implications for themselves, their communities, and society at large.
“By fostering a culture of mindful reflection on how we interact with technology, meditation plays a crucial role in promoting responsible and ethical use. Emphasizing reflection helps us make more informed, deliberate decisions that enhance the positive impact of technology while mitigating its risks. This cultural shift toward mindfulness encourages us to consider the broader implications of our technological choices, fostering a more responsible and humane relationship with the tools that shape our lives.”
The author also warns against over-using AI to structure thoughts and becoming over-reliant on it to complete cognitive tasks ‘that are essential to intellectual development’.
He encourages practices that foster deep thinking, such as journaling, debate, and handwritten note-taking.
“By consciously balancing the use of AI with the development and maintenance of our cognitive skills, we can harness the power of AI without sacrificing our intellectual autonomy,” he says.
“As we continue to integrate AI into our lives, we must remain vigilant about its impact on our cognitive processes, ensuring that we remain active participants in our own intellectual development.”
Despite its challenges, the author acknowledges the opportunities from AI, which he says ‘represents a revolutionary tool in human history’.
Drawing on ancient philosophy for inspiration, he introduces the concept of ‘prAIority’—the mastery of three skills to enhance human capabilities rather than replace them: data, AI systems, and human judgment.
By focusing on ethical AI deployment, this approach aims to ensure that AI empowers humans, driving innovation while preserving individual autonomy and trust. It also means carefully choosing which AI technologies to develop and use first, based on how well they can enhance human capabilities.
This approach targets areas where AI can make a big difference, like healthcare, education, creative industries, and decision-making. AI helps handle complex tasks by enhancing human thinking, while humans provide a more intuitive approach to uncertain situations.
“Through strategic prioritization, we can unlock the full potential of AI as a force for good, transforming our world and enriching the human experience in ways that were once unimaginable,” he explains.
“By thoughtfully selecting areas where AI can be most beneficial, we ensure that this powerful technology enhances human capabilities rather than disrupts them. The primary goal is to use AI to augment human intelligence and creativity, enabling us to solve complex problems, innovate in ways previously unimaginable, and improve our overall quality of life.”