A.I.
Contrary to common belief, artificial intelligence will not put you out of work
New research unveils the complex nature of human-AI interactions: AI favors junior workers, but not for the reasons you'd expect
Peer-Reviewed Publication
INFORMS journal Management Science
New Study Key Takeaways:
- While AI benefits workers with greater task-based experience, senior workers gain less from AI due to lower trust in AI.
- Lower trust is likely triggered by the senior workers’ broader job responsibilities.
- Employers should consider different worker experience levels and experience types when evaluating job performance in roles that require teaming with AI.
BALTIMORE, MD, November 2, 2023 – New research in the INFORMS journal Management Science is providing insights for business leaders on how work experience affects employees interacting with AI.
The study, “Friend or Foe? Teaming Between Artificial Intelligence and Workers with Variation in Experience,” examines how two major types of human work experience – narrow experience based on specific task volume, and broad experience based on seniority – influence human-AI team dynamics.
“We developed an AI solution for medical chart coding in a publicly traded company and conducted a field study among its knowledge workers,” says Weiguang Wang of the University of Rochester, lead author of the study. “We were surprised by what we found in the study. The different dimensions of work experience have distinct interactions with AI and play unique roles in human-AI teaming.”
“While one might think that less experienced workers should benefit more from the help of AI, we find the opposite – AI benefits workers with greater task-based experience. At the same time, senior workers, despite their greater experience, gain less from AI than their junior colleagues,” says Guodong (Gordon) Gao of Johns Hopkins Carey Business School, and study co-author.
Further investigation reveals that senior workers’ relatively lower productivity lift from AI is not a result of seniority per se, but rather of their higher sensitivity to AI’s imperfections, which lowers their trust in AI.
“This finding presents a dilemma: Employees with greater experience are in a better position to leverage AI for productivity, but the senior employees who assume greater responsibilities and care about the organization tend to shy away from AI because they see the risks of relying on AI’s assistance. As a result, they are not effectively leveraging AI,” says Ritu Agarwal of Johns Hopkins Carey Business School, a co-author of the study.
The researchers urge employers to carefully consider different worker experience types and levels when introducing AI into the workplace. New workers with less task experience are disadvantaged in leveraging AI, while senior workers with more organizational experience may be concerned about the potential risks AI introduces. Addressing these distinct challenges is key to productive human-AI teaming.
About INFORMS and Management Science
Management Science is a premier peer-reviewed scholarly journal focused on research using quantitative approaches to study all aspects of management in companies and organizations. It is published by INFORMS, the leading international association for data and decision science professionals. More information is available at www.informs.org or @informs.
###
JOURNAL
Management Science
METHOD OF RESEARCH
Observational study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
“Friend or Foe? Teaming Between Artificial Intelligence and Workers with Variation in Experience”
Learning to forget – a weapon in the arsenal against harmful AI
With the AI summit well underway, researchers are keen to raise the very real problem associated with the technology – teaching it how to forget
Peer-Reviewed Publication
Society is now abuzz with modern AI and its exceptional capabilities; we are constantly reminded of its potential benefits across so many areas, permeating practically all facets of our lives – but also of its dangers.
In an emerging field of research, scientists are highlighting an important weapon in our arsenal for mitigating the risks of AI: ‘machine unlearning’. They are helping to figure out new ways of making AI models known as deep neural networks (DNNs) forget data that poses a risk to society.
The problem is that re-training AI programmes to ‘forget’ data is an expensive and arduous task. Modern DNNs, such as those underpinning ‘Large Language Models’ (like ChatGPT, Bard, etc.), require massive resources to train, taking weeks or months to do so. They also consume tens of gigawatt-hours of energy per training run, with some research estimating this to be enough energy to power thousands of households for a year.
Machine unlearning is a burgeoning field of research that could remove troublesome data from DNNs quickly, cheaply and with fewer resources, while continuing to ensure high accuracy. Computer science experts at the University of Warwick, in collaboration with Google DeepMind, are at the forefront of this research.
Professor Peter Triantafillou, Department of Computer Science, University of Warwick, recently co-authored a publication ‘Towards Unbounded Machine Unlearning’. He said: “DNNs are extremely complex structures, comprised of up to trillions of parameters. Often, we lack a solid understanding of exactly how and why they achieve their goals. Given their complexity, and the complexity and size of the datasets they are trained on, DNNs may be harmful to society.
“DNNs may be harmful, for example, by being trained on data with biases – thus propagating negative stereotypes. The data might reflect existing prejudices, stereotypes and faulty societal assumptions – such as a bias that doctors are male, nurses female – or even racial prejudices.
“DNNs might also contain data with ‘erroneous annotations’ – for example, the incorrect labelling of items, such as labelling an image as being a deep fake or not.
“Alarmingly, DNNs may be trained on data which violates the privacy of individuals. This poses a huge challenge to mega-tech companies, with significant legislation in place (for example GDPR) which aims to safeguard the right to be forgotten – that is the right of any individual to request that their data be deleted from any dataset and AI programme.
“Our recent research has derived a new ‘machine unlearning’ algorithm that ensures DNNs can forget dodgy data without compromising overall AI performance. The algorithm can be applied to the DNN, causing it to forget specifically the data we need it to, without having to re-train it entirely from scratch. It is the only work to differentiate the needs, requirements and metrics for success among the three different types of data that need to be forgotten: biases, erroneous annotations and issues of privacy.
“Machine unlearning is an exciting field of research that can be an important tool towards mitigating the risks of AI.”
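For readers curious about what a ‘machine unlearning’ procedure can look like in practice, the sketch below shows one simple, generic baseline: fine-tuning an already-trained network so that its loss stays low on the data to be retained while being pushed up on the data to be forgotten. This is an illustrative, assumption-laden example, not the algorithm proposed in ‘Towards Unbounded Machine Unlearning’; the model, data loaders and hyperparameters are placeholders.

    # Illustrative machine-unlearning baseline (not the paper's algorithm):
    # keep the network accurate on a "retain" set while raising its loss on a
    # "forget" set, instead of re-training from scratch.
    import torch
    import torch.nn.functional as F

    def unlearn(model, retain_loader, forget_loader, epochs=3, lr=1e-4, forget_weight=1.0):
        optimiser = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for (x_keep, y_keep), (x_drop, y_drop) in zip(retain_loader, forget_loader):
                optimiser.zero_grad()
                retain_loss = F.cross_entropy(model(x_keep), y_keep)  # minimise: preserve performance
                forget_loss = F.cross_entropy(model(x_drop), y_drop)  # maximise: erase the influence
                loss = retain_loss - forget_weight * forget_loss
                loss.backward()
                optimiser.step()
        return model

Published unlearning methods typically add further safeguards – for example, distilling knowledge from the original model on the retained data – so that forgetting does not degrade overall accuracy, which is the property the Warwick and Google DeepMind work evaluates across biases, erroneous annotations and privacy requests.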
Read the full paper here: https://arxiv.org/abs/2302.09880
Notes to Editors
This research is to be presented at the Thirty-Seventh Annual Conference on Neural Information Processing Systems (NeurIPS) in December 2023. It is a collaborative effort between Professor Peter Triantafillou, PhD student Meghdad Kurmanji of the Department of Computer Science at the University of Warwick, and researchers from Google DeepMind (Eleni Triantafillou and Jamie Hayes).
The team is also organizing the first-ever competition on machine unlearning at NeurIPS 2023, https://unlearning-challenge.github.io/, hosted by Kaggle (with currently ca. 950 participating teams from across the world), to derive unlearning algorithms for a challenging task (unlearning faces from a face dataset), https://www.kaggle.com/competitions/neurips-2023-machine-unlearning/leaderboard. At the same time, the team is organizing a workshop on machine unlearning at NeurIPS 2023.
Media contact
University of Warwick press office contact:
Annie Slinn 07876876934
Communications Officer | Press & Media Relations | University of Warwick Email: annie.slinn@warwick.ac.uk
AI should be better understood and managed – new research warns
Artificial Intelligence (AI) and algorithms can and are being used to radicalize, polarize, and spread racism and political instability, says a Lancaster University academic
Peer-Reviewed Publication
Professor of International Security at Lancaster University Joe Burton argues that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but can be contributors to polarization, radicalism and political violence - posing a threat to national security.
Further to this, he says, securitization processes (presenting the technology as an existential threat) have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated.
Professor Burton’s article, ‘Algorithmic extremism? The securitization of Artificial Intelligence (AI) and its impact on radicalism, polarization and political violence’, is published in Elsevier’s high-impact journal Technology in Society.
“AI is often framed as a tool to be used to counter violent extremism,” says Professor Burton. “Here is the other side of the debate.”
The paper examines how AI has been securitized throughout its history and in media and popular-culture depictions, and explores modern examples of AI having polarizing, radicalizing effects that have contributed to political violence.
The article cites the classic film series The Terminator, which depicted a holocaust committed by a ‘sophisticated and malignant’ artificial intelligence, as having done more than anything else to frame popular awareness of artificial intelligence and the fear that machine consciousness could lead to devastating consequences for humanity – in this case a nuclear war and a deliberate attempt to exterminate a species.
“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” writes Professor Burton.
Sophisticated drones, such as those being used in the war in Ukraine, are, says Professor Burton, now capable of full autonomy, including functions such as target identification and recognition.
And while there has been a broad and influential campaign, including debate at the UN, to ban ‘killer robots’ and to keep humans in the loop in life-or-death decision-making, the acceleration of AI and its integration into armed drones has, he says, continued apace.
In cyber security – the security of computers and computer networks – AI is being used extensively, most prevalently in (dis)information and online psychological warfare.
Putin’s government’s actions against US electoral processes in 2016 and the ensuing Cambridge Analytica scandal showed the potential for AI to be combined with big data (including social media) to create political effects centred around polarization, the encouragement of radical beliefs and the manipulation of identity groups. It demonstrated the power and the potential of AI to divide societies.
And during the pandemic, AI was seen as a positive in tracking and tracing the virus but it also led to concerns over privacy and human rights.
The article examines AI technology itself, arguing that problems exist in the design of AI, the data that it relies on, how it is used, and in its outcomes and impacts.
The paper concludes with a strong message to researchers working in cyber security and International Relations.
“AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed,” writes Professor Burton, an expert in cyber conflict and emerging technologies and who is part of the University's Security and Protection Science initiative.
“Understanding the divisive effects of the technology at all stages of its development and use is clearly vital.
“Scholars working in cyber security and International Relations have an opportunity to build these factors into the emerging AI research agenda and avoid treating AI as a politically neutral technology.
“In other words, the security of AI systems, and how they are used in international, geopolitical struggles, should not override concerns about their social effects.”
As one of only a handful of universities whose education, research and training is recognised by the UK’s National Cyber Security Centre (NCSC), part of GCHQ, Lancaster is investing heavily in the next generation of cyber security leaders. As well as boosting the skills and talent pipeline in the region by building on its NCSC-certified Master’s degree with a new undergraduate degree in cyber security, it has launched a trailblazing Cyber Executive Masters in Business Education.
Lancaster University is renowned for its support for the region’s SME community through its digital knowledge exchange teams, and has delivered a number of cyber security programmes including via the Lancashire Cyber Foundry and Greater Manchester Cyber Foundry.
JOURNAL
Technology in Society
METHOD OF RESEARCH
Commentary/editorial
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence
ARTICLE PUBLICATION DATE
1-Nov-2023
AI trained to identify least green homes by Cambridge researchers
University of Cambridge media release
First-of-its-kind AI model can help policy-makers efficiently identify and prioritize houses for retrofitting and other decarbonizing measures.
FOR IMMEDIATE RELEASE
- Model identified ‘hard-to-decarbonize’ houses with 90% precision, and additional data will improve this.
- Model was trained with open-source data, including energy performance certificates and street- and aerial-view images, and could be used anywhere in the world.
- Model can even identify specific parts of houses losing most heat, including roofs and windows.
‘Hard-to-decarbonize’ (HtD) houses are responsible for over a quarter of all direct housing emissions – a major obstacle to achieving net zero – but are rarely identified or targeted for improvement.
Now a new ‘deep learning’ model trained by researchers from Cambridge University’s Department of Architecture promises to make it far easier, faster and cheaper to identify these high priority problem properties and develop strategies to improve their green credentials.
Houses can be ‘hard to decarbonize’ for various reasons, including their age, structure, location, socio-economic barriers and the availability of data. Policymakers have tended to focus mostly on generic buildings or on specific hard-to-decarbonize technologies, but the study, published in the journal Sustainable Cities and Society, could help to change this.
Maoran Sun, an urban researcher and data scientist, and his PhD supervisor Dr Ronita Bardhan, who leads Cambridge’s Sustainable Design Group, show that their AI model can classify HtD houses with 90% precision and expect this to rise as they add more data, work which is already underway.
Dr Bardhan said: “This is the first time that AI has been trained to identify hard-to-decarbonize buildings using open source data to achieve this.
“Policymakers need to know how many houses they have to decarbonize, but they often lack the resources to perform detailed audits on every house. Our model can direct them to high-priority houses, saving them precious time and resources.”
The model also helps authorities to understand the geographical distribution of HtD houses, enabling them to target and deploy interventions efficiently.
The researchers trained their AI model using data for their home city of Cambridge, in the United Kingdom. They fed in data from Energy Performance Certificates (EPCs) as well as data from street view images, aerial view images, land surface temperature and building stock. In total, their model identified 700 HtD houses and 635 non-HtD houses. All of the data used was open source.
Maoran Sun said: “We trained our model using the limited EPC data which was available. Now the model can predict for the city’s other houses without the need for any EPC data.”
Bardhan added: “This data is available freely and our model can even be used in countries where datasets are very patchy. The framework enables users to feed in multi-source datasets for identification of HtD houses.”
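As a rough illustration of how multi-source data of this kind can be combined in a single model – a hypothetical sketch, not the architecture published by Sun and Bardhan – an image backbone can encode street- and aerial-view photographs while a small tabular branch encodes features such as EPC ratings and land surface temperature, with the two feature vectors fused for a binary hard-to-decarbonize prediction:

    # Hypothetical multi-source classifier for hard-to-decarbonize (HtD) houses.
    # The architecture and feature choices are illustrative assumptions, not the
    # model described in Sun & Bardhan (2023).
    import torch
    import torch.nn as nn
    from torchvision import models

    class HtDClassifier(nn.Module):
        def __init__(self, n_tabular_features=8):
            super().__init__()
            # Image branch: a pretrained ResNet encodes street/aerial views.
            backbone = models.resnet18(weights="IMAGENET1K_V1")
            backbone.fc = nn.Identity()  # expose 512-dimensional image features
            self.image_branch = backbone
            # Tabular branch: e.g. EPC rating, building age, land surface temperature.
            self.tabular_branch = nn.Sequential(nn.Linear(n_tabular_features, 32), nn.ReLU())
            # Fusion head: concatenate both feature vectors, predict HtD vs non-HtD.
            self.head = nn.Sequential(nn.Linear(512 + 32, 64), nn.ReLU(), nn.Linear(64, 2))

        def forward(self, image, tabular):
            features = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
            return self.head(features)

A framework of this shape can be trained on the labelled houses that do have EPC records and then applied to the rest of a city’s housing stock, which is the kind of transfer the researchers describe.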
Sun and Bardhan are now working on an even more advanced framework which will bring additional data layers relating to factors including energy use, poverty levels and thermal images of building facades. They expect this to increase the model’s accuracy but also to provide even more detailed information.
The model is already capable of identifying specific parts of buildings, such as roofs and windows, that lose the most heat, and of telling whether a building is old or modern. But the researchers are confident they can significantly increase detail and accuracy.
They are already training AI models based on other UK cities using thermal images of buildings, and are collaborating with a space products-based organisation to benefit from higher resolution thermal images from new satellites. Bardhan has been part of the NSIP – UK Space Agency program where she collaborated with the Department of Astronomy and Cambridge Zero on using high resolution thermal infrared space telescopes for globally monitoring the energy efficiency of buildings.
Sun said: “Our models will increasingly help residents and authorities to target retrofitting interventions to particular building features like walls, windows and other elements.”
Bardhan explains that, until now, decarbonization policy decisions have been based on evidence derived from limited datasets, but she is optimistic about AI’s power to change this.
“We can now deal with far larger datasets. Moving forward with climate change, we need adaptation strategies based on evidence of the kind provided by our model. Even very simple street view photographs can offer a wealth of information without putting anyone at risk.”
The researchers argue that by making data more visible and accessible to the public, it will become much easier to build consensus around efforts to achieve net zero.
“Empowering people with their own data makes it much easier for them to negotiate for support,” Bardhan said.
She added: “There is a lot of talk about the need for specialised skills to achieve decarbonisation but these are simple data sets and we can make this model very user friendly and accessible for the authorities and individual residents.”
Aerial-view images of houses in Cambridge, UK. Red represents the regions contributing most to the hard-to-decarbonize identification; blue represents low contribution.
CREDIT
Ronita Bardhan
Cambridge as a study site
Cambridge is an atypical city but an informative site on which to base the initial model. Bardhan notes that Cambridge is relatively affluent, meaning there is a greater willingness and financial ability to decarbonize houses.
“Cambridge isn’t ‘hard to reach’ for decarbonisation in that sense,” Bardhan said. “But the city’s housing stock is quite old and building bylaws prevent retrofitting and the use of modern materials in some of the more historically important properties. So it faces interesting challenges.”
The researchers will discuss their findings with Cambridge City Council. Bardhan previously worked with the Council to assess council houses for heat loss. They will also continue to work with colleagues at Cambridge Zero and the University’s Decarbonisation Network.
Reference
M. Sun & R. Bardhan, ‘Identifying Hard-to-Decarbonize houses from multi-source data in Cambridge, UK’, Sustainable Cities and Society (2023). DOI: 10.1016/j.scs.2023.105015
Media contacts
Tom Almeroth-Williams, Communications Manager (Research), University of Cambridge: researchcommunications@admin.cam.ac.uk / tel: +44 (0) 7540 139 444
Ronita Bardhan, University of Cambridge: rb867@cam.ac.uk
JOURNAL
Sustainable Cities and Society
ARTICLE TITLE
Identifying Hard-to-Decarbonize houses from multi-source data in Cambridge, UK
ARTICLE PUBLICATION DATE
1-Nov-2023