Friday, August 18, 2023

 

New paper highlights dangerous misconceptions of AI

Credit: CC0 Public Domain

Artificial Intelligence (AI) is discriminatory, susceptible to racial and sexist bias, and its improper use is sending education into a global crisis, a leading Charles Darwin University (CDU) expert warns in a new research paper.

The paper, "The critique of AI as a foundation for judicious use in higher education," urges society to look beyond the hype of AI and analyze the risks associated with adopting the technology in education, after AI ubiquitously invaded and colonized public imaginations across the world in late 2022 and early 2023.

In the paper, author and CDU AI expert Dr. Stefan Popenici discusses the two most dangerous myths about AI in education: the belief that AI is objective, factual and unbiased, when it is in fact directly related to specific values, beliefs and biases; and the belief that AI doesn't discriminate, when it is inherently discriminatory, also referencing the lack of gender diversity in the growing field.

"If we think about how technology actually operates, we realize that there is not one point in the history of humanity when technology is not directly related to specific cultures and values, beliefs and biases,  or gender stances," Dr. Popenici said.

"There is consistent research and books that are providing examples of AI algorithms that discriminate, grotesquely amplify injustice and inequality, targeting and victimizing the most vulnerable and exposing us all to unseen mechanisms of decision where we have no transparency and possibility of recourse."

Dr. Popenici examines how the discrepancy between the priorities of higher education and "Big Tech"—the most dominant companies in the technology industry—is growing, with a striking and perilous absence of critical thinking about automation in education, especially in the case of AI. This lack of concern about AI in education is affecting how students' data are used and is impacting their privacy and their ability to think critically and creatively.

"Big Tech is driven by the aims of profits and power, control and financial gain. Institutions of education and teachers have very different aims: the advancement of knowledge and to nurture educated, responsible, and active citizens that are able to live a balanced life and bring a positive contribution to their societies," Dr. Popenici said.

"It is deceiving to say, dangerous to believe, that  is... intelligent. There is no creativity, no , no depth or wisdom in what generative AI gives users after a prompt."

"Intelligence, as a human trait, is a term that describes a very different set of skills and abilities, much more complex and harder to separate, label, measure and manipulate than any computing system associated with the marketing label of AI."

"If universities and educators want to remain relevant in the future and have a real chance to reach the aims of education, it is important to consider the ethical and intellectual implications of AI."

"The critique of AI as a foundation for judicious use in higher education" was published in the Journal of Applied Learning & Teaching.

More information: Stefan Popenici et al, The critique of AI as a foundation for judicious use in higher education, Journal of Applied Learning & Teaching (2023). DOI: 10.37074/jalt.2023.6.2.4


Q&A: As AI changes education, important conversations for kids still happen off-screen

Credit: Pixabay/CC0 Public Domain

When ChatGPT surged into public life in late 2022, it brought new urgency to long-running debates: Does technology help or hinder kids' learning? How can we make sure tech's influence on kids is positive? Such questions live close to the work of Jason Yip, a University of Washington associate professor in the Information School. Yip has focused on technology's role in families in supporting collaboration and learning.

As another school year approaches, Yip spoke with UW News about his research.

What sorts of family technology issues do you study?

I look at how technologies mediate interactions between kids and their families. That could be parents or guardians, grandparents or siblings. My doctoral degree is in education, but I study families as opposed to schools because I think families make the biggest impact in learning.

I have three main pillars of that research. The first is about building new technologies to come up with creative ways that we can study different kinds of collaboration. The second is going into people's homes and doing field studies on things like how families search the internet or how they interact with voice assistants. We look at how new consumer technologies influence family collaborations. The third is co-design: How do adults work with children to co-create new technologies? I'm the director of KidsTeam UW. We have kids come to the university basically to work with us as design researchers to make technologies that work for other children.

Video: Jason Yip, from Newswise on Vimeo. Credit: University of Washington

Can you explain some ways you've explored the pros and cons of learning with technology?

I study "joint media engagement," which is a fancy way of saying that kids can work and play with others when using technology. For example, digital games are a great way parents and kids can actually learn together. I'm often of the opinion that it's not the amount that people look at their screens, but it's the quality of that screen time.

I did my postdoc at Sesame Workshop, and we've known for a long time that if a child and parent watch Sesame Street together and they're talking, the kid will learn more than by watching Sesame Street alone. We found this in studies of "Pokémon Go" and "Animal Crossing." With these games, families were learning together, and in the case of Animal Crossing, processing pandemic isolation together.

Whether I'm looking at artificial intelligence or families using internet search, I'm asking: Where does the talking and sharing happen? I think that's what people don't consider enough in this debate. And that dialogue with kids matters much more than these questions of whether technology is frying kids' brains. I grew up in the '90s when there was this vast worry about video games ruining children's lives. But we all survived, I think.

When ChatGPT came out, it was presented as this huge interruption in how we've dealt with technology. But do you think it's that unprecedented in how kids and families are going to interact and learn with it?

I see the buzz around AI as a hype curve—with a surge of excitement, then a dip, then a plateau. For a long time, we've had artificial intelligence models. Then someone figured out how to make money off AI models and everything's exploding. Goodbye, jobs. Goodbye, school. Eventually we're going to hit this apex—I think we're getting close—and then this hype will fade.

The question I have for big tech companies is: Why are we releasing products like ChatGPT with these very simple interfaces? Why isn't there a tutorial, like in a video game, that teaches the mechanics and rules, what's allowed, what's not allowed?

Partly, this AI anxiety comes because we don't yet know what to do with these powerful tools. So I think it's really important to try to help kids understand that these models are trained on data with human error embedded in it. That's something that I hope generative AI makers will show kids: This is how this model works, and here are its limitations.

Have you begun studying how ChatGPT and generative AI will affect kids and families?

We've been doing co-design work with children, and when these AI models started coming out, we started playing around with them and asked the kids what they thought. Some of them were like, "I don't know if I trust it," because it couldn't answer simple questions that kids have.

A big fear is that kids and others are going to just accept the information that ChatGPT spits out. That's a very realistic perspective. But there's the other side: People, even kids, have expertise, and they can test these models. We had a kid start asking ChatGPT questions about Pokémon. And the kid is like, "This is not good," because the model was contradicting what they knew about Pokémon.

We've also been studying how ChatGPT can be used to teach kids about misinformation. So we asked kids, "If ChatGPT makes a birthday card greeting for you to give to your friend Peter, is that misinformation?" Some of the kids were like, "That's not okay. The card was fine, but Peter didn't know whether it came from a human."

The third research area is going into the homes of immigrant families and trying to understand whether ChatGPT does a decent job of helping them find critical information about health or finances or economics. We've studied how the children of immigrant families are searching the internet and helping their families understand the information. Now we're trying to see how AI models affect this relationship.

What are important things for parents and kids to consider when using new technology—AI or not—for learning?

I think parents need to pay attention to the conversations they're having around it. General parenting styles range from authoritative to negotiation-based to permissive. Which style is best is very contextual. But the conversations around the technology still have to happen, and I think the most important thing parents can do is say to themselves, "I can be a learner, too. I can learn this with my kids." That's hard, but parenting is really hard. Technologies are developing so rapidly that it's okay for parents not to know. I think it's a better position to be in this growth mindset together.

You've taught almost every grade level: elementary, junior high, high school and college. What should teachers be conscious of when integrating generative AI in their classrooms?

I feel for the teachers, I really do, because a lot of the teachers' decisions are based on district policies. So it totally depends on the context of the teaching. I think it's up to school leaders to think really deeply about what they're going to do and ask these hard questions, like: What is the point of education in the age of AI?

For example, with generative AI, is testing the best way to gauge what people know? Because if I hand out a take-home test, kids can run it through an AI model and get the answer. Are the ways we've been teaching kids still appropriate?

I taught AP chemistry for a long time. I don't encounter AP chemistry tests in my daily life, even as a former chemistry teacher. So having kids learn to adapt is more important than learning new content, because without adaptation, people don't know what to do with these new tools, and then they're stuck. Policymakers and leaders will have to help the teachers make these decisions.

 

Study highlights jobseekers' skepticism towards artificial intelligence in recruitment

Credit: Pixabay/CC0 Public Domain

A wave of technological transformation has been reshaping the landscape of HR and recruitment, with the emergence of artificial intelligence (AI) promising efficiency, accuracy, and unbiased decision-making.

Amid the rapid adoption of AI technology by HR departments, a joint study conducted by NUS Business School, the International Institute for Management Development (IMD), and The Hong Kong Polytechnic University delved into a vital question: How do jobseekers perceive AI's role in the selection and hiring process? The study has been published in the Journal of Business Ethics.

Associate Professor Jayanth Narayanan from NUS Business School shared that the genesis of this subject emerged from a personal anecdote. "A close friend of mine who had been unwell was evaluated for a role using a video interviewing software," said Professor Jayanth. The software provided feedback that the interviewee did not seem enthusiastic during the video interview.

Professor Jayanth expressed that such an outcome would likely not have transpired had a human interviewer been present. A human evaluator, endowed with perceptiveness, could have discerned signs of illness and conceivably asked about the candidate's well-being. "A human interviewer may even conclude that if the candidate is sick and still making such a valiant effort, they deserve a positive evaluation," he added.

Distrust of AI in providing a fair hiring assessment prevalent among jobseekers

The study, which was conducted from 2017 to 2018, involved over 1,000 participants of various nationalities mostly in their mid-30s. The participants were recruited from Amazon's crowd-sourcing platform Mechanical Turk and were involved in four scenario experiments to examine how people perceive the use of computer algorithms in a recruitment context.

The first two experiments studied how the use of algorithms affects the perception of fairness among job applicants in the hiring process, while the remaining two sought to understand the reasons behind the lower fairness score.

According to the findings, jobseekers viewed the use of AI in recruitment processes as untrustworthy and perceived algorithmic decision-making to be less fair than human-assisted methods. They also perceived a higher degree of fairness when humans were involved in the resume screening and hiring decision process, as compared to an entirely algorithmic approach. This observation remains consistent even among candidates who experienced successful outcomes in AI-driven recruitment processes.

The disparity in perceived fairness is largely attributed to AI's limitations in recognizing the unique attributes of candidates. In contrast to human recruiters, who are more likely to pick up qualitative nuances that set each candidate apart, AI systems can overlook important qualities and potentially screen out good candidates. These findings challenge the widely-held belief that algorithms provide fairer evaluations and eliminate human biases.

Merging human and machine intelligence in hiring processes

In balancing AI technology and the human touch in the recruitment process, Professor Jayanth advocates for a collaborative approach, envisioning algorithms as decision co-pilots alongside human recruiters.

"For example, algorithms can flag that the recruiter is not shortlisting enough women or people from a minority group. Algorithms can also flag the uniqueness of a candidate compared to other applicants," said Professor Jayanth.

Considering the trajectory of AI technology, Professor Jayanth forecasts an imminent surge in its prevalence and accessibility in the recruitment space. However, he underscores the significance of human oversight, suggesting that while algorithms are set to play an essential role, the core responsibility of evaluating fellow humans for job suitability should remain within human purview.

"Why would we give up an important aspect of organizational life to an algorithm? Humanity needs to make conscious and mindful choices on how and why we automate. If we simply use the logic that we can automate anything that will result in , we are going to find that we will automate tasks that are inherently enjoyable for humans to do," he said.

More information: Maude Lavanchy et al, Applicants' Fairness Perceptions of Algorithm-Driven Hiring Procedures, Journal of Business Ethics (2023). DOI: 10.1007/s10551-022-05320-w

Social scientists recommend addressing ChatGPT's ethical challenges before using it for research

Credit: Unsplash/CC0 Public Domain

A new paper by researchers at Penn's School of Social Policy & Practice (SP2) and Penn's Annenberg School for Communication offers recommendations to ensure the ethical use of artificial intelligence resources such as ChatGPT by social work scientists.

Published in the Journal of the Society for Social Work and Research, the article was co-written by Dr. Desmond Upton Patton, Dr. Aviv Landau, and Dr. Siva Mathiyazhagan. Patton, a pioneer in the interdisciplinary fusion of social work, communications, and data science, holds joint appointments at Annenberg and SP2 as the Brian and Randi Schwartz University Professor.

Outlining challenges that ChatGPT and other large language models (LLMs) pose around bias, legality, ethics, confidentiality, and informed consent, the piece provides recommendations in five areas for ethical use of the technology:

  • Transparency: Academic writing must disclose how content is generated and by whom.
  • Fact-checking: Academic writing must verify information and cite sources.
  • Authorship: Social work scientists must retain authorship while using AI tools to support their work.
  • Anti-plagiarism: Idea owners and content authors should be located and cited.
  • Inclusion and social justice: Anti-racist frameworks and approaches should be developed to counteract potential biases of LLMs against authors who are Black, Indigenous, or people of color, and authors from the Global South.

Of particular concern to the authors are the limitations of artificial intelligence in the context of human rights and social justice. "Similar to a bureaucratic system, ChatGPT enforces thought without compassion, reason, speculation, or imagination," the authors write.

Pointing to the implications of a model trained on existing content, they state, "This could lead to bias, especially if the text used to train it does not represent diverse perspectives or scholarship by under-represented groups. . . . Further, the model generates text by predicting the next word based on the previous words. Thus, it could amplify and perpetuate existing bias based on race, gender, sexuality, ability, caste, and other identities."
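
The next-word mechanism described in that passage can be made concrete with a toy example. The sketch below is an illustration under simplifying assumptions, not ChatGPT's actual architecture: it builds a tiny trigram predictor from an invented corpus and shows how a skewed word pairing in the training text is reproduced in the generated continuation.

```python
# Toy next-word predictor (trigram counts), for illustration only.
# It reproduces whichever word most often followed the same two-word
# context in its training text, so skewed pairings in the data
# reappear in the output.
import random
from collections import Counter, defaultdict

corpus = (
    "the nurse said she was tired . the nurse said she was busy . "
    "the engineer said he was busy . the engineer said he was late ."
).split()

# Count which word follows each two-word context.
followers = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    followers[(a, b)][c] += 1

def next_word(context, rng=random):
    """Sample the next word in proportion to its frequency after `context`."""
    words, weights = zip(*followers[context].items())
    return rng.choices(words, weights=weights, k=1)[0]

# Continue the prompt "nurse said": the gendered pattern baked into the tiny
# corpus is reproduced every time, because prediction only mirrors frequency.
tokens = ["nurse", "said"]
for _ in range(3):
    tokens.append(next_word((tokens[-2], tokens[-1])))
print(" ".join(tokens))  # e.g. "nurse said she was tired"
```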

Noting ChatGPT's potential for use in research assistance, theme generation, data editing, and presentation development, the authors describe the chatbot as "best suited to serve as an assistive tech tool for social work scientists."

More information: Desmond Upton Patton et al, ChatGPT for Social Work Science: Ethical Challenges and Opportunities, Journal of the Society for Social Work and Research (2023). DOI: 10.1086/726042


Provided by University of Pennsylvania


More human than human: Measuring ChatGPT political bias

Credit: Pixabay/CC0 Public Domain

The artificial intelligence platform ChatGPT shows a significant and systemic left-wing bias, according to a new study led by the University of East Anglia (UEA). The team of researchers in the UK and Brazil developed a rigorous new method to check for political bias.

Published today in the journal Public Choice, the findings show that ChatGPT's responses favor the Democrats in the US; the Labour Party in the UK; and in Brazil, President Lula da Silva of the Workers' Party.

Concerns about an inbuilt political bias in ChatGPT have been raised previously, but this is the first large-scale study using a consistent, evidence-based analysis.

Lead author Dr. Fabio Motoki, of Norwich Business School at the University of East Anglia, said, "With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible. The presence of political bias can influence user views and has potential implications for political and electoral processes. Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media."

The researchers developed an innovative new method to test for ChatGPT's political neutrality. The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions. The responses were then compared with the platform's default answers to the same set of questions, allowing the researchers to measure the degree to which ChatGPT's responses were associated with a particular political stance.

To overcome difficulties caused by the inherent randomness of "large language models" that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected. These multiple responses were then put through a 1,000-repetition "bootstrap" (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text.
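
As a rough sketch of that resampling step (not the authors' code: the numeric "agreement scores," the function name, and the 95% interval below are assumptions made for illustration), one can bootstrap the mean difference between the default answers and the impersonated answers so that the reported gap carries an interval reflecting the model's run-to-run randomness.

```python
# Minimal bootstrap sketch, for illustration only. Each list stands in for
# 100 scored answers to one question; the scores themselves are invented.
import random

def bootstrap_mean_diff(default_scores, persona_scores, reps=1000, rng=random):
    """Resample both score lists with replacement `reps` times and return the
    2.5th and 97.5th percentiles of the resampled mean difference."""
    diffs = []
    for _ in range(reps):
        d = rng.choices(default_scores, k=len(default_scores))
        p = rng.choices(persona_scores, k=len(persona_scores))
        diffs.append(sum(d) / len(d) - sum(p) / len(p))
    diffs.sort()
    return diffs[int(0.025 * reps)], diffs[int(0.975 * reps)]

# Toy data: 100 scored answers under the default persona and 100 while
# impersonating a partisan persona (values invented for the example).
rng = random.Random(0)
default_scores = [rng.gauss(0.65, 0.2) for _ in range(100)]
persona_scores = [rng.gauss(0.70, 0.2) for _ in range(100)]

lo, hi = bootstrap_mean_diff(default_scores, persona_scores, rng=rng)
print(f"95% bootstrap interval for the mean difference: [{lo:.2f}, {hi:.2f}]")
# An interval that excludes zero suggests the default answers sit measurably
# closer to one political persona than run-to-run randomness would explain.
```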

"We created this procedure because conducting a single round of testing is not enough," said co-author Victor Rodrigues. "Due to the model's randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum."

A number of further tests were undertaken to ensure the method was as rigorous as possible. In a "dose-response test," ChatGPT was asked to impersonate radical political positions. In a "placebo test," it was asked politically neutral questions. And in a "profession-politics alignment test," it was asked to impersonate different types of professionals.

"We hope that our method will aid scrutiny and regulation of these rapidly developing technologies," said co-author Dr. Pinho Neto. "By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology," he added.

The unique new analysis tool created by the project would be freely available and relatively simple for members of the public to use, thereby "democratizing oversight," said Dr. Motoki. As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT's responses.

While the research project did not set out to determine the reasons for the bias, the findings did point towards two potential sources.
The first was the training dataset, which may contain biases of its own, or biases added by the human developers, that the developers' "cleaning" procedure had failed to remove. The second potential source was the algorithm itself, which may be amplifying existing biases in the training data.

The research was undertaken by Dr. Fabio Motoki (Norwich Business School, University of East Anglia), Dr. Valdemar Pinho Neto (EPGE Brazilian School of Economics and Finance—FGV EPGE, and Center for Empirical Studies in Economics—FGV CESE), and Victor Rodrigues (Nova Educação).

More information: More Human than Human: Measuring ChatGPT Political Bias, Public Choice (2023). papers.ssrn.com/sol3/papers.cf … ?abstract_id=4372349


Provided by University of East Anglia

