Thursday, September 08, 2022

AI researchers improve method for removing gender bias in machines built to understand and respond to text or voice data

UNIVERSITY OF ALBERTA


EDMONTON, Alta. — Researchers have found a better way to reduce gender bias in machines built to understand and respond to text or voice data while preserving vital information about the meanings of words. The recent study could be a key step toward addressing the problem of human biases creeping into artificial intelligence.

While a computer itself is an unbiased machine, much of the data and programming that flow through computers is generated by humans. This can be a problem when conscious or unconscious human biases end up being reflected in the text samples AI models use to analyze and “understand” language.

Though other attempts to reduce or remove gender bias in texts have been successful to some degree, the problem with those approaches is that gender bias isn’t the only thing they remove.

“In many gender debiasing methods, when they reduce the bias in a word vector, they also reduce or eliminate important information about the word,” says Bei Jiang, associate professor in the Department of Mathematical and Statistical Sciences, who co-authored the paper along with graduate student Lei Ding.

For example, when considering a word like “nurse,” researchers want the system to remove any gender information associated with that term while still retaining information that links it with related words such as doctor, hospital and medicine. 
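The standard technique the article contrasts with, often called “hard debiasing” (Bolukbasi et al., 2016), removes a word vector’s component along a learned gender direction. The sketch below uses tiny toy vectors purely for illustration; the direction, dimensions, and values are assumptions, not the researchers’ actual model, and real embeddings have hundreds of dimensions.

```python
import math

def dot(u, v):
    """Dot product of two equal-length vectors (plain Python lists)."""
    return sum(a * b for a, b in zip(u, v))

def debias(vec, gender_dir):
    """Remove vec's component along the (normalized) gender direction.

    This is the projection step of hard debiasing: the returned vector
    is orthogonal to gender_dir but otherwise unchanged.
    """
    norm = math.sqrt(dot(gender_dir, gender_dir))
    g = [x / norm for x in gender_dir]      # unit gender direction
    proj = dot(vec, g)                      # gendered component
    return [v - proj * gi for v, gi in zip(vec, g)]

# Toy 3-d example: axis 0 plays the role of the gender direction
# (e.g. the "she" - "he" axis); "nurse" carries a gendered component
# there plus semantic components on the other axes.
gender_dir = [1.0, 0.0, 0.0]
nurse = [0.8, 0.5, 0.3]

print(debias(nurse, gender_dir))  # [0.0, 0.5, 0.3]
```

Note the trade-off the researchers highlight: the projection zeroes the gendered component, but aggressive variants of this step can also distort the remaining components that link “nurse” to words like doctor or hospital, which is the information their method aims to preserve.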

The new methodology is part of a larger project, entitled BIAS: Responsible AI for Gender and Ethnic Labour Market Equality, that is looking to solve real-world problems. 

For example, people reading the same job advertisement may respond differently to particular words in the description that often have a gendered association. A system using the methodology Ding and his collaborators created would be able to flag the words that may change a potential applicant’s perception of the job or decision to apply because of perceived gender bias, and suggest alternative words to reduce this bias.
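The job-ad flagging described above might look something like the sketch below. The word list, suggestions, and function are hypothetical illustrations of the idea, not the researchers’ actual lexicon or system, which would draw its flags from the debiased embeddings rather than a hand-written dictionary.

```python
# Hypothetical lexicon of terms with a gendered association in job ads,
# mapped to more neutral alternatives (illustrative values only).
GENDERED_TERMS = {
    "ninja": "expert",
    "rockstar": "skilled professional",
    "dominant": "leading",
    "competitive": "goal-oriented",
}

def flag_gendered_words(ad_text):
    """Return (flagged word, suggested alternative) pairs found in an ad."""
    flags = []
    for token in ad_text.lower().split():
        word = token.strip(".,!?;:")        # drop trailing punctuation
        if word in GENDERED_TERMS:
            flags.append((word, GENDERED_TERMS[word]))
    return flags

print(flag_gendered_words("Seeking a competitive coding ninja."))
# [('competitive', 'goal-oriented'), ('ninja', 'expert')]
```

A production system would also need to handle context, since a word like “competitive” reads differently in “competitive salary” than in “competitive personality.”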


To speak with Bei Jiang or Lei Ding about their study, please contact:

Michael Brown
U of A media strategist
mjbrown1@ualberta.ca
