Thursday, November 30, 2023

Industries more exposed to first AI wave saw job gains: report


New research is challenging fears that artificial intelligence technology will displace a large portion of the world’s workforce, with data from Europe finding occupations exposed to AI over the past 20 years actually saw an increase in employment.

Stefania Albanesi, one of the authors of a report published Tuesday by the European Central Bank, noted that while the data referenced only covers AI advances up until 2019, it demonstrates how the technology has interacted with the labour market so far.

“The study covers the 2000s and 2010s, where the kind of AI that was implemented was basically deep learning and machine learning models,” Albanesi, an economics professor at the University of Miami, told BNN Bloomberg in a television interview.

That form of AI is different from large language models such as ChatGPT deployed in the last year, she noted, but the data is still an informative look at the possible future of work.

BUCKING THE TREND

The report found that in the “deep learning boom of the 2010s, occupations potentially more exposed to AI-enabled technologies actually increased their employment share in Europe.”

The study of 16 European countries found that the jobs created were primarily for younger, high-skill workers – mainly college graduates.

Albanesi said these findings break from the pattern of the last major technological advance – the launch and diffusion of the internet – which caused a decline in routine administration and production jobs.

“Instead, for this wave, which was associated with the early phase of expansion in AI in the last 20 years, we saw a growth in employment.”

The report found that for occupations requiring “low and medium-skill” workers, AI exposure didn’t have a significant effect on employment numbers.

But for occupations requiring high-skill workers, it found “a positive and significant association.” AI exposure appeared to boost employment share by 3.1 per cent using one measure cited in the report, and 6.7 per cent using another measure.

TECHNOLOGY AND JOB FEARS

The researchers said that historically, technological advances have been accompanied by fears about potential job losses.

“This apprehension persists, even though history suggests that previous fears about labour becoming redundant were exaggerated,” the report said.

Albanesi said that until recently, AI technology has been limited to replacing routine and repetitive labour tasks.

However, she cautioned that new large language models could potentially replace non-routine jobs, which make up a larger portion of the workforce.

“That is sort of the risk,” she said.

“But that does not mean that in the aggregate, there will be a net decline in available jobs because new jobs may be generated in response to these abilities we are granted because we have access to this technology.”


AI won't pose immediate existential threat but 'safety brakes' needed: Microsoft

Microsoft’s president says he doesn't think artificial intelligence poses an immediate threat to humanity's existence, but governments and businesses still need to move faster to address the technology's risks by implementing what he calls "safety brakes."

"We don't see any risk in the coming years, over the next decade, that somehow AI is going to pose some kind of existential threat to humanity, but ... let's solve this problem before the problem arrives," Brad Smith said in an interview with The Canadian Press.

Smith — a stalwart of Microsoft who first joined the company in 1993 and now doubles as its vice-chair — said it's important to get the problems posed by the technology under control so the globe doesn't have to be "constantly worried and talking about it."

He feels the way to address potential problems is through safety brakes, which could act like the emergency mechanisms built into elevators, school buses and high-speed trains.

They should be built into high-risk AI systems that control critical infrastructure such as electrical grids, water systems and traffic.

"Let's learn from art," Smith said.

"Every movie in which technology imposes an existential threat ends the same way — human beings turn the technology off. (So) have an on-off switch, have a safety brake, ensure that it remains under human control. Let's embrace that and do it now."

The remarks from Smith come as a race to use and innovate with AI has broken out in the tech sector and beyond following the release of ChatGPT, an AI chatbot designed to generate humanlike responses to text prompts.

Microsoft has invested billions into ChatGPT’s creator, San Francisco-based OpenAI, and also has its own AI-based technology, Copilot, that helps users create drafts of content, suggest different ways to word text they've written and helps create PowerPoint presentations from Word documents.

But many have deep concerns about the pace of AI advancement. For example, Geoffrey Hinton, a British-Canadian deep learning pioneer often referred to as the “godfather of AI,” has said he feels the technology could lead to bias and discrimination, joblessness, echo chambers, fake news, battle robots and other risks.

Several governments, including Canada's, have begun devising guardrails around AI.

In a 48-page report Microsoft released Wednesday, Smith said his company is supportive of Canada's push toward regulating AI.

Those efforts include a voluntary code of conduct released in September whose signatories — including Cohere, OpenText Corp., BlackBerry Ltd. and Telus Corp. — promise they will assess and mitigate the risks of their AI-based systems, monitor them for incidents and act on issues they develop.

Though the code has detractors such as Shopify Inc. founder Tobi Lütke, who sees it as an example of the country using too many “referees” when it needs more “builders,” Smith said in the report that by shaping a code Canada has “showed early leadership” and is helping the globe work toward a common set of shared principles.

The voluntary code is expected to be followed by Canada’s forthcoming Artificial Intelligence and Data Act, which would create new criminal law provisions to prohibit “reckless and malicious” uses of AI that cause serious harm to Canadians. 

The act, known as Bill C-27, has passed its first and second reading but is still being considered at committee. Ottawa has said it will come into force no sooner than 2025.

Asked why he thinks governments need to move faster on AI, Smith said the globe has faced an "extraordinary year" since ChatGPT's release.

"When we say move faster, it's frankly not meant as a criticism," he said.

"It's meant as a recognition of the current reality where innovation has taken off at a faster rate than most people expected."

But he sees Canada as one of the countries most prepared to handle the pace of AI because universities have long emphasized the technology and cities such as Montreal, Toronto and Vancouver have been hotbeds for AI innovation.

"If there is a government that I think has a tradition on which it can build to adopt something like this, I think it's Canada. I hope it'll be the first," Smith said.

"It won't be the last if it's the first."

But as Canada's AI act faces "thoughtful deliberation," Smith thinks Canada should consider how it can adopt additional safeguards in the meantime.

For example, during the procurement process for high-risk AI systems, he thinks partners seeking contracts could be compelled to use third-party audits to certify that they comply with relevant international AI standards.

In the report, Smith also threw his support behind an approach to AI that will be “developed and used across borders” and “ensures that an AI system certified as safe in one jurisdiction can also qualify as safe in another.”

He compared this approach to the International Civil Aviation Organization, which uses uniform standards to ensure an airplane does not need to be refitted midflight from Brussels to New York to meet varying requirements each country may have.

An international code would help AI developers attest to the safety of their systems and boost compliance globally because they would be able to use standards that are internationally agreed upon.

“The model of a voluntary code provides an opportunity for Canada, the European Union, the United States, the other members of the G7 as well as India, Brazil, and Indonesia, to move forward together on a set of shared values and principles,” he said in the report.

“If we can work with others on a voluntary basis, then we will all move faster and with greater care and focus. That’s not just good news for the technology world, but for the whole world.”

This report by The Canadian Press was first published Nov. 29, 2023.
