AI and jobs market: Is it a time for hope or fear?
By Dr. Tim Sandle
February 13, 2024
Will AI or robots take over? Image © Tim Sandle
Will artificial intelligence lead to a significant loss of employment? This is a concern for many politicians, economists and workers. Much of the focus on AI and its impact on the labour market is based on a fear-driven narrative that “thinking machines” will eventually replace many of the routine tasks performed by humans.
Under the worst-case scenarios, AI is poised to profoundly change the global economy, with advanced economies at greater risk of disruption.
One country with a strong technology sector is Canada. The extent of AI’s impact on the Canadian labour force has been assessed by HCLTech Canada. The assessment warns that companies need to focus less on downsizing and more on upskilling their workforce if they are to succeed in the new era of AI.
HCLTech is a global IT brand; in Canada it employs more than 2,600 Canadians at global delivery centers in Mississauga, Edmonton, Vancouver and Moncton.
The HCLTech assessment is that if companies want to succeed, they need to move past fear and train staff to leverage rapidly evolving AI automation. Furthermore, the assessment indicates that without that basic level of competence, GenAI will not provide the sort of lasting performance boost the technology is capable of delivering.
This approach is in keeping with the findings of the World Economic Forum (WEF) Future of Jobs Report. These indicate that many businesses are becoming more sceptical about the potential for artificial intelligence to fully automate work tasks.
Future expectations for automation are also being revised down, as businesses cross the human-machine frontier more slowly than previously anticipated. In alignment with this, respondents to a WEF 2023 survey forecast that an additional 9 percent of operational tasks will be automated in the next five years – a reduction of five percentage points compared to expectations in 2020.
While AI may not lead to a loss of jobs, research from LinkedIn suggests the rise of AI across the workforce is set to significantly transform many roles, including replacing some current roles with new ones. Hence, more than half of all jobs will be transformed, and the skills required to do them are expected to change by up to 65 percent by 2030.
“While there were fears that this tool could replace a large number of workers and lead to efficiencies, business leaders are now grasping that automation will expand much slower than expected and therefore we’ll see a smaller impact on the labour market,” HCLTech Canada country leader Dave Chopra indicates in a statement.
Chopra adds that these changes should drive companies to view jobs as collections of skills and tasks, not just titles, and to anticipate how AI advancements will impact various tasks. In addition, the skills needed will constantly change.
“Finding an employee that has the perfect set of skills no longer makes sense. Those skills may work today, but as AI evolves, that employee will need a set of new skills. Training and upskilling are crucial if a company is to remain sustainable and competitive,” Chopra adds.
There is no proof that AI can be controlled, according to an extensive survey
There is no current evidence that AI can be controlled safely, according to an extensive review, and without proof that AI can be controlled, it should not be developed, a researcher warns.
Despite the recognition that the problem of AI control may be one of the most important problems facing humanity, it remains poorly understood, poorly defined, and poorly researched, Dr Roman V. Yampolskiy explains.
In his upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable, AI Safety expert Dr Yampolskiy looks at the ways that AI has the potential to dramatically reshape society, not always to our advantage.
He explains: “We are facing an almost guaranteed event with potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”
Uncontrollable superintelligence
Dr Yampolskiy has carried out an extensive review of AI scientific literature and states he has found no proof that AI can be safely controlled – and even if there are some partial controls, they would not be enough.
He explains: “Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.
“This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, show we should be supporting a significant AI safety effort.”
He argues our ability to produce intelligent software far outstrips our ability to control or even verify it. After a comprehensive literature review, he suggests advanced intelligent systems can never be fully controllable and so will always present a certain level of risk regardless of the benefit they provide. He believes it should be the goal of the AI community to minimize such risk while maximizing potential benefit.
What are the obstacles?
AI (and superintelligence) differs from other programs in its ability to learn new behaviors, adjust its performance and act semi-autonomously in novel situations.
One issue with making AI ‘safe’ is that the possible decisions and failures of a superintelligent being, as it becomes more capable, are infinite, so there are an infinite number of safety issues. Simply predicting those issues may not be possible, and mitigating against them with security patches may not be enough.
At the same time, Yampolskiy explains, AI cannot explain what it has decided, and/or we cannot understand the explanation given, as humans are not smart enough to understand the concepts implemented. If we do not understand AI’s decisions and have only a ‘black box’, we cannot understand the problem or reduce the likelihood of future accidents.
For example, AI systems are already being tasked with making decisions in healthcare, investing, employment, banking and security, to name a few. Such systems should be able to explain how they arrived at their decisions, particularly to show that they are bias-free.
Yampolskiy explains: “If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.”
Controlling the uncontrollable
As the capability of AI increases, its autonomy also increases but our control over it decreases, Yampolskiy explains, and increased autonomy is synonymous with decreased safety.
For example, for a superintelligence to avoid acquiring inaccurate knowledge and to remove all bias from its programmers, it could ignore all such knowledge and rediscover or prove everything from scratch, but that would also remove any pro-human bias.
“Less intelligent agents (people) can’t permanently control more intelligent agents (ASIs). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn’t exist. Superintelligence is not rebelling, it is uncontrollable to begin with,” he explains.
“Humanity is facing a choice, do we become like babies, taken care of but not in control or do we reject having a helpful guardian but remain in charge and free.”
He suggests that an equilibrium point could be found at which we sacrifice some capability in return for some control, at the cost of providing the system with a certain degree of autonomy.
Aligning human values
One control suggestion is to design a machine which precisely follows human orders, but Yampolskiy points out the potential for conflicting orders, misinterpretation or malicious use.
He explains: “Humans in control can result in contradictory or explicitly malevolent orders, while AI in control means that humans are not.”
If AI acted more as an advisor it could bypass issues with the misinterpretation of direct orders and the potential for malevolent orders, but the author argues that for AI to be a useful advisor it must have its own superior values.
“Most AI safety researchers are looking for a way to align future superintelligence to values of humanity. Value-aligned AI will be biased by definition, pro-human bias, good or bad is still a bias. The paradox of value-aligned AI is that a person explicitly ordering an AI system to do something may get a “no” while the system tries to do what the person actually wants. Humanity is either protected or respected, but not both,” he explains.
Minimizing risk
To minimize the risk of AI, he says it needs to be modifiable with ‘undo’ options, limitable, transparent and easy to understand in human language.
He suggests all AI should be categorised as controllable or uncontrollable, and that nothing should be taken off the table: limited moratoriums, and even partial bans on certain types of AI technology, should be considered.
Instead of being discouraged, he says: “Rather it is a reason, for more people, to dig deeper and to increase effort, and funding for AI Safety and Security research. We may not ever get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing. We need to use this opportunity wisely.”