Saturday, June 10, 2023

Which Jobs and Industries will Artificial Intelligence Replace First?

June 10, 2023
By Ilgar Nagiyev
MODERNDIPLOMACY.EU


You could be forgiven for feeling blindsided by the speed at which artificial intelligence has moved from a technology of the future to the here and now. Its rise has been so fast that it has outpaced even basic questions about its safety and governing philosophy, leading technology pioneers like Elon Musk and Steve Wozniak to call for a six-month pause in its development while those questions are considered.

As altruistic as this sounds, it's unlikely to happen; companies do not give up a competitive edge when they have a significant jump on their rivals. The seriousness with which the call has been taken, however, has poured napalm on the fire of speculation about what AI can do for us, and specifically which industries it will affect first or potentially replace entirely.

First, an important point: no industry or occupation will be devastated overnight. That would require immediate and total acceptance from millions of people across multiple, distinct industries, combined with a near-unprecedented wave of investment. Artificial intelligence, however, is an earthquake that has already set a tsunami of change in motion. The wave will inevitably surge outward, and those at sea level are going to be hit first.

So where will the tsunami land?

No one can answer that for sure, but certain industries are particularly vulnerable to AI encroachment. One of them, in a microcosm of the machine replacing its creator, is IT, computer science and software engineering. A large software project can involve dozens, if not hundreds, of human developers, each writing their own code. The overall project, meanwhile, is broken down into short, goal-focused phases known as sprints, and those sprints into individual tasks known as tickets. These are overseen by senior developers, and the resulting code is tested by quality assurance teams to ensure it works exactly as planned. In theory, AI could replace many of those involved, as the sketch below suggests.
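To make that structure concrete, here is a minimal sketch in Python of how such a project might be modelled; the class and field names are invented for illustration, not taken from any real project tracker:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """A single task within a sprint, e.g. 'add login endpoint'."""
    title: str
    assignee: str          # today a human developer; in theory, an AI agent
    passed_qa: bool = False

@dataclass
class Sprint:
    """A time-boxed goal, broken down into tickets."""
    goal: str
    tickets: list[Ticket] = field(default_factory=list)  # Python 3.9+

    def done(self) -> bool:
        # The sprint ships only when QA has signed off on every ticket.
        return all(t.passed_qa for t in self.tickets)

sprint = Sprint(goal="User authentication")
sprint.tickets.append(Ticket(title="Add login endpoint", assignee="dev-1"))
sprint.tickets.append(Ticket(title="Write password-reset flow", assignee="dev-2"))
print(sprint.done())  # False until QA approves both tickets
```

Every role in that loop, from the developer assigned a ticket to the QA reviewer approving it, is one an AI system could in principle occupy.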

Next, the success of ChatGPT has proved the viability of large language model AI. This family of artificial intelligence is set to replace a significant number of customer service staff, with chatbots already providing assistance and filtering calls. One argument against their use is the lack of empathy a good customer service representative possesses. An argument for is that they are always professional, because that is what they are programmed to be, and they don't suffer from staff retention issues. More than that, they will soon be substantially cheaper than a human workforce, and that is not something employers are likely to overlook.
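As a rough illustration of the call-filtering role these bots already play, here is a minimal rule-based sketch in Python; real deployments use large language models rather than keyword matching, and the intents and replies below are invented:

```python
# Minimal sketch of a call-filtering bot: answer routine queries,
# escalate everything else to a human agent.
ROUTINE_INTENTS = {
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "reset password": "You can reset your password at example.com/reset.",
}

def handle_query(query: str) -> str:
    q = query.lower()
    for intent, reply in ROUTINE_INTENTS.items():
        if intent in q:
            return reply  # handled without a human
    return "Transferring you to the next available agent."

print(handle_query("What are your opening hours?"))
print(handle_query("My order arrived damaged."))
```

The economic logic is in the first branch: every query resolved there is a call a human never has to take.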

Further inland, the next industry's replacement will not be imminent, but the cliff on which it sits will soon face erosion: transportation. Globally, there is a well-documented shortfall in trained drivers that coincides with massive investment in self-driving vehicles. Major players in this field include Tesla, Uber, Ford and Mercedes-Benz, but to date they have faced problems, often very public ones, such as the Tesla Model S that crashed in 2022 while in full self-driving mode. In 2022 alone there were some four hundred reported crashes involving vehicles with automated driving systems, each of which dented public perception of the technology.

Fusing self-driving technology with a controlling AI has the potential to make it far safer and to remove the need for human drivers entirely in areas like haulage and logistics. This will not happen quickly; the public first needs to accept and trust the technology. Likewise, significant investment in infrastructure will be required, raising costs in the short term. In the mid to long term, however, AI can offer spatial perception, anticipation of potential hazards and split-second decision-making at levels that far surpass a human's, and it never gets tired, sick or hungry. Added to the temptation of massively reduced wage costs, this is likely to prove irresistible.
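As a toy example of the split-second arithmetic involved, the sketch below estimates time-to-collision from sensor readings and decides whether to brake; the thresholds are invented, and production systems fuse many sensors with far richer models:

```python
def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_ms <= 0:          # gap is stable or growing
        return float("inf")
    return distance_m / closing_speed_ms

def should_brake(distance_m: float, closing_speed_ms: float,
                 reaction_budget_s: float = 1.5) -> bool:
    # Brake if impact would occur within the reaction budget.
    return time_to_collision(distance_m, closing_speed_ms) < reaction_budget_s

# Obstacle 30 m ahead, closing at 25 m/s (~90 km/h): 1.2 s to impact.
print(should_brake(30.0, 25.0))   # True: brake now
print(should_brake(120.0, 25.0))  # False: 4.8 s of headroom
```

A computer evaluates this kind of check thousands of times per second, every second of the journey, which is the core of the safety argument.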

And the list goes on.

AI in agriculture can provide predictive analytics in real time, maximising a farm's efficiency while minimising its environmental impact, potentially alongside day-to-day tasks like planting, harvesting, spraying and livestock monitoring. It will also require fewer human workers. The same applies to manufacturing, where AI is likely to continue the reduction of human involvement that robotics began.
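In miniature, "predictive analytics in real time" can be as simple as fitting a trend to sensor readings and forecasting when action will be needed. A hedged sketch in Python, with invented soil-moisture readings and thresholds:

```python
# Fit a linear trend to hourly soil-moisture readings and predict
# how many hours remain before irrigation is needed.
readings = [42.0, 41.2, 40.5, 39.9, 39.1]   # percent moisture, one per hour
IRRIGATE_BELOW = 35.0

n = len(readings)
xs = range(n)
mean_x, mean_y = sum(xs) / n, sum(readings) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
         / sum((x - mean_x) ** 2 for x in xs))

if slope >= 0:
    print("Moisture stable or rising; no irrigation forecast.")
else:
    hours_left = (IRRIGATE_BELOW - readings[-1]) / slope
    print(f"Irrigate in roughly {hours_left:.1f} hours.")
```

Scale the same idea across thousands of sensors, crops and weather feeds and the case for fewer human workers becomes clear.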

Another industry in AI's long-term path could well be healthcare. More than most, this would require significant buy-in from the public, but is it too much to imagine an AI carrying out many of the tasks within an overstretched, expensive and unwieldy medical system? Incrementally, we are likely to see AI replace highly skilled professionals in areas like medical imaging and data analysis, all the way through to the real-time monitoring of patients. It is no longer science fiction to imagine AI doctors updated instantly with new techniques and emerging science.

The list gets longer every day as the reckoning with AI's potential continues: journalism, graphic design, law, education and many more could soon find themselves in its path. Where it leads is likely to be equal parts challenging, threatening, fascinating and enduring.

Ilgar Nagiyev is an Azerbaijani entrepreneur, Chairman of the Board at Azer Maya, a leading producer of nutritional yeast in Azerbaijan, and Chairman of the Board of Baku City Residence, a real-estate company. He is an alumnus of both the London School of Economics and Political Science and the TRIUM Global Executive MBA.


The race to detect AI can be won


As regulation faces growing challenges, detection technology could provide a crucial edge for mitigating the potential risks of generative AI tools.


Synthetic audio technology, or "voice clones," poses a serious threat to the public | iStock

BY JAN NICOLA BEYER
JUNE 10, 2023 
Jan Nicola Beyer is the research coordinator of the Digital Democracy unit at Democracy Reporting International.

The debate over the risks of generative artificial intelligence (AI) is currently in full swing.

On the one hand, advocates of generative AI tools praise their potential to drive productivity gains not witnessed since the Industrial Revolution. On the other, there's a growing chorus raising concerns about the potential dangers these tools pose.

While there have been ample calls for regulating, or even stalling, the development of new AI technology, there's a whole other dimension that appears to be missing from the debate — detection.

Compared with regulation, investing in technologies that discern between human and machine-generated content — such as DetectGPT and GPTZero for text, and AI Image Detector for visuals — may be seen by some as a second-best solution. Given that regulation will face insurmountable challenges, however, detection offers a promising avenue for mitigating AI's potential risks.
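To give a flavour of how such detectors can work, one common heuristic, and not the exact method of any tool named above, is that language models assign conspicuously high probability (low perplexity) to machine-written text. A minimal sketch, assuming the open-source transformers and PyTorch libraries are installed:

```python
# pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprising' the text is to GPT-2; low values can hint at machine text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy per token
    return torch.exp(loss).item()

# Very low perplexity is one (fallible) signal of machine generation.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```

In practice this signal is noisy and can be evaded, which is exactly why dedicated detection research needs sustained funding rather than one-off heuristics.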

It’s undeniable that generative AI has the potential to enhance creativity and increase productivity. Yet, losing the ability to distinguish between natural and synthetic content could also empower nefarious actors. From simple forms of plagiarism in schools and universities to the breach of electronic security systems and the launch of professional disinformation campaigns, the dangers behind machines writing text, drawing pictures or making videos are manifold.

All these threats call for a response — not only a legal one but a technical one too. Yet, such technical solutions don’t receive the support they should.

Currently, funds allocated to new generative tools vastly outweigh investment in detection. Microsoft alone invested a whopping $10 billion in OpenAI, the company behind ChatGPT. To put that in perspective, the total European expenditure on AI is estimated at approximately $21 billion, and given that detection hasn’t featured strongly in the public debate, only a small fraction of this sum can be assumed to be directed toward this purpose.

But to redress this imbalance, we can't simply rely on industry to step up.

Private businesses are unlikely to match funds allocated for detection with their expenditure on generative AI, as profits from detecting generative output aren’t likely to be anywhere near as lucrative as those for developing new creative tools. And even in cases where lucrative investment opportunities for detection tools exist, specialized products will rarely reach the hands of the public.

Synthetic audio technology is a good example of this. Even though so-called voice clones pose a serious threat to the public — especially when used to impersonate politicians or public figures — private companies prioritize other concerns, such as detection mechanisms aimed at security systems in banks to prevent fraud. And developers of such tech have little interest in sharing their source code, as it would encourage attempts to bypass their security systems.

Meanwhile, lawmakers have so far emphasized the regulation of AI content over research funding for detection. The European Union, for example, has taken up the effort of regulation via the AI Act, a regulatory framework aimed at ensuring the responsible and ethical development and use of AI. Nevertheless, finding the right balance between containing high-risk technology and allowing for innovation is proving challenging.

Additionally, it remains to be seen whether effective regulation can even be achieved.

While ChatGPT may be subject to legal oversight because it was developed by OpenAI — an organization that can be held legally accountable — the same cannot be said for smaller projects creating large language models (LLMs), the algorithms that underpin tools like ChatGPT. Using Meta's LLaMA model, for example, Stanford University researchers were able to create their own LLM with performance similar to ChatGPT's for a cost of only $600. This case demonstrates that new LLMs can be built rather easily and cheaply on top of existing models, sidestepping the self-regulation practiced by larger organizations — an attractive option for criminals or disinformation actors. And in such instances, legal accountability may be all but impossible.
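The mechanics behind that $600 figure are essentially supervised fine-tuning: take an existing model and continue training it on a modest set of instruction-response pairs. A heavily simplified sketch using the transformers library, with the small open GPT-2 model and invented data standing in for LLaMA and its training set:

```python
# pip install torch transformers
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A real project would use thousands of pairs; two invented ones shown here.
pairs = [
    ("Explain photosynthesis in one sentence.",
     "Plants convert sunlight, water and CO2 into sugar and oxygen."),
    ("Translate 'good morning' into French.", "Bonjour."),
]

class InstructionDataset(Dataset):
    def __init__(self, pairs):
        self.examples = [
            tokenizer(f"### Instruction:\n{q}\n### Response:\n{a}",
                      truncation=True, max_length=128)
            for q, a in pairs
        ]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, i):
        return self.examples[i]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=InstructionDataset(pairs),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the same recipe, scaled up, is how Alpaca-style models are built
```

Nothing in that recipe requires a large organization, a data centre or a legal entity that regulators can reach.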

Robust detection mechanisms thus present a viable solution to gain an edge in the ever-evolving arms race against generative AI tools.


Already at the forefront of fighting disinformation and having pledged massive investments in AI, the EU is well placed to lead in providing this research funding. And the good news is that spending on detection doesn't even need to match the funding dedicated to developing generative AI tools. As a general rule, detection tools don't require large amounts of scraped data and don't carry the high training costs associated with recent LLMs.
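To illustrate why the training costs can be modest: a baseline text detector can be a small supervised classifier trained in seconds on an ordinary laptop. A sketch using scikit-learn, with a toy invented corpus where a real project would use labelled human and machine text:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus: 1 = machine-generated, 0 = human-written.
texts = [
    "As an AI language model, I can certainly help with that request.",
    "In conclusion, it is important to note that there are many factors.",
    "honestly no clue why the bus was 40 min late again, typical monday",
    "She laughed so hard the coffee went everywhere, ruined my notes.",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
detector.fit(texts, labels)

# Probability the new snippet is machine-generated (toy model, toy data).
print(detector.predict_proba(
    ["It is important to note that I can certainly assist."])[0][1])
```

Research-grade detectors are far more sophisticated, but the training bill stays orders of magnitude below that of the generative models they police.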

Nevertheless, as the models underlying generative AI advance, detection technology will need to keep pace. Detection mechanisms may also require the cooperation of domain experts. When it comes to synthetic audio, for example, machine learning engineers must collaborate with linguists and other researchers for such tools to be effective, and research funding should facilitate such collaborations.

COVID-19 showed the world that, when needed, states can drive the innovation that helps overcome crises. Governments have a role to play in ensuring the public is protected from potentially harmful AI content, and investing in the detection of generative AI output is one way to do this.
