Saturday, June 24, 2023

How will AI change the world of scamming? | The Crypto Mile

Brian McGleenon
Fri, 23 June 2023 

Cybercriminals will leverage artificial intelligence to enhance the malicious bots used in hacks to create "sophisticated actors that can effectively impersonate family members to fraudulently extract value," a leading VC founder has claimed.

In the latest episode of The Crypto Mile, Yahoo Finance UK chatted with Jamie Burke, founder of Outlier Ventures, the world's leading Web3 accelerator by volume of investments.

Burke discussed the evolution and impact of artificial intelligence (AI) and how it could be a particularly effective tool for cybercriminals.

"If we just look at the statistics of it, in a hack you need to catch out just one person in a hundred thousand, this requires lots of attempts, so malicious actors are going to be levelling up their level of sophistication of their bots into more intelligent actors, using artificial intelligence," Burke warned.

He highlighted the growing concern about rogue AI bots being used for malicious purposes, altering the landscape of the internet, or, as he referred to it, "the Agent-verse".

“The majority of traffic on the web are bots, and a growing proportion are malicious bots. Hackers generally aren't manually doing things, the art of the hack is automated as much as possible,” Burke said.

Read more: AI film apps could see 'blockbusters created in bedrooms by end of the year', claims web3 adviser

As these malicious bots level up their sophistication, the line between human and AI might become almost indistinguishable.

An AI-powered bot could mimic a real person to such an extent that it could participate in a video call without arousing suspicion.

The implications of such technology are far-reaching. It could open up new avenues for scams and fraud, with cybercriminals exploiting the capabilities of AI to trick unsuspecting individuals or corporations into sharing sensitive information or transferring funds.

Burke said: "Instead of receiving an email saying 'can you transfer some money', a person could get a zoom call booked in their diary, from a digital replication that looks like a friend, sounds like them, and says the same things that they would say, and it tricks the recipient by saying that they're stuck for money and can they please get some wired over."

In this scenario, proof of personhood systems would become critical to verify the real identities of individuals in digital interactions.
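One common building block for this kind of identity verification is a challenge-response check, where a contact proves knowledge of a secret agreed out of band before any money changes hands. The sketch below is purely illustrative and hypothetical — it is not a description of any specific proof-of-personhood system Burke mentions — and assumes a simple pre-shared secret rather than a full public-key identity scheme:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a pre-shared-secret challenge-response check,
# standing in for a real proof-of-personhood system.

def issue_challenge() -> bytes:
    """Generate a fresh random nonce the remote party must answer."""
    return secrets.token_bytes(32)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """The remote party proves knowledge of the shared secret."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Check the response in constant time to avoid timing leaks."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Usage: both parties agreed on `secret` in person beforehand.
secret = b"agreed-in-person-beforehand"
challenge = issue_challenge()
print(verify(secret, challenge, respond(secret, challenge)))          # genuine contact
print(verify(secret, challenge, respond(b"impostor-guess", challenge)))  # deepfake caller
```

A convincing voice or video clone can imitate appearance and speech, but it cannot answer a fresh challenge without the secret, which is why challenge-response schemes underpin many authentication protocols.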

Burke said this could trigger a virtual arms race with different AI platforms – commercial, malicious, and governmental – battling for influence over internet users.

Read more: This AI tool ‘threatens human creativity’ and the art world is worried

"In this AI war you're going to have platforms that will have their own AI, and will largely be there to help serve you in return for something, but you will also have malicious AI trying to exploit gaps in your interactions with friends and colleagues to try and extract value," Burke said.

He said it could become vital to ensure that people have a "sovereign agent" that serves their interests and helps them navigate an increasingly complex online environment.

These agents would act as a person's representative in virtual environments, defending them against potential threats and securing their presence in the digital world.

Burke said AI could become an autonomous actor influencing decisions and actions, and that ensuring our security and integrity in the "Agent-verse" will become an increasingly pressing challenge.
