By Paul Wallis
Published March 30, 2023
ChatGPT can deliver an essay, computer code... or legal text, within seconds. - Copyright AFP Abdul MAJEED
A lot of information is being generated by comparisons between AI bots. A hierarchy of sorts is emerging, but performance is highly variable. Selective use of subjects for comparison is another issue. Also, remember that these are first-generation “large language” AIs at work.
The New York Times did a pretty good job of comparing ChatGPT and Bard as executive assistants. ChatGPT did well, Bard didn’t really cut it on any level. The New York Times article is well worth reading because it also defines the parameters for comparison.
The natural inference from this comparison is that ChatGPT, already superseded by a newer iteration, has a much stronger training base. You’d think so from the outcome.
What’s important about this inference is that the bar has been raised so high, so fast. This is like the pre-Windows 95 era, and you can expect the next generation of this tech to arrive much faster.
The current intrusions into the consumer space are pretty tentative. If you use Bing, you’ll also have noticed that there’s now an AI interface on the search engine. That was a very quick response to the AI breakout into the mainstream, and an attempt to take market share from Google. The obvious point here is that the market is already driving the development.
That’s not necessarily good news for the immediate future of AI. The market model could be a sort of AI domestic servant, a “for the housewives” AI, rather than a high-bandwidth, do-everything AI.
Tech tends to do that. Most people want something that does all the basics and can live without the more advanced tech because they don’t really need it. So a cut-down version of AI is likely to take market share from high-end AI.
What’s bad about that is that it reduces the demand for advanced development. Scientific AI, which gets very little attention, is a different, ultra-functional type of AI that does the heavy lifting in the sciences. This “species” of AI has evolved to perform specialist tasks, and it’s doing very well.
If mainstream AI turns into a drudge-job worker, you could wind up with a dumbed-down, purely consumer AI that in 2050 is not much more advanced than it is now. It will be able to do all the basics. The problem is that this AI will also have to catch up with current comms, tech innovations, new platforms, etc.
To reboot Moore’s Law – “The number of AIs will increase as the AIs evolve more capability.” That’s likely to mean a pretty high evolution rate and turnover in AI types and models. Which leads to an unavoidable question: how long can your AI remain viable?
Add to this technotopia the usual bells and whistles attached to all types of tech. “Our AI can find your soul mate, do your washing, fix your tax return and housetrain the dog”.
Yeah, sure it will. A lot of superfluous and probably expensive crud is likely to come along for the ride. As with digital civilization in general, AI could easily be contaminated with whatever the equivalent of useless apps for AI will be.
Like the useless apps of the past, this will come out like low-quality dye in the wash. What’s likely to be far more important is the public image of AI this crud will create, which, like those apps, will be largely fictional.
Points being:
AI has proven its capacity at this current level, and no more.
AI will evolve rapidly, finding new roles.
That’s it. That’s the sum total of predictable information. The rest is paid hype and hysteria. You’re looking at an almost blank slate, colored in by a couple of this year’s “enlightened” chatbots.
This almost total lack of hard information is sparking terror:
AI could replace people. So could other people.
AI could run businesses. That’d probably be an improvement in many cases.
AI could remove those lousy low-paid jobs nobody wants. So what?
What’s really different is that AI is a truly open-ended, real-time, multitasking class of tech. That’s what’s actually new.
Fear of AI is useless. People, including Elon Musk, are now actually calling for a halt to the training of AI. That’s not going to happen, and nobody can make it happen. Google and OpenAI are definitely not going to let their competitors catch up. China is definitely not going to stop development.
The “answer” to AI is critical thinking. The world’s not good at that, and that’s likely to be the real problem.
The risk and reward of ChatGPT in cybersecurity
By Dr. Tim Sandle
March 31, 2023
ChatGPT appeared in November and immediately generated a buzz as it wrote texts including poems - Copyright AFP/File Lionel BONAVENTURE
There has been considerable hype and fear around ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI. This extends to articles about academics and teachers worrying that the platform will make cheating easier than ever. On the other side of the coin, there are articles evangelising all of ChatGPT’s potential applications.
Alternatively, there are some more esoteric examples of people using the tool. One user, for example, got it to write an instruction guide for removing peanut butter sandwiches from a VCR in the style of the King James Bible. Another asked it to write a song in the style of Nick Cave, although the singer was less than enthused about the results.
Amidst all that hype and discussion, however, not nearly enough attention has been paid to the risks and rewards that AI tools like ChatGPT present in the cybersecurity arena, according to JP Perez-Etchegoyen, CTO of Onapsis, who explains his view to Digital Journal.
Understanding ChatGPT
Perez-Etchegoyen says: “In order to get a clearer idea of what those risks and rewards look like, it’s important to get a better understanding of what ChatGPT is and what it’s capable of.”
Perez-Etchegoyen’s clear explanation is: “ChatGPT (now in its latest version, ChatGPT-4, released on March 14th, 2023) is part of a larger family of AI tools developed by the US-based company OpenAI. While it’s officially called a chatbot, that doesn’t quite cover its versatility. Trained using both supervised and reinforcement learning techniques, it can do far more than most chatbots.”
Furthermore: “As part of its responses, it can generate content based on all the information it was trained on. That information includes general knowledge as well as programming languages and code. As a result, it can, for instance, simulate an entire chat room; play games like tic-tac-toe; and simulate an ATM.”
More importantly, for businesses and other large organisations, Perez-Etchegoyen states: “It can help improve businesses’ customer service through more personalised, accurate messaging. It can even write and debug computer programs. Those and other features mean that it could be both a cybersecurity ally and a threat.”
Education, filtering, and bolstering defences
Looking at a key sector – learning – Perez-Etchegoyen reveals: “On the positive front, there’s a lot to be said for ChatGPT. One of the most valuable roles it could play is also one of the most simple: spotting phishing. Organisations could entrench a habit in their employees whereby they use ChatGPT to determine if any content they’re not sure about is phishing or if it was generated with malicious intent.”
Outlining the importance, Perez-Etchegoyen states: “For all the technological advances made in recent years, social engineering attacks like phishing remain one of the most effective forms of cybercrime. In fact, research shows that, of the cyberattacks successfully identified in the UK in 2022, 83 percent involved some form of phishing.”
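To make that habit concrete, here is a minimal sketch of what such a check might look like in code, assuming the OpenAI Python client (v1 or later) and an API key in the environment. The classify_suspect_email helper, the model choice, and the prompt wording are illustrative assumptions, not a recipe from Onapsis or OpenAI:

    # Illustrative sketch only: route a suspicious message to a chat model
    # for a phishing opinion. Assumes the OpenAI Python client (v1+) with an
    # OPENAI_API_KEY environment variable; helper name and prompt are invented.
    from openai import OpenAI

    client = OpenAI()

    def classify_suspect_email(message_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",  # any available chat model would do
            temperature=0,  # keep triage output consistent across runs
            messages=[
                {"role": "system",
                 "content": "You are a security assistant. Say whether the "
                            "message looks like phishing, then give one "
                            "sentence of reasoning."},
                {"role": "user", "content": message_text},
            ],
        )
        return response.choices[0].message.content

    print(classify_suspect_email(
        "Your account is locked. Verify your password at http://login-example.top"))

The model’s verdict here is advisory rather than authoritative; the value is in lowering the barrier to asking, not in replacing existing reporting channels.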
In addition: “There are numerous other ways that ChatGPT can be used to bolster cybersecurity efforts. It could, for example, provide a degree of assistance to more junior security workers, whether that’s in communicating any issues they might have or helping them better understand the context of what they’re meant to be working on at any given point. It could also help under-resourced teams curate the latest threats and identify internal vulnerabilities.”
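The same API pattern could plausibly back that kind of junior-analyst support. As another hedged sketch, with the explain_log_line helper and its prompt invented purely for illustration, a team could wrap the model into a small triage aid:

    # Illustrative sketch only: same client setup as the phishing example.
    from openai import OpenAI

    client = OpenAI()

    def explain_log_line(log_line: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Explain in plain English what this security log "
                            "line indicates, flag anything suspicious, and "
                            "suggest one next investigative step."},
                {"role": "user", "content": log_line},
            ],
        )
        return response.choices[0].message.content

    print(explain_log_line(
        "winword.exe spawned powershell.exe -enc SQBFAFgA... at 03:12"))

As with the phishing check, any such output would need review by a senior analyst before it drives action.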
The bad guys are using it too
There is a dark side to this AI advancement. Perez-Etchegoyen observes: “Even as cybersecurity professionals explore ways of using ChatGPT to their advantage, cybercriminals are too. They might, for example, make use of its ability to generate malicious code. Alternatively, they might use it to generate content that appears to be human-written, which can be used to trick users into clicking on malicious links, with dangerous consequences.”
The unsavoury practices continue in other areas. Here Perez-Etchegoyen adds: “Some are even using ChatGPT to convincingly mimic legitimate AI assistants on corporate websites, opening up a new avenue in the social engineering battlefront. Remember, the success of cybercriminals largely depends on being able to target as many vulnerabilities as possible, as frequently and quickly as possible. AI tools like ChatGPT allow them to do that by essentially acting as a supercharged assistant that can help create all the assets needed for malicious campaigns.”
Use the tools available
Perez-Etchegoyen draws this into a recommendation for businesses: “It should be clear then that, if cybercriminals are using ChatGPT and other AI tools to enhance their attacks, your security team should also be using them to bolster your cybersecurity efforts. Fortunately, you don’t have to do it alone.”
Perez-Etchegoyen further advises: “The right security provider won’t just engage in constant research around how cybercriminals are using the latest technologies to enhance their attacks but also how those technologies can be used to improve threat detection, prevention, and defence. And with the damage that a cybersecurity attack can do to your critical infrastructure, it’s something they should be proactively telling you about too.”
In a follow-up article, Perez-Etchegoyen provides his analysis of ChatGPT-4.