Monday, November 20, 2023

CYBER SECURITY
Government intervention efforts leave too much time for exploitation


By Dr. Tim Sandle
DIGITAL JOURNAL
November 19, 2023

Image: © AFP

How will cybersecurity and business technology develop across the next twelve months? Kev Breen, Director of Cyber Threat Research at Immersive Labs, sees both positive points and some important challenges, as he explains to Digital Journal.

FUD around GenAI will die down

Breen thinks that the buzz technologies of 2023 will simmer down through 2024: “GenAI hit the technology scene in a huge way this past year, and is already being heavily embraced — or companies are racing to get involved so they don’t fall behind. But amid all its popularity, we’re simultaneously seeing a significant amount of FUD – fear, uncertainty and doubt – and misunderstanding.”

Breen adds a note of caution: “People still don’t fully understand the risks and vision of AI, which lends itself to paranoia and unfounded fears of massive cybersecurity attacks by AI. In the year ahead we’ll hopefully see the hype around AI die down and the technology become more of the norm, so that we can focus on the many benefits of using these tools to do work more efficiently and effectively.”

He adds, however, there are some pioneers in this field: “A handful of organizations are dedicating ample time and resources to the actual use cases of this technology and we can expect more businesses to follow suit.”

Too much time for exploitation

Are governments doing enough? Breen thinks that despite U.S. legislation, challenges remain: “Despite government intervention to try and strengthen transparency and guidance around cybersecurity practices, many standard implementations still haven’t kept pace. For example, FedRAMP guidelines say organizations have 30 days to remediate high-risk threats — yet attackers need just one day to discover a vulnerability and take advantage of it, wreaking havoc on systems and causing costly damage to organizations.”

As a consequence, Breen finds: “Cybercriminals will likely continue to have first-mover advantage, so it is security teams’ responsibility to assume compromise and remain cyber resilient, as it is unlikely that guidelines such as FedRAMP will be updated to meet the standards of today’s threat landscape.”

Continued development of AI policies

AI still stands to be the essential business technology going forward: “We already began to see this towards the end of 2023, but in 2024, we can expect governments and AI service providers to continue to implement policies regulating the development of AI. The key differentiator will be whether these entities have moved beyond the shock and awe of AI to focus on the benefits. Risk assessment will continue to be a part of the equation, as it should with any advancement in technology, but prioritizing innovation in these policies rather than fear will set countries apart. In 2023, we focused on the potential risks of AI. In 2024, it will be essential to focus on the potential opportunities.”

Ransomware isn’t going anywhere, so be prepared

Cybersecurity vulnerabilities remain an important threat, especially ransomware. Breen notes: “One can hope that organizations have learned from the major data breaches we’ve seen over the last year, but unfortunately we continue to see a lot of organizations who are simply not ready to handle the impact of a ransomware attack.”

Breen sees firms tripping up on the same issues: “Organizations still fall victim to the tried and true tactics that cybercriminals use to gain access to their most sensitive information and, despite government advisories saying otherwise, they continue to pay the ransom — which is why this attack style is still popular. We should expect to see ransomware groups leveraging new techniques in Endpoint Detection & Response (EDR) evasion, quickly weaponizing zero days as well as newly patched vulnerabilities, making it easy for them to bypass common defence strategies. As a result, security teams can’t rely on an old security playbook. Companies should not worry about how they can detect everything; instead, they should assume that at some point things will go badly and have plans in place to respond as best they can.”

AI risks will largely stem from developers and application security

Breen’s final prediction for the next 12 months places an obstacle in the path of AI’s rise: “When talking about the risks of AI, many think about threat actors using it in nefarious ways. But in actuality, in 2024 we should be most concerned about how our internal teams are using AI — specifically those in application security and software development. While it can be a powerful tool for certain teams, like offensive and defensive teams and SOC analysts, to enhance and parse through information, without proper parameters and rules in place regarding AI usage by organizations, it can potentially lead to unexpected risks for CISOs and business executives, leaving holes in their cyber resilience that open the door to exploitation.”