How Generative AI can be both a security friend and foe to businesses in Singapore

Generative AI can be a force multiplier to help you stay one step ahead of adversaries in the cyber arms race.

Generative AI has been dominating the headlines recently, pushing past boundaries and surpassing expectations with its ability to summarise content, generate code, and simulate human conversation, among many other use cases. But while generative AI presents opportunities for productivity gains and innovation, especially for Southeast Asian tech companies exploring AI integration, we should not discount its potential to improve attacker productivity and lower barriers to entry for adversaries. So how is generative AI being used for adversarial purposes?

Let’s start with social engineering-based attacks. In the past, we could look to grammar and spelling errors as possible indicators of spear-phishing emails. Today, the signs aren’t as clear-cut, as generative AI elevates social engineering threats to a new level. Adversaries can now replicate the nuances of language and communication with alarming precision to improve their phishing attempts.

Technical proficiency is no longer an absolute requirement for launching malicious attacks either, with generative AI making it easier for even novice adversaries to generate malicious code or automate ransom negotiations.

A further concern for businesses in Singapore and Southeast Asia is the potential weaponisation of vulnerabilities from what is commonly referred to as ‘Patch Tuesday’, the day when technology companies like Microsoft release patches for vulnerabilities in their products.

The ability to understand and exploit a vulnerability is a distinct skill set. But with the help of generative AI, a patch can be downloaded and disassembled faster, and the specific vulnerability it fixes identified in a shorter timeframe. An exploit can then be quickly created and disseminated. That is what is on the horizon for adversarial AI and ‘Patch Tuesday’: significantly accelerated exploit development, and a corresponding increase in the strain on organisations trying to patch and defend before attacks land.
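To make the mechanics concrete, here is a minimal, deliberately abstract sketch of the patch-diffing step described above: comparing the functions of a pre-patch and post-patch binary to flag what changed. The extract_functions and changed_functions helpers are hypothetical; the first stands in for a disassembler’s output, which is exactly the material generative AI can help an adversary interpret faster. Nothing here locates or exploits a bug.

```python
# A deliberately abstract sketch of patch diffing: compare the functions of a
# pre-patch and post-patch binary and flag what changed. extract_functions()
# is a placeholder for disassembler output; this sketch only illustrates why
# the search space shrinks once a patch ships.

def extract_functions(binary_path: str) -> dict[str, bytes]:
    """Placeholder: map function names to raw bytes for one binary version.
    In practice this comes from disassembler tooling, not this sketch."""
    raise NotImplementedError("supply disassembler output here")

def changed_functions(old: dict[str, bytes], new: dict[str, bytes]) -> list[str]:
    """Names of functions that were added or modified by the patch."""
    return [name for name, body in new.items() if old.get(name) != body]

# The usually short list of changed functions is where the fixed
# vulnerability lives, which is what narrows an adversary's search.
```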

An eye-for-an-eye approach

A key area where generative AI helps is in boosting skills and improving productivity in cybersecurity teams for businesses in Singapore and, more broadly, Southeast Asia. Generative AI can democratise security for security operations centres (SOCs), empowering even novice security analysts to operate as advanced analysts with the help of simple queries. Meanwhile, more experienced cybersecurity professionals can automate repetitive tasks such as data collection, extraction, and basic threat search and detection, freeing up their time for higher-value cybersecurity work.
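As an illustration of what those ‘simple queries’ could look like in practice, here is a hedged sketch in which a plain-English question is translated into a detection query by a model. The query_llm function and the query syntax are hypothetical stand-ins, not any specific product’s API.

```python
# Sketch: a novice analyst asks a question in plain English and a model
# translates it into the SOC platform's query language. query_llm() is a
# stand-in for whichever model API is used; the query syntax is illustrative.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a generative AI model."""
    return "event_type=login AND result=failure AND count > 10 BY source_ip"

def nl_to_query(question: str) -> str:
    prompt = (
        "Translate this analyst question into a detection query.\n"
        f"Question: {question}\nQuery:"
    )
    return query_llm(prompt)

print(nl_to_query("Which IPs had more than 10 failed logins today?"))
```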

Human analysts often spend valuable time understanding the origin and impact of threats, taking necessary remediation steps, and generating comprehensive reports. Here, generative AI can significantly accelerate the process, freeing analysts to concentrate on what they do best and where they should be spending their time. Overall, it makes SOC teams more productive and focused on the right things.
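A minimal sketch of that acceleration, assuming a hypothetical summarise_with_llm model call and illustrative alert fields: the collected alert context is assembled into a prompt and the model drafts the incident report, leaving the analyst to review and approve.

```python
# Sketch of AI-assisted reporting: alert context goes into a prompt, the
# model drafts the summary, and a human reviews it before it ships.
# summarise_with_llm() is a placeholder for a real model call.

def summarise_with_llm(prompt: str) -> str:
    return "DRAFT REPORT: ..."  # placeholder model output

def draft_incident_report(alerts: list[dict]) -> str:
    context = "\n".join(
        f"- {a['time']} {a['host']}: {a['description']}" for a in alerts
    )
    prompt = (
        "Summarise these alerts into an incident report covering origin, "
        f"impact, and recommended remediation:\n{context}"
    )
    return summarise_with_llm(prompt)
```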

A consolidated view of insights collected across sources in the network, including sensors, servers, and devices, helps streamline the process of threat identification, leading to improved threat responsiveness. Through simple natural language queries and the right platform approach, cybersecurity teams can leverage solutions designed to expedite hunting and remediating existing threats.
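A rough sketch of that consolidation step, with illustrative field names: events from different feeds are normalised into a common shape and merged into a single timeline that one query can span.

```python
# Sketch: normalise events from different sources (sensors, servers,
# endpoints) into one shape, then merge them into a single sorted timeline.
from datetime import datetime

def normalise(source: str, event: dict) -> dict:
    """Map one raw event into a common schema (field names illustrative)."""
    return {
        "time": datetime.fromisoformat(event["timestamp"]),
        "source": source,
        "host": event.get("host", "unknown"),
        "detail": event.get("message", ""),
    }

def unified_timeline(feeds: dict[str, list[dict]]) -> list[dict]:
    """Merge all feeds into one chronologically ordered view."""
    merged = [normalise(src, e) for src, events in feeds.items() for e in events]
    return sorted(merged, key=lambda e: e["time"])
```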

However, while AI is great at solving constrained problem sets and executing tasks, it still lacks the ability to think outside existing boundaries and develop innovative solutions on its own. It may be able to tackle known and previously experienced threats, but it falls short against emerging ones, which is why a human feedback loop is essential to pick up on those nuances.

The power of human and machine

Generative AI-driven cybersecurity is only as good as the data on hand. Without the right data to train the model, generative AI cannot respond properly to adversaries’ constantly changing tactics.

Humans are still key to its success, specifically in training datasets and validating content so that AI can perform security use cases effectively. Tight human feedback loops allow new insights to be incorporated into training models, enabling cybersecurity teams to overcome the limitations of AI. This is especially critical, as adversaries persistently seek out new ways to exploit organisations’ vulnerabilities and existing rules.
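One way such a feedback loop might look in code, as a hedged sketch with illustrative structures and file names: analyst verdicts on the model’s detections are captured as labelled examples and folded into the next training cycle.

```python
# Sketch of a tight human feedback loop: each analyst verdict becomes a
# labelled example for the next training/evaluation set. The file name and
# record structure are illustrative, not any specific product's format.
import json

FEEDBACK_FILE = "analyst_feedback.jsonl"  # assumed location

def record_verdict(detection: dict, analyst_verdict: str, notes: str = "") -> None:
    """Append one human-validated example for the next training cycle."""
    example = {
        "features": detection,
        "label": analyst_verdict,  # e.g. "true_positive" / "false_positive"
        "notes": notes,
    }
    with open(FEEDBACK_FILE, "a") as f:
        f.write(json.dumps(example) + "\n")
```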

Codified learnings from expert threat hunters and security practitioners can transform large language models (LLMs) into reliable cybersecurity analysts and intelligent security operations centres. This gives organisations a solid foundation for leveraging AI and accessing deep cybersecurity expertise accumulated over the years.
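As a hedged illustration of what ‘codified learnings’ could mean in practice, the sketch below retrieves curated playbook entries written by expert practitioners and grounds the model’s prompt in them. The naive keyword matching stands in for a real retrieval system, and the playbook contents are invented for illustration.

```python
# Sketch: ground the model's prompt in expert-written playbook entries so
# its recommendations draw on validated expertise. Keyword-overlap matching
# is a toy stand-in for real retrieval.
PLAYBOOK = {
    "credential theft": "Isolate host, reset credentials, review auth logs.",
    "ransomware": "Disconnect affected systems, preserve forensics, restore from backups.",
}

def retrieve_guidance(alert_text: str) -> list[str]:
    """Return playbook entries whose topic appears in the alert text."""
    return [v for k, v in PLAYBOOK.items() if k in alert_text.lower()]

def build_grounded_prompt(alert_text: str) -> str:
    guidance = "\n".join(retrieve_guidance(alert_text)) or "No playbook match."
    return f"Expert guidance:\n{guidance}\n\nAlert: {alert_text}\nRecommended response:"
```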

In-depth, human-validated information on threat actors, their activities, and security incidents around the world provides organisations with the added telemetry to enhance strategies for triaging, investigating, and remediating attacks. Human-validated information on the outcomes of engagements, including breach investigations, risk assessments, and advisory services, can better position organisations against potential threats.

Ultimately, generative AI should be regarded as a teammate, not an independent process, in the cybersecurity ecosystem. Human-and-machine collaboration, backed by continuous upskilling, is a force multiplier to help you stay one step ahead of adversaries in the cyber arms race.

Fabio Fratucello is the chief technology officer, International at CrowdStrike
