What good AI cyber security software looks like in 2022

This article originally appeared in issue 28 of IT Pro 20/20, available here. To sign up to receive each new issue in your inbox, click here

Weaponised artificial intelligence (AI) is no longer some futuristic sci-fi nightmare. Autonomous killer robots aren't out to get us just yet, but AI technologies such as machine learning have been adopted by criminal gangs who, like any ambitious organisation, want to give their operations an edge.

One of the best-known botnets, TrickBot, is a prime example of a once standard Trojan that's now brimming with AI capabilities. Its creators have added intelligent algorithm-based modules which, for instance, calculate how to hide in a specific target system, making it almost impossible to detect.

Imaginative attackers are also using AI to scan for minute vulnerabilities in systems, process vast stores of personal data, and create deepfakes so realistic they'd fool a CEO's mum. Tools to achieve this nefarious magic are widely available through the dark web, but more frightening still is the prospect of criminals weaponising organisations' own AI by infiltrating and manipulating the data that informs it.

The implications for global security are indeed grim. Business leaders also fear lagging behind in the AI security race, with 60% of those surveyed by Darktrace last year suggesting human-driven responses are failing to keep up. Nearly all (96%) have begun to guard against AI-driven attacks, but with threats escalating, what tools and systems are available?

How AI learns to guard your data

To face down AI threats, you need AI defences. More than two-thirds (69%) of organisations surveyed in a Capgemini study said AI security is urgent, and this number is likely to grow as more are hit by AI-driven attacks. "I don't know any IT security vendor that hasn't included machine learning algorithms in security toolsets," says Freeform Dynamics analyst Tony Lock. "Security was one of the earliest sectors to use machine learning because it's so good at looking for patterns, especially anomalies that might indicate a threat."

Traditional security tools can't keep pace with the sheer scale of malware and ransomware created every week. AI, by contrast, can detect even the tiniest potential risk before it enters the system, without having to constantly run computer scans or be told what threats to look out for. Instead, it learns a baseline and then automatically flags anything out of the ordinary.
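
For a sense of what that looks like in practice, here's a minimal sketch of the baseline-then-flag approach using an off-the-shelf isolation forest; the activity features and numbers are invented for illustration and aren't any vendor's real model.

```python
# Minimal sketch of baseline-then-flag anomaly detection.
# Feature choice and thresholds are illustrative, not any vendor's real model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline: a week of "normal" activity, one row per event
# (e.g. bytes transferred, login hour, failed-auth count).
rng = np.random.default_rng(0)
baseline = np.column_stack([
    rng.normal(500, 50, 10_000),   # bytes (KB)
    rng.normal(13, 2, 10_000),     # hour of day
    rng.poisson(0.2, 10_000),      # failed logins
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)                # learn what "ordinary" looks like

# New events: one typical, one clearly out of the ordinary.
new_events = np.array([
    [510, 14, 0],                  # typical
    [9000, 3, 12],                 # huge transfer at 3am with failed logins
])
flags = model.predict(new_events)  # -1 = anomaly, 1 = normal
for event, flag in zip(new_events, flags):
    if flag == -1:
        print("ALERT: anomalous activity", event)
```

A production system learns from far richer telemetry, of course, but the pattern is the same: model the ordinary, then surface whatever doesn't fit.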

AI apps and components are available in cloud services from the likes of Amazon and Microsoft, and can be added to existing systems without interrupting workflows. Everyone can get on with their jobs with minimal risk of mistakes, and the tools are designed to scale as required. Microsoft Azure's secure research environment for regulated data is a good example. It uses smart automation to supervise and analyse the user's business data, while its machine learning is ready to leap into action if it detects a blip. Similarly, email scanners such as Proofpoint use machine learning to detect malicious emails by spotting clues far too subtle for a human to see.
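
Proofpoint doesn't publish its internals, but the basic idea behind ML-driven email filtering can be sketched in a few lines; the training phrases and labels below are made up purely for illustration.

```python
# Toy illustration of ML-based email filtering; the phrases and labels
# are invented for the example and bear no relation to any real product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_emails = [
    "Your invoice is attached, please review before Friday",
    "Quarterly report and meeting agenda for next week",
    "URGENT: verify your account now or it will be suspended",
    "You have won a prize, click this link to claim immediately",
]
labels = ["benign", "benign", "malicious", "malicious"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_emails, labels)

print(clf.predict(["Please verify your password by clicking here now"]))
# A real scanner learns from millions of samples and far richer signals
# (headers, sender reputation, URLs), not four example sentences.
```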

The more these tools are used, the more accurate and faster they get. Response times are slashed as AI tools learn from their own experiences and from those of other organisations, through analysis of samples shared in the cloud. "The AI might miss the first attack, but then it'll share that knowledge with other AI systems and create new ways to detect the new attack, and so on," says Adam Kujawa, security evangelist at Malwarebytes. Eventually, says Kujawa, the user won't encounter threats at all.

Beyond anomalies: Automation, scale and prediction

Automated threats can't be tackled using legacy security tools, but AI-powered cyber security tools can help. Deployed in a system, algorithms build a thorough understanding of activity such as website traffic, and learn to automatically and instantly distinguish between humans, good data, bad data, and bots.

Martin Rehak, CEO of security firm Resistant AI and lecturer at Prague University, gives the example of large-scale financial fraud that exploits organisations' own automation systems. "AI and machine learning are the only scaling factors that can supervise these systems effectively in real time," he says. The system then continuously refines the relationships between its algorithms, getting better at evaluating documents and behaviour in real time and potentially uncovering all kinds of fraud.

AI also prioritises risks far more intuitively than a human can. "Technology has evolved to allow prioritisation backed by AI algorithms, which computes risk score," explains Naveen Vijay, VP of threat research at risk analytics firm Gurucul. "This approach allows it to automate not only the detection of incidents but also the mitigation process."
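
As a rough, hypothetical illustration of the approach Vijay describes (not Gurucul's actual model), a risk score can be a weighted blend of signals, with the highest-scoring alerts handled first and the worst ones triggering an automatic response; the signals, weights and threshold here are assumptions.

```python
# Illustrative risk scoring and prioritisation; the signals, weights and
# threshold are assumptions for the example, not any vendor's real model.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    anomaly_score: float      # 0-1, from the detection model
    asset_criticality: float  # 0-1, how important the asset is
    exploit_known: bool       # is a public exploit available?

def risk_score(a: Alert) -> float:
    score = 0.5 * a.anomaly_score + 0.3 * a.asset_criticality
    if a.exploit_known:
        score += 0.2
    return round(score, 2)

alerts = [
    Alert("print-server", 0.40, 0.20, False),
    Alert("payroll-db", 0.85, 0.95, True),
    Alert("dev-laptop", 0.70, 0.40, False),
]

# Highest risk first; anything above the threshold is handled automatically.
for a in sorted(alerts, key=risk_score, reverse=True):
    action = "auto-isolate" if risk_score(a) >= 0.8 else "queue for analyst"
    print(f"{a.host}: risk {risk_score(a)} -> {action}")
```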

AI helps you prioritise resources, too. By enabling you to analyse vast amounts of data and create a detailed record of all your assets, an AI system can predict how and where you're most likely to be compromised, so you can organise your defences to protect the most vulnerable areas.

Deep learning, attack simulations and beyond

At the moment, AI defences can't do all the work by themselves. They still have to be correctly managed by humans. "The common mistake I see is companies paying for AI systems then not configuring them correctly," says Jamie King, information and cyber security manager at IT provider TSG. "I personally like Microsoft Sentinel as part of a security strategy, because it's cost-effective and works well. But organisations need to be aware that it is an option, and quality management needs to be in place."

AI is great for spotting anomalies, but a human is still needed to make the final call, agrees Phil Bindley, MD of cloud and security at Intercity. "Having a blend that uses both AI and humans helps to spot false positives. Solutions like Check Point Harmony inform about potential threats based on AI and machine learning, then require human interaction to make a choice on the best course of action."

Just as driverless cars are set to transform transport, though, autonomous AI systems may render human supervision unnecessary. Already, the most advanced AI security services offer elements of deep learning, which doesn't depend on human-designed algorithms but instead on neural networks, which comprise many layers of analytical nodes and are effectively artificial brains. Such a system could learn to "know" the difference between benign and malicious activity.
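
A toy version of that idea, using a small off-the-shelf neural network and synthetic "activity" data invented for the example, might look like this:

```python
# Toy neural-network classifier separating "benign" from "malicious" activity.
# The synthetic features are invented for illustration; production systems
# use far larger networks trained on real telemetry.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Features per event: [connections/min, bytes out (MB), privileged commands]
benign = rng.normal([5, 2, 0.1], [2, 1, 0.3], size=(500, 3))
malicious = rng.normal([60, 40, 4], [15, 10, 1], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=1)
net.fit(X, y)

print(net.predict([[4, 1.5, 0], [75, 55, 5]]))  # expect [0 1]
```

The network learns the boundary between the two kinds of behaviour from examples, rather than from rules a human has written down.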

Security teams can already harness the predictive powers of AI by building models that help them predict what malware will do next, and then build AI workflows that swing into action automatically when an attack or variant is detected. AI prediction is evolving fast, however. Firms such as Darktrace are developing smart attack simulations that'll autonomously anticipate and block the actions of even the most inventive AI-tooled cyberpunk.
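
Stripped to its essentials, such a workflow is orchestration wrapped around the model: when a detection crosses a confidence threshold, a playbook fires without waiting for a human. The sketch below is generic and hypothetical; quarantine_host() and open_ticket() stand in for whatever EDR or SOAR hooks an organisation actually has.

```python
# Generic, hypothetical response playbook; quarantine_host() and open_ticket()
# stand in for whatever EDR/SOAR integrations an organisation really uses.
def quarantine_host(host: str) -> None:
    print(f"[action] isolating {host} from the network")

def open_ticket(host: str, detail: str) -> None:
    print(f"[action] ticket opened for {host}: {detail}")

def handle_detection(host: str, verdict: str, confidence: float) -> None:
    """Act automatically on high-confidence detections, escalate the rest."""
    if verdict == "malicious" and confidence >= 0.9:
        quarantine_host(host)                       # contain first
        open_ticket(host, f"auto-contained (confidence {confidence:.2f})")
    elif verdict == "malicious":
        open_ticket(host, "needs analyst review")   # human makes the call

handle_detection("payroll-db", "malicious", 0.96)
handle_detection("dev-laptop", "malicious", 0.62)
```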

"Proactive security and simulations will be incredibly powerful," says Max Heinemeyer, VP of cyber innovation at Darktrace. "This will turn the tables on bad actors, giving security teams ways to future-proof their organisations against unknown and AI-driven threats."

Jane Hoskyn has been a journalist for over 25 years, with bylines in Men's Health, the Mail on Sunday, BBC Radio and more. In between freelancing, her roles have included features editor for Computeractive and technology editor for Broadcast, and she was named IPC Media Commissioning Editor of the Year for her work at Web User. Today, she specialises in writing features about user experience (UX), security and accessibility in B2B and consumer tech. You can follow Jane's personal Twitter account at @janeskyn.
