Introduction

There is no denying it: AI is transforming the world.

Sooner or later, every industry will be affected by this technology. However, not all of that change will be for the good.

AI has the capacity to transform and enhance not only legitimate processes and enterprises, but also illegitimate ones, including cybercrime.

With AI, cybercrimes can be committed on an entirely new scale with a much higher success rate. This means it is becoming more important than ever for businesses and organisations to improve their level of cybersecurity. Otherwise, they will be left vulnerable.

Below we go through the main ways in which hackers are using AI for cybercrime and what this means for businesses and organisations:

Automating Phishing:

Phishing is one of the most common forms of cyberattack and AI is making it easier than ever for hackers.

Phishing usually takes the form of a fraudulent email pretending to come from a trusted source. The purpose of a phishing email is to get the victim to reveal sensitive information or download malware so that a hacker can take control of a system or steal information.

Before generative AI, it was relatively easy to recognise phishing emails because they often contained spelling mistakes and bad grammar. This is no longer the case.

AI tools like ChatGPT mean that hackers can now automate the creation of phishing emails with perfect grammar and spelling. Not only that, hackers can also use ChatGPT to perfectly replicate a company’s communication style and language or impersonate an executive, making it very difficult to tell the difference between legitimate emails and fraudulent ones.

The impact of ChatGPT in phishing has been enormous. It is estimated that since the launch of the tool in late 2022, there has been a 1,265% increase in phishing emails.

What does this mean for businesses and organisations?

The astronomical rise in phishing attempts means that, whether you know it or not, your business is constantly being bombarded by fraudulent emails. This means your employees are constantly at risk, especially now that phishing emails can be made to look so legitimate.

The best way to protect yourself and your employees is therefore to have a system in place that analyses all incoming email and ensures fraudulent messages never reach an inbox in your organisation. In other words, businesses should use e-mail filtering (such as Mailscreen) that screens all traffic and stops threats from getting through and harming your organisation and your people. In addition, every organisation should implement a correct DMARC policy with p=reject.
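For context, a DMARC policy is published as a DNS TXT record at `_dmarc.<yourdomain>`, and the `p=reject` tag tells receiving servers to refuse mail that fails authentication. As a minimal illustration (the record string below is a made-up example, not a live DNS lookup), here is how such a record breaks down into tags:

```python
# Minimal sketch: split a DMARC TXT record into its tag/value pairs
# and read out the enforcement policy. The record string is a
# hypothetical example; in practice it is fetched from DNS.

def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record string into a dict of tags."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Example record with the recommended reject policy:
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # prints "reject"
```

With `p=reject` in place, spoofed mail claiming to come from your domain is refused outright rather than delivered or quarantined.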

Generating Malware:

Beyond phishing, generative AI is also making it a lot easier for hackers to develop more destructive and intelligent kinds of malware.

For one, AI tools can be used to quickly generate malware in multiple programming languages, massively increasing the capacity of hackers to attack different kinds of systems. Furthermore, AI-powered malware can learn, which means that it can be trained to avoid detection software, as well as find and exploit system weaknesses. The more it does this, the better it gets.

The good news is that the use of AI to quickly generate intelligent malware from prompts is not yet widespread. There are a lot of safeguards on tools like ChatGPT, AlphaCode, and GitHub Copilot to prevent their use for the generation of malicious code. However, if there is one thing we know about hackers, it is that they find ways to go around the rules and safeguards.

Hackers are becoming specialists in writing prompts (known as “jailbreaks” on the dark web) that can manipulate AI tools into generating code and content they are not supposed to, including malicious code and output containing sensitive information. So, even if the use of legitimate AI tools to generate malware is not yet widespread amongst hackers, they are learning and adapting. It’s only a matter of time until the AI tools we all use become part of the hacker’s basic arsenal.

It’s also only a matter of time until we see powerful AI tools, specifically trained to generate malware and phishing emails, appearing on the dark web. In fact, a couple already exist, such as WormGPT and FraudGPT.

WormGPT and FraudGPT can, amongst other things, generate elaborate phishing content, write malware and harmful code, and analyse systems for vulnerabilities. With time, these tools will only become more advanced and powerful.

Preparing for the future:

The cybersecurity landscape may seem bleak. After all, what can businesses do against the automated creation of intelligent malware? How do you protect yourself from hackers with such powerful tools at their disposal?

It is more important than ever to have powerful cybersecurity systems in place that are trained to recognise and respond to the new kinds of threats that can be generated with AI. This means having systems in place that can continuously assess risks and anomalies so defences can be adapted in real-time in response to increasingly adaptive malware.
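As a loose illustration of what continuously assessing anomalies can mean in practice, the sketch below flags unusual activity with a simple z-score check. The login counts and threshold are invented for this example; real security products use far richer behavioural models than this.

```python
# Illustrative sketch only: flag anomalous daily login counts using a
# z-score threshold. Values and threshold are made up for illustration.

from statistics import mean, stdev

def anomalies(counts, threshold=2.0):
    """Return indices of values deviating more than `threshold`
    standard deviations from the mean."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mu) / sigma > threshold]

daily_logins = [102, 98, 105, 99, 101, 97, 480]  # last value is a spike
print(anomalies(daily_logins))  # → [6]
```

The point is not the specific statistic but the pattern: a baseline of normal behaviour is learned, and deviations from it trigger a response, allowing defences to adapt as the threat changes.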

With malware and tools that can analyse and exploit system vulnerabilities, it is also vitally important to think about how you build your systems and networks. This includes reducing potential avenues of attack by enabling proper authentication mechanisms and routines, segmenting your networks, limiting user privileges, encrypting sensitive data, and training your employees so they are less likely to fall victim to more sophisticated phishing attempts.

Cybersecurity in the time of AI requires a dynamic, 360-degree, 24/7/365 approach.


Need a cybersecurity partner that can fully protect you from existing and emerging threats?

Contact us for a free consultation to review your current security systems and processes: