By Vince Mazza, Guest columnist
Artificial Intelligence (AI) is quickly becoming mainstream in our lives as its uses and applications continue to evolve. In the business world, AI can be used to write letters, announcements, marketing materials, and reports. It can analyze past performance data and current economic indicators to help sales teams predict future trends. AI can assist across virtually every department of a company, with users observing that it saves them time and improves processes. Despite the many advantages of using AI, though, it’s essential for us all to approach this new technology with a degree of caution. That applies in particular to cybersecurity.
You might say that AI is a double-edged sword when it comes to cybersecurity. It can offer strong defenses against outside cyber threats, yet it can also introduce new vulnerabilities to a company’s infrastructure when it falls into the wrong hands.
How can both be true? In this article, we’ll examine how AI functions, how it can both cause and prevent cyber attacks, and what you should know about using AI wisely.
QUICK ARTICLE LINKS
What is AI? How does it work?
How AI can be used to cause cyber attacks
Using AI to fight AI-enhanced cyber-attacks
Tips for using AI safely
Going forward
What is AI? How does it work?
AI is a rapidly evolving technology that simulates human intelligence using machines, especially computer systems. AI covers a wide range of cognitive functions that we associate with the human mind, from the simple to the complex. Part of how AI works is through “machine learning,” which allows systems to automatically identify features, classify information, find patterns in data, make determinations and predictions, and uncover insights. AI uses algorithms to create machine learning models that continuously train the systems to increase their accuracy.
It’s important to note that AI performs based on the data that it uses. If faulty or fraudulent data goes into the algorithm, then AI might reach incorrect conclusions or be used for fraudulent purposes. The concept of AI has been with us for decades, although it’s really in the last few years that it has come into prominence. One way to think of AI is as all of documented humanity encapsulated in the mind of a 10-year-old, who is making decisions based upon that data.
If that “10-year-old” has been exposed to good role models, the outcome will be far better than if it has been exposed to bad role models, like cyber criminals.
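To make the “machine learning” idea concrete, here is a minimal sketch in Python (all names and example data are hypothetical, not any production system): a toy classifier that learns word frequencies from labeled example messages and scores new ones. Its judgment is only as good as its training data, which is exactly the point above.

```python
from collections import Counter

def train(messages):
    """Count word frequencies across a list of labeled example messages."""
    counts = Counter()
    for text in messages:
        counts.update(text.lower().split())
    return counts

def score(text, spam_counts, ham_counts):
    """Score a message: a positive score means it looks more like spam."""
    total = 0
    for word in text.lower().split():
        total += spam_counts[word] - ham_counts[word]
    return total

# Tiny illustrative training sets -- real systems learn from millions of examples.
spam = train(["urgent wire transfer now", "claim your free prize now"])
ham = train(["quarterly report attached", "meeting notes from monday"])

print(score("free prize transfer", spam, ham))     # leans spam
print(score("quarterly meeting notes", spam, ham)) # leans legitimate
```

Feed this model mislabeled or fraudulent examples and its scores become worthless, which is the “10-year-old with bad role models” problem in miniature.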
How AI can be used to cause cyber attacks
Unfortunately, cyber criminals have the same access to AI that everyone else does. And they can use AI to build their attack strategies with a significantly higher chance of a successful breach or attack.
There are two AI models in use: Traditional AI and Generative AI.
Traditional AI solves specific tasks with predefined rules. Generative AI focuses on creating new content and data. This is an important distinction.
Generative AI uses deep-learning models that take raw data (such as all of Wikipedia) and learn from it to generate statistically probable outputs when prompted to do so. Generative AI uses unsupervised learning, whereas Traditional AI often employs supervised learning and discriminative models. To highlight the difference, think again of the analogy of the 10-year-old. For how long would you leave a 10-year-old unsupervised?
Generative AI is quite dangerous in the wrong hands and can be used in cyber attacks such as phishing, smishing (SMS phishing), and other social engineering operations.
Imagine phishing and smishing messages with highly convincing content that can mimic the language, tone and design of legitimate emails. AI can eliminate awkward diction, misspellings, grammatical errors and sloppy graphics that had previously made it easier to detect malicious messages.
With these AI advantages, hackers can make emails look more legitimate. AI can also impersonate people, such as bosses, mimicking their vernacular almost perfectly. This level of precision with Generative AI is something the world has never previously seen.
This AI technology is sophisticated enough to fool people by expanding its reach to include a person’s hobbies, other contacts, or events in their lives. It can even produce a “deep fake” of a boss’s voice, or reference news stations that the intended victim watches.
One of the more frightening components of all of this is how AI can be used in malware. Generative AI uses machine learning to learn the environment. Malware can adapt to security measures and even automate the extraction of valuable data from compromised systems. And the AI continues to learn during an attack. It changes to find the most effective attack.
On top of all that, AI makes these techniques more affordable for less-skilled attackers.
Using AI to fight AI-enhanced cyber-attacks
We’ve seen how AI can be used by cyber criminals to deploy more successful attacks. But that is not the entire story. It is important to know, also, that AI can be used to fight cyber attacks, when the technology is in the right hands.
When fighting AI-enhanced cyber threats, you don’t want to “bring a knife to a gun fight.” The best way to fight against AI is by using AI.
Here are a few ways that AI can be used to thwart the actions of cyber criminals:
- AI-Specific Threat Detection: AI can sift through significant amounts of data to identify abnormal behavior and malicious activity. It can detect abnormalities and take action, including isolating machines and stopping an attack in its tracks.
- Real-time Continuous Monitoring of IoT devices and edge networks can be used to detect anomalies and intrusions, identify fake users, mitigate attacks and quarantine infected devices. AI can provide continual assessment of the trustworthiness of devices, users and applications and can give an immediate response, shortening the time needed to identify fraudsters.
- By using data analysis and algorithms, AI can identify spam and phishing emails, taking into account both message content and context when looking for warning signals. This continuous monitoring analyzes links and attachments across all email communications in the business when phishing attacks occur.
- A strong security community is another key means of combating AI-enhanced cyber attacks. The exchange of information, best practices and threat intelligence with other cyber professionals and experts can help us stay resilient in the face of evolving AI-related security risks.
- Using AI to identify advanced malware: As each sample of malware passes through the model, the AI becomes stronger. Deep learning AI has enabled companies to optimize their malware protection strategies by increasing the quantity and accuracy of the data they analyze.
- AI in authentication protection: As cyber criminals evolve their tactics, AI plays a crucial role in improving authentication processes. Traditional authentication executes its threat protection only at the login stage. AI systems can detect and respond to threats in real time throughout a user’s session. For example, if a user suddenly moves to a new location and device, or attempts to access financial information that isn’t relevant to their work, they’ll be prompted to verify their identity.
- Breach risk prediction and AI: AI can predict how and where organizations are most likely to be breached, so that they can plan resource and tool allocation toward areas of weakness.
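As a rough illustration of the anomaly detection described above (a simplified sketch using assumed data, not any vendor’s implementation), a monitoring system can flag activity that deviates sharply from a user’s historical baseline:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a reading that sits more than `threshold` standard
    deviations away from the user's historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

# Hypothetical example: megabytes downloaded per session for one user.
baseline = [12, 9, 14, 11, 10, 13, 12, 11]
print(is_anomalous(baseline, 11))   # ordinary session, not flagged
print(is_anomalous(baseline, 480))  # flagged for review
```

Real platforms weigh many signals at once (location, device, time of day, data volume), but the underlying idea is the same: learn what normal looks like, then react quickly when behavior falls outside it.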
Tips for using AI safely
Knowing that AI can be used by cyber criminals to advance their purposes, but also knowing that AI in the right hands can be an effective tool in thwarting attacks, it makes sense to look at a few ways that we can use AI safely.
- Follow proper “cyber hygiene” and “Internet Maturity 101.” Understand how AI works, be mindful of privacy, use strong passwords, be aware of bias, keep software up to date, monitor usage and don’t rely on AI alone to protect against cyber threats.
- Be cautious with free AI systems such as public cloud systems. Any input from you on such systems might end up as output to someone else.
- Don’t feed confidential information to ChatGPT or other AI systems – such as financial information, Social Security numbers, or anything along those lines.
- Don’t share any personal data, such as names, health data or images, whether it comes from you or your customers.
- Don’t upload process flows, network diagrams or snippets of software code.
- Don’t blindly trust the answers given by AI. While often correct, the answers can be wrong, outdated, or biased because the input data was also biased. Using multiple sources to verify information is always a good idea.
- Choose AI apps carefully. Hackers have taken advantage of the demand for AI apps by creating fake ones that trick users into downloading them. Downloading a fake app gives a hacker an easy opportunity to steal your data.
- Don’t rely on AI alone to make crucial business decisions.
- Be careful with code generated by AI tools. Computer programmers have started using AI tools to write code, and there is an inherent risk that the generated code carries errors.
Going forward
Acknowledging that AI is an evolving technology that can be used for both good and criminal purposes, it’s important to know as much as you can about how it works. For the business owner who wants to concentrate on running their business, it makes sense to partner with a Managed Services Provider (MSP) and a cybersecurity company that can guide your efforts and keep your network protected. Let the experts give you the competitive advantage that your business needs.
Vince Mazza is co-founder and Chief Executive Officer of Guard Street Partners, LLC (Guard Street), a national cybersecurity company based in Wheaton, IL. His experience in property protection, data privacy and cybersecurity includes time as President and CEO of MH Equity Services LLC and VP at General Electric. He hosts the Guard Street Cybersecurity radio show/webcast and is viewed as a national leader in the field of cybersecurity.