AI is poised to make our life easier, for the purposes of good, but what happens when it’s used for bad things, too? New software happens.
Artificial intelligence everywhere is already a consistent trend for 2024, and AI PCs are a part of that, but this could lead to a new category of security software.
This week, one of the makers of security platforms and software announced what that was: security software specifically focused on protecting AI PCs, and from a security approach you might not have considered: the very prompts designed to make your life that much easier.
Right now (in 2024), prompts make up much of what we know AI for.
Regardless of whether you use it to make an image out of nothing or make music out of nothing, you need a prompt to make an AI model do something to make something, to make something out of nothing.
It’s how Copilot works when you type text into the feature in Windows 11, as the model takes your text prompt, analyses the semantics of the language, looks up what it can do, and spits out a result. Newer computers will be able to do much of this without needing a connection to the internet, iterating all the AI “on-device”, which is largely what this year’s AI trend is all about.
In a world where AI is everywhere, you may not necessarily need a web connection to make AI happen. It can just happen on your computer.
That’s theoretically great, but what happens when it’s not?
AI vulnerabilities have a security solution
“AI applications are vulnerable in ways that other applications are not,” said Tim Falinski, Vice President of Consumer for Trend Micro in Asia Pacific, Middle East, and Africa.
He told Pickr that these vulnerabilities can include a risks where prompts cause AI applications to “misbehave” because of the instructions being supplied, or because the model in how an AI works has been tampered with, changing its behaviour.
“If an AI application has been tampered with in some way, it can be directed by a malicious actor to do things such as steal sensitive information you may be storing on your PC,” he said.
We already live in a world where security software can’t help us against scams and SMS phishing, relegating security software’s status amongst many. You need it for internet security of sorts, and most people want some form of it, but with AI PCs at threat, what can be done?
It turns out a new form of internet security is required, and one that can safeguard the AI component of your new PC.
“The risks that come with using AI applications on your local device are very different from the traditional “malware” risks such as viruses or phishing or ransomware that traditional antivirus solutions protect,” said Falinski.
“If you choose to buy an AI PC and only use traditional antivirus protection on it, you are not 100% protected as traditional [antivirus] is not designed to protect you from the risks of AI applications running locally on your device.”
With that in mind, Trend Micro has a solution, and it likely won’t be alone.
Later in the year, Trend plans to launch a security solution for consumer AI PCs in what it says is a world first. The security platform will essentially aim to safeguard AI applications while also using neural processing units inside the latest chips to handle email security.
Essentially, Trend Micro will use AI models and on-device AI to improve security for emails while protecting the AI on your computer from being tampered with.
That’s the goal: thwarting potential AI security issues before they happen, and using what is learned to better improve security on the whole.
It’s a lofty goal, too, and one that Trend Micro’s press release on the matter essentially warns about, thanks in part to a particularly long set of caveats at the bottom related to some quotes in the initial story. As such, we’re going to throw those in an accordion you can read for yourself (below).
Certain statements included in this press release that are not historical facts are forward-looking statements. Forward-looking statements are sometimes accompanied by words such as “believe,” “may,” “will,” “estimate,” “continue,” “anticipate,” “intend,” “expect,” “should,” “would,” “plan,” “predict,” “potential,” “seem,” “seek,” “future,” “outlook” and similar expressions that predict or indicate future events or trends or that are not statements of historical matters. These forward-looking statements include, but are not limited to, statements concerning Trend’s AI related security solutions. These statements are based on our current expectations and beliefs and are subject to a number of factors and uncertainties that could cause actual results to differ materially from those described in the forward-looking statements. Although we believe that the expectations reflected in our forward-looking statements are reasonable, we do not know whether our expectations will prove correct. You are cautioned not to place undue reliance on these forward-looking statements, which speak only as of the date hereof, even if subsequently made available by us on our website or otherwise. We do not undertake any obligation to update, amend or clarify these forward-looking statements, whether as a result of new information, future events or otherwise, except as may be required under applicable securities laws.
The crux of the caveats is that while Trend Micro is stating its belief in AI and protecting systems is “reasonable”, something that might be found from quotes uttered by experts at the company, it doesn’t know if these “expectations will prove correct”. That’s our takeaway, but we are not lawyers.
It’s definitely something to be gambling security of a system on, though consumers may not get much of a choice.
If Trend Micro is the first company with an AI security platform, it likely won’t be the last, and like every form of technology, you can expect cybercriminals to exploit it. We guess it’s better to have some security attempting to do this job than none doing, well, nothing.
“We are not only leveraging AI for security, but also securing AI itself,” said Kevin Simzer, the Chief Operating Officer at Trend Micro.
“The value of this AI era will ultimately depend on how secure it is, from the enterprise level to the individual consumer. Trend is addressing both, while many in the industry are not yet doing either.”
Is bad AI the next stage in antivirus and security?
The first computer virus was in 1971, and since then the category has grown rapidly, exploding into security exploits of different types.
We no longer think in terms of “virus” for a computer, but into other categories, such as malware and ransomware, worm, trojan, spyware, stalkerware, and so on.
The category is so large that internet security and antivirus software is now largely one and the same, and if you don’t have protection, you’re essentially putting your computer and its data at risk.
Protecting that data is paramount, because the data is what really matters.
You can always replace hardware, even if it can be costly, but your files — your documents, photos, movies, and more — is often irreplaceable. That’s largely what you’re protecting, plus anything else you don’t want to get out in the open, such as your identity and other details meant to be secure.
Moving forward, as more computers become AI PCs simply because of how many companies are releasing them, the threat of tampered AI doing just that becomes a bigger deal. And that’s where the risk will be.
Fortunately, even if you have an older PC, your solution could end up being the same.
“If you aren’t using an AI PC, you can and should still use traditional device security solutions,” Trend’s Falinski told Pickr.
“When our AI security product becomes available later this year, you would still be able to use our AI security solution on a regular PC or an AI PC, as it will be protecting you from the traditional vulnerabilities as well as the new risks presented by using local AI applications,” he said.