Emails and SMS that pretend to be the real deal? That's bad. But scammers using AI audio to trick you? That's worse. Fortunately, there may already be a fix.
Scammers will do anything to get your account details, and that often means cooking up new scams you wouldn't dream of. We'd hope security alerts aren't infiltrating your sleep, but if they are, they're probably the usual assortment: phishing attempts, fake calls from Amazon, and so on.
However, criminals are getting more creative, and with the help of artificial intelligence, they're finding new ways to be more convincing.
With just a sample of someone's voice, scammers can create a deepfake of it and use it to ask for money over the phone. The approach sees generative AI paired with voice samples to impersonate family members, and thanks to tools commonly found around the web, plus audio that can be gleaned from numerous sources, scammers can convincingly pretend to be someone you know when they call asking for cash.
We've heard of this idea at least once before, and while these scams weren't widespread initially, they could become more commonplace.
Fortunately, security companies have been working on solutions, and at CES this year, McAfee revealed "Project Mockingbird", an AI system that uses behavioural and contextual detection models to work out whether the audio in a video is AI-generated.
Working at a claimed 90 percent accuracy rate, McAfee says it can detect audio in a video made as a deepfake, where neither the video nor the audio source is legitimate, or even a "cheapfake" equivalent, where the video is real but the audio is not. The company has tested it on a Taylor Swift deepfake that would look very convincing to some, running the model on the example to demonstrate its accuracy.
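McAfee hasn't published how Mockingbird's models work under the hood, but the general shape of AI-audio detection is well understood: extract acoustic features from a clip and train a classifier to separate genuine recordings from generated ones. Purely as an illustration, and not McAfee's actual method, here's a minimal Python sketch using the librosa and scikit-learn libraries, with synthetic stand-in audio in place of a real labelled dataset:

```python
# Illustrative sketch only: a toy classifier in the spirit of
# deepfake-audio detection. The "real" and "fake" clips below are
# synthetic stand-ins; a genuine detector would train on labelled
# recordings and far richer behavioural and contextual features.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

SR = 16000  # sample rate in Hz

def make_clip(fake: bool) -> np.ndarray:
    """Generate a one-second stand-in clip (placeholder for real data)."""
    t = np.linspace(0, 1, SR, endpoint=False)
    if fake:
        # Overly clean harmonic tone, standing in for synthetic speech
        return 0.5 * np.sin(2 * np.pi * 220 * t) + 0.25 * np.sin(2 * np.pi * 440 * t)
    # Noisier signal, standing in for a natural recording
    return 0.5 * np.sin(2 * np.pi * 220 * t) + 0.2 * np.random.randn(SR)

def features(clip: np.ndarray) -> np.ndarray:
    """Summarise a clip as its mean MFCCs, a common acoustic fingerprint."""
    mfcc = librosa.feature.mfcc(y=clip.astype(np.float32), sr=SR, n_mfcc=13)
    return mfcc.mean(axis=1)

# Build a small labelled dataset: 1 = AI-generated, 0 = real
X = np.array([features(make_clip(fake=bool(i % 2))) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"toy accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

A real system would pair acoustic cues like these with the behavioural and contextual signals McAfee describes, but the pipeline's shape is the same: features go in, and a real-versus-generated verdict comes out.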
As for where you can expect to find the technology, a representative for McAfee told Pickr:
We are still reviewing the productisation of these features. The current thinking is we will include them in McAfee+ at no added charge to end users, but this is subject to change as we are still looking at several ways to commercialise these protections.
Which means it should find its way into a security product sometime in the future, hopefully before these sorts of scams become more widespread.