About 31 percent of consumers now own voice-activated devices, up from 27 percent in 2018 and 14 percent in 2017. Voice assistant technology is a welcome development: it makes everyday tasks easier for owners and opens a new voice-shopping channel for merchants, but it comes with its own set of risks. In the past year alone, users have reported strangers hacking into their devices and issuing voice commands, announcing false North Korean missile attacks, turning up home thermostats, and hurling insults. Frightening as these incidents are on their own, they point to the potential for far more damaging attacks, because voice assistants have access to personal and payment information.
Nest, which is owned by Google, has said the main issue is weak user passwords combined with a failure to use two-factor authentication. The reality, however, is that even voice assistants with stronger security measures can fall victim to hacking. Attackers could use voice assistant devices and voice command apps on phones to access websites and make purchases without owners ever being aware of it, which means merchants who sell products through voice assistants will need to watch for fraudulent purchases coming through these channels.
You might be wondering how this works. The neural networks behind voice assistants can hear far more than humans can. In a noisy restaurant, for example, you can't process every background sound, but AI can. Speech recognition software can also pick up audio frequencies beyond the range of human hearing, which tops out at roughly 20 kHz.
That means hackers can issue commands that are silent to humans. One method is to hide malicious commands in white noise or background audio, for example in the soundtrack of online videos. As Grant Paulson, a tech writer at Writinity, puts it, "a study did this successfully, as students played hidden voice commands over loudspeakers and successfully got voice assistants to open websites on devices and switch the device to airplane mode."
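To get a feel for the masking principle, here is a toy sketch in Python. It mixes a deliberately quiet stand-in "command" (a tone burst, not real speech) into much louder white noise and measures the resulting command-to-noise ratio. The sample rate and levels are illustrative assumptions; real hidden-command attacks use adversarially optimized perturbations rather than a simple tone, but the point is the same: the payload can sit well below the noise a human perceives.

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed rate, typical for speech recognition pipelines

rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of audio

# Stand-in for a recorded voice command: a quiet 440 Hz tone burst.
command = 0.05 * np.sin(2 * np.pi * 440 * t)

# Background "white noise" track, much louder than the command.
noise = 0.5 * rng.standard_normal(len(t))

mixed = command + noise

# Command-to-noise ratio in dB: the command sits far below the noise floor,
# so a listener hears only static, yet the structured signal is still
# present in the waveform a recognizer receives.
snr_db = 10 * np.log10(np.mean(command**2) / np.mean(noise**2))
print(f"command-to-noise ratio: {snr_db:.1f} dB")
```

With these numbers the command lands around 23 dB below the noise, far past the point where a casual listener would notice anything but hiss.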
Another option for fraudsters is a Dolphin Attack, which creates and broadcasts a command at a frequency beyond the human hearing range. Because the attack depends on ultrasonic transmissions, the attacker must be in the vicinity of the target device for it to work. Researchers have already used such inaudible commands to make phone calls from a locked iPhone, and a Dolphin Attack can also make devices take photos, send messages, and visit websites.
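The signal processing behind this style of attack can be sketched as classic amplitude modulation: the "command" is modulated onto a carrier above 20 kHz, so the broadcast contains no audible content, and it is the microphone hardware's nonlinearity that demodulates it back into the audible band. The sketch below is a simplified illustration with made-up sample rates and frequencies, not parameters from the original research.

```python
import numpy as np

FS = 192_000          # sample rate high enough to represent ultrasound
CARRIER_HZ = 30_000   # above the ~20 kHz ceiling of human hearing
DURATION = 0.5        # seconds

t = np.arange(int(FS * DURATION)) / FS
command = np.sin(2 * np.pi * 300 * t)   # stand-in for a speech baseband signal
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)

# Classic AM: (1 + m(t)) * carrier. Everything in the result lives near
# 30 kHz, so humans hear nothing, but a nonlinear microphone can recover
# the 300 Hz baseband.
transmitted = (1 + 0.8 * command) * carrier

# Confirm the broadcast energy sits above the audible band.
spectrum = np.abs(np.fft.rfft(transmitted))
freqs = np.fft.rfftfreq(len(transmitted), 1 / FS)
audible = spectrum[freqs < 20_000].sum()
ultrasonic = spectrum[freqs >= 20_000].sum()
print(ultrasonic > 100 * audible)  # → True
```

The spectral check at the end is the whole trick in one line: virtually all of the transmitted energy is ultrasonic, which is why the attack is silent to bystanders.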
Protecting Against Threats
Voice assistant companies are continuously working to improve the security of their devices, though the details aren't always made public. It's important that lawmakers and researchers address both the short- and long-term safety challenges of voice recognition technology.
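One straightforward defense against ultrasonic injection is to check, or simply filter out, spectral energy above the audible band before audio reaches the recognizer. The helper below is a hypothetical illustration of that idea, not part of any assistant's actual API: it reports the fraction of a signal's energy above a 20 kHz cutoff, so a high value on incoming audio is a red flag.

```python
import numpy as np

def ultrasonic_fraction(samples, sample_rate, cutoff_hz=20_000):
    """Return the fraction of spectral energy above the audible cutoff.

    Hypothetical defensive check: a value near 1.0 means the signal is
    almost entirely ultrasonic, which no legitimate spoken command is.
    """
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), 1 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return float(spectrum[freqs >= cutoff_hz].sum() / total)

# Example: a pure 30 kHz tone sampled at 96 kHz is entirely ultrasonic.
fs = 96_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 30_000 * t)
print(ultrasonic_fraction(tone, fs))  # close to 1.0
```

In practice the cheaper mitigation is a hardware or software low-pass filter on the microphone feed, since audio above 20 kHz carries nothing a human user intended to say.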
It's natural that as the demand for voice recognition goes up due to the popularity of the Internet of Things, the risk of a data breach gets greater as well. Kaitlyn Doll, a cybersecurity expert at Draft Beyond, explains that "this means that fraud prevention companies will need to be on top of building and maintaining two-way databases of voice data protection to make sure that companies can know immediately what's a legitimate customer contact."
So far, voice command hacks have been isolated incidents. But we know that nefarious actors quickly learn to turn new technologies to their advantage, so it's important to follow the latest recommended security practices to protect your company from voice hacking.
This is still a new field, so it will be interesting to see what steps the industry takes to protect its products. In the meantime, we all need to know how best to protect our devices and companies from fraudsters.