How to Recognize When Your Smart Speaker Is Eavesdropping

Not everything has to be a platform for something else, and this is especially true for your smart home. While you can complement your smart speakers with skills, actions, and apps – we’re mainly talking about Amazon Echo and Google Home devices – you should think twice about what you’re installing. Vulnerabilities that none of the companies have yet to fix could leave you vulnerable to phishing or eavesdropping.

While it’s not a fancy laser attack this time around, researchers at Security Research Labs have confirmed that vulnerabilities that emerged several months ago in how voice commands are processed by Amazon and Google speakers have yet to be fixed. As a result, rogue apps (which both companies seem unable to detect) can phish sensitive security information from unsuspecting users.

Here is SRLabs’ description of how the application exploits the vulnerabilities:

1. Create a seemingly innocent application that includes an intent triggered by “start” that accepts the following words as slot values ​​(user-supplied variables that are redirected to the application). This intent behaves like a fallback intent.

2. Amazon or Google check the security of the voice application before publishing it. We changed the functionality after this review so that there is no need to retest. Specifically, we change the welcome message to a fake error message, making the user think the application is not running. (“This skill is currently not available in your country.”) The user now assumes that the voice application is no longer listening.

3. Add an arbitrary long pause in the sound after the error message by making the voice application “say” the “�” character sequence. “(U + D801, period, space). Since this sequence is difficult to pronounce, the speaker is silent while active. If you force the application to “speak” characters multiple times, this silence increases.

4. Finally, after a while, break the silence and play the phishing message. (“An important security update is available for your device. Say” Start update “and enter your password”). Everything that the user says after “start” is sent to the hacker’s server. This is because the intent, which previously acted as a fallback intent, now stores the user-entered password as a slot value. “

In another attack vector, a developer can create a listening routine by sending generic words such as “address” as triggers; Combine this with falsely announcing a “stop,” such as when the speaker is “goodbye,” and increase the amount of time the speaker remains active using the “hidden” character trick described earlier. If a person utters a trigger word at some point during this extended time, the speaker records and sends everything said to the developer.

If it’s hard to understand, here’s a video of the exploit in action:

The same hacking style is slightly different on Google devices, but is also more effective since the trigger words are not needed (and the eavesdropping period can last forever):

What to do with additional smart speaker skills

While Amazon and Google are supposedly stepping up their validation processes to reveal the skills, actions, and other integrations your device is trying to use, that’s not a good thing. As ThreatPost writes, remote applications using these exploits can be resubmitted (and even approved). That, and the whole aspect of bait and tampering – where a legitimate skill is asserted only to be replaced by code with a more malicious intent – is problematic in itself.

Our advice? Stick to the skills and practices of renowned developers that have already been tried and tested by others. For example, get your athletic performance on an ESPN skill with a lot of reviews, not on some random user’s “athletic skill” they created last week. You can always bookmark it and come back to check it out later to see if other users think it’s legal or not.

Most importantly, look at your smart speaker from time to time . Do not assume that the end of the response — a “rattling” sound or some other message — means that your speaker has finished processing commands. Know what the physical signals of your device are and look at it, not yell at it when you activate third-party skills. If your device remains active in some strange way, this is a great sign that a skill or action you are using may warrant further investigation.

Finally, cut back on your skills and activities . If you can’t remember the last time you used a third-party app or service with your smart speaker, turn off its ability to access your device (or associated account). Don’t let unnecessary integrations pile up because all it takes is to switch their focus to cause some trouble in your digital life.

More…

Leave a Reply