Alexa and Siri can hear this hidden command. You can't.

Researchers in China and the United States have discovered that Apple's Siri, Amazon's Alexa, and Google's Assistant can be controlled by hidden commands undetectable to the human ear, The New York Times is reporting.

The researchers say criminals could exploit the technology to unlock doors, wire money or buy stuff online, simply with music playing over the radio.

The worrying implications were most recently highlighted in a research paper published this month by students from the University of California, Berkeley, and Georgetown University. When the doctored tracks are played near an Amazon Echo or an iPhone, nearby listeners hear only the ordinary audio, while the device registers the hidden command.

These deceptions illustrate that, even as artificial intelligence makes great strides, it can still be tricked and manipulated. Speech-recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases.
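To make that sound-to-letter idea concrete, here is a toy sketch: imagine a recognizer that guesses one character per short audio frame, then collapses repeated characters and "blank" frames into a transcript. The frame labels below are invented for illustration, not taken from any real model.

```python
def collapse_frames(frame_chars, blank="_"):
    """Collapse per-frame character guesses into a transcript (CTC-style)."""
    out, prev = [], None
    for ch in frame_chars:
        # Keep a character only when it changes and is not the blank marker.
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)

print(collapse_frames("hh_ee_ll_ll_oo"))  # -> hello
```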

Researchers at Berkeley said they could make slight alterations to audio files "to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being almost undetectable to the human ear". As the Times points out, phones and speakers with digital assistants are expected to outnumber people by 2021, and more than half of American households will have at least one smart speaker by then.
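The alteration the researchers describe is, at heart, an optimization problem: find the smallest perturbation to a waveform that steers the machine's transcription toward an attacker-chosen phrase. The sketch below illustrates that idea only under stated assumptions; `model` stands in for any differentiable speech-to-text network that emits per-frame character probabilities, and none of this reproduces the paper's actual code.

```python
import torch
import torch.nn.functional as F

def craft_hidden_command(model, audio, target_ids, steps=500, lr=1e-3, penalty=0.05):
    """Search for a quiet perturbation `delta` so that audio + delta
    is transcribed as `target_ids` by the (assumed) model."""
    delta = torch.zeros_like(audio, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    targets = target_ids.unsqueeze(0)                    # shape (1, target_len)
    for _ in range(steps):
        adversarial = (audio + delta).clamp(-1.0, 1.0)   # keep a valid waveform
        log_probs = model(adversarial).log_softmax(-1)   # shape (time, 1, vocab)
        input_lens = torch.tensor([log_probs.size(0)])
        target_lens = torch.tensor([targets.size(1)])
        # Steer the transcription toward the attacker's phrase while
        # penalizing audible distortion (the size of the perturbation).
        loss = F.ctc_loss(log_probs, targets, input_lens, target_lens) \
               + penalty * delta.abs().max()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (audio + delta).detach()
```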

According to the Times, both Amazon and Google have taken measures to protect their smart speakers and voice assistants against such manipulation. Google said security is a continuing focus and that its Assistant includes features to mitigate undetectable audio commands. Individual voice recognition is one such safeguard: if a smart speaker is calibrated to respond only to its owner's voice, a silent command should, in theory, have no effect. Attackers have worked around the user as well; one method, called DolphinAttack, even muted the target phone before issuing inaudible commands, so the owner would not hear the device's responses.
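Here is a minimal sketch of that voice-match safeguard, assuming a hypothetical speaker model that maps a voice sample to an embedding vector: a sensitive command runs only when the speaker's embedding is close enough to the enrolled owner's.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_execute(command_embedding, owner_embedding, threshold=0.75):
    """Run a sensitive command only if the speaker sounds like the owner.

    Both arguments are voice embeddings from some (assumed) speaker model;
    the 0.75 threshold is an illustrative placeholder.
    """
    return cosine_similarity(command_embedding, owner_embedding) >= threshold
```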

Similar techniques have been demonstrated using ultrasonic frequencies. While the commands couldn't penetrate walls, they could control smart devices through open windows from outside a building.
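The ultrasonic trick rests on simple signal processing: amplitude-modulate a spoken command onto a carrier above the range of human hearing, and the nonlinearity of the device's microphone demodulates it back into the audible band. The sketch below shows only the modulation step; the sample rate and 25 kHz carrier are illustrative assumptions.

```python
import numpy as np

def modulate_ultrasonic(command: np.ndarray, sample_rate=96_000, carrier_hz=25_000):
    """Amplitude-modulate a command waveform onto an inaudible carrier.

    `command` is assumed to be a mono waveform scaled to [-1, 1].
    """
    t = np.arange(len(command)) / sample_rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Classic AM: the command's spectrum is shifted up around the carrier,
    # leaving nothing in the audible band until the microphone's own
    # nonlinearity demodulates it.
    return (1.0 + command) * carrier
```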

These commands can be embedded within a song, spoken text or other audio recordings.

Testing against Mozilla's open-source DeepSpeech voice-recognition implementation, Berkeley researchers Nicholas Carlini and David Wagner achieved a 100 percent success rate without having to resort to the large amounts of distortion that marked past attempts at audio attacks. By inserting a smart-speaker command into white noise, the researchers essentially camouflaged the commands from human listeners.
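As a hypothetical harness, an attacker could verify such a clip simply by running it through DeepSpeech and comparing the output with the intended phrase. The model path below is a placeholder, and constructor details vary across DeepSpeech releases.

```python
import numpy as np
import deepspeech

# Placeholder path to a pretrained DeepSpeech acoustic model.
model = deepspeech.Model("deepspeech-model.pbmm")

def transcribes_as(audio_int16: np.ndarray, target: str) -> bool:
    """Return True if DeepSpeech hears `target` in 16 kHz mono int16 audio."""
    return model.stt(audio_int16) == target.lower()
```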
