Hackers could trick AI voice recognition into mishearing voices

Artificial intelligence is showing great promise in voice recognition, but a new study shows it can’t necessarily be trusted: researchers successfully tricked an AI into mishearing a recording without altering how it sounds to humans.

AI researchers at Facebook and Israel’s Bar-Ilan University have hit upon a method that causes AI to mishear recordings that sound unchanged to human listeners. New Scientist reports that the algorithm, dubbed “Houdini” by the team that created it, overlays audio recordings with a layer of noise that’s imperceptible to humans.
The subtle change to the file’s soundscape is enough to throw off current AI recognition systems, yet the added noise is so low-level that humans don’t perceive it. This was confirmed by hearing tests in which listeners judged the original and modified clips to be identical.
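To see how a near-inaudible overlay can derail a recogniser, here is a minimal toy sketch of the general idea behind adversarial audio perturbations. It is not the paper’s Houdini algorithm (which targets deep speech models and their task losses); the linear “recogniser” and all names below are hypothetical stand-ins chosen to show that a perturbation far smaller than the signal can flip a model’s output.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16000                              # one second of "audio" at 16 kHz
w = rng.normal(size=n)                 # toy linear recogniser's weights
audio = rng.normal(scale=0.1, size=n)  # the original clip

def predict(x):
    """Stand-in recogniser: the sign of a linear score picks the 'word'."""
    return int(w @ x > 0)

# Smallest per-sample nudge (along the gradient sign, FGSM-style) that
# crosses the decision boundary -- the "noise" overlaid on the clip.
score = w @ audio
eps = 1.01 * abs(score) / np.abs(w).sum()
noise = -np.sign(score) * eps * np.sign(w)
hijacked = audio + noise

flipped = predict(hijacked) != predict(audio)
ratio = eps / np.sqrt(np.mean(audio ** 2))  # noise amplitude vs. signal RMS
print(flipped, round(ratio, 4))
```

In this sketch the classifier’s answer flips even though the per-sample perturbation is well under one percent of the signal’s RMS amplitude; real attacks exploit the same asymmetry between a model’s decision boundary and human perception.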

In a test of the algorithm, an audio clip was recorded and given to the AI for analysis. Later, the same clip was “hijacked” by Houdini. The new clip was indistinguishable from the old one to humans but generated a distinctly different response when processed by the AI.
