A series of privacy missteps in recent months has raised fresh concerns over the future of voice-controlled digital assistants, a growing market seen by some as the next frontier in computing.
Recent incidents involving Google, Apple and Amazon devices underscore that despite strong growth in the market for smart speakers and devices, more work is needed to reassure consumers that their data is protected when they use the technology.
Apple this week said it was suspending its “Siri grading” program, in which people listen to snippets of conversations to improve the voice recognition technology, after the Guardian newspaper reported that the contractors were hearing confidential medical information, criminal dealings and even sexual encounters. “We are committed to delivering a great Siri experience while protecting user privacy,” Apple said in a statement, adding that it would allow consumers to opt into this feature in a future software update.
Meanwhile, Google said it would pause listening to and transcribing conversations in the EU from its Google Assistant in the wake of a privacy investigation in Germany. Amazon, which also has acknowledged it uses human assistants to improve the artificial intelligence of its Alexa-powered devices, recently announced a new feature making it easier to delete all recorded data.
The recent cases may give consumers the impression that someone is “listening” to their conversations, even though such human review is in fact rare.
“From a tech perspective it’s not surprising that these companies use humans to annotate this data, because the machine is not good enough to understand everything,” said Florian Schaub, a University of Michigan professor specializing in human-computer interaction. “The problem is that people are not expecting it and it is not transparently communicated.”—Agencies