Google and Apple halt employee review of audio recorded by voice assistants
07.08.2019
Google and Apple have suspended the human review of audio recorded by their voice assistants to improve speech recognition. The reason is a growing number of incidents in which employees misused personal information they obtained while listening to private conversations.
There is always a risk that recordings will leak, and such leaks have already happened several times. Not long ago, for example, Google Assistant contractors who transcribe voice commands "misaddressed" files. Amazon employees told Bloomberg that they share users' funny recordings with one another. At any moment, they could leak the recordings to earn some money or to damage the company.
Developers assure us that employees work only with fully de-identified data: recordings are assigned random identifiers with no link to user accounts. Google and Apple also claim that voices in the recordings are distorted before the requests are processed. This hinders identifying the user, and without identification the data becomes useless. But how thoroughly de-identification is performed is left to each developer's discretion. Nor do we know who, besides employees, has access to the data, or whether it can be misused. There have been cases where voice assistant recordings served as evidence: during one investigation, US police listened to recordings made by an Amazon Echo located at a crime scene.
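The de-identification step described above can be sketched roughly as follows. This is a minimal illustration, not any company's actual pipeline: the function name, record fields, and identifier scheme are all assumptions. The idea is that the copy given to reviewers carries a random identifier instead of the account ID, so nothing in the stored record links back to the user account.

```python
import uuid

def depersonalise(record):
    """Return a review copy of `record` stripped of account linkage.

    Hypothetical sketch: the original `account_id` is dropped and a
    random UUID is assigned instead, with no mapping kept between them.
    """
    return {
        "review_id": str(uuid.uuid4()),  # random ID, unrelated to the account
        "audio": record["audio"],        # note: the voice itself may still identify the speaker
    }

original = {"account_id": "user-42", "audio": b"...opus frames..."}
review_copy = depersonalise(original)
assert "account_id" not in review_copy
```

As the article notes, this kind of pseudonymisation is only as strong as its implementation: the audio itself can still reveal who is speaking, which is why the companies also claim to distort voices before processing.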
That is why such a suspension appears to be an important administrative measure for Google, Apple and Amazon alike, and it would be good if all three companies introduced it.
Besides insider leaks, however, there are also breaches caused by technical errors. Almost all voice assistants transmit user requests as audio files over HTTPS, and sometimes over plain HTTP. Technologies for intercepting and decrypting even secure web traffic already exist and are quite accessible, so companies cannot guarantee the safety of the data while it travels from a device to a server.
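The transport difference mentioned above can be sketched in a few lines. The endpoint URL below is a made-up placeholder, not a real assistant API; the point is only that an `https://` upload is encrypted in transit (and the server certificate is verified by default), while the identical request over `http://` travels as cleartext that any on-path observer can read.

```python
import urllib.request

AUDIO = b"...recorded voice request..."

# Encrypted in transit; urllib verifies the server certificate
# by default for https:// URLs (hypothetical endpoint).
secure = urllib.request.Request(
    "https://assistant.example.com/v1/recognize",
    data=AUDIO,
    headers={"Content-Type": "audio/ogg"},
)

# Byte-for-byte the same request, but sent as plaintext: anyone
# between the device and the server can read AUDIO verbatim.
insecure = urllib.request.Request(
    "http://assistant.example.com/v1/recognize",
    data=AUDIO,
    headers={"Content-Type": "audio/ogg"},
)
```

Even the HTTPS case only protects the data in transit; as the article argues, it says nothing about what happens to the recording once it reaches the server.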