Google and Apple have suspended human review of audio recorded by voice assistants for speech-recognition improvement. The reason: a growing number of incidents in which employees listening to private conversations misused the personal information they obtained.
There is always a risk that such recordings leak, and leaks have already happened several times. For example, not long ago Google Assistant, which transcribes voice commands, sent files to the wrong recipients. Amazon employees told Bloomberg that they share funny user audio among themselves. At any moment they could leak recordings for money or to damage the company.
Voice assistants collect information about users, who cannot always control the process. Developers admit that devices can “hear” users all the time in order to recognise an activation command. Samsung has warned customers that personal information spoken near a smart device can be recorded and transferred to a third party, and that employees may filter commands “manually”. Google’s Terms of Use, like those of other major services, inform customers of the same.
Developers insist that employees work only with fully depersonalised data: recordings are assigned random numbers with no link to user accounts. Google and Apple claim that users’ voices are distorted before a request is processed, which hinders identification and makes the data useless to outsiders. But how thoroughly depersonalisation is performed remains at the developer’s discretion. We also do not know who besides employees has access to the data, or whether it can be misused. Voice assistants’ recordings have even served as evidence: during one investigation, US police listened to recordings made by an Amazon Echo found at a crime scene.
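The depersonalisation step described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual pipeline: the `depersonalise` function and the record fields are hypothetical, standing in for the idea of replacing an account link with a random identifier before a clip reaches human reviewers.

```python
import uuid

def depersonalise(record: dict) -> dict:
    """Return a copy of a voice record with account identifiers removed.

    The record is assigned a random ID (not derived from the account),
    so reviewers can reference the clip without knowing whose it is.
    """
    return {
        "record_id": str(uuid.uuid4()),      # random number, no link to the account
        "audio": record["audio"],            # the voice clip itself (possibly distorted)
        "transcript": record.get("transcript"),
    }

original = {"account_id": "user-42", "audio": b"...", "transcript": "turn on the lights"}
anonymous = depersonalise(original)

print("account_id" in anonymous)  # False: nothing here links back to the user
```

As the article notes, the weak point is exactly what this sketch hides: whether the mapping from `record_id` back to the account is truly discarded, or merely stored elsewhere, is entirely at the developer’s discretion.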
That is why the decision facing Google, Apple and Amazon looks like an important administrative measure, and it would be welcome if the companies adopted it.
Insider leaks aside, breaches also happen because of technical errors. Almost all voice assistants transfer user requests as audio files over HTTPS, and sometimes over plain HTTP. Tools for intercepting and decrypting even encrypted web traffic already exist and are readily available, and companies cannot guarantee data safety while it is in transit from a device to a server.
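The HTTPS-versus-HTTP distinction above can be made concrete with a small client-side check. This is a hedged sketch, not any assistant’s real code: the URL and the `safe_to_upload` helper are invented for illustration, and the point is only that a plain-HTTP endpoint sends the audio in cleartext.

```python
from urllib.parse import urlparse

def safe_to_upload(url: str) -> bool:
    """Allow audio uploads only to HTTPS endpoints.

    Over plain HTTP the audio file crosses the network unencrypted,
    so anyone on the path can capture and replay the recording.
    """
    return urlparse(url).scheme == "https"

print(safe_to_upload("https://api.example.com/voice"))  # True
print(safe_to_upload("http://api.example.com/voice"))   # False
```

Even with such a check in place, the article’s caveat stands: TLS interception tooling exists, so transport encryption narrows the risk but does not eliminate it.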