Google's AI Assistant Is a Reminder that Privacy and Security Are Not the Same

Earlier this month, Google unveiled remarkable new capabilities for its automated assistant. They're based on Google's growing expertise in artificial intelligence (AI).
An AI that sounds human compromises both privacy and security. Although they're often bundled together, privacy and security are different things.

Privacy includes the right to be left alone. AI callers violate that right because of their potential to intrude. Privacy concerns also arise when information is used out of context: for gossip, price discrimination, or targeted advertising, for instance. AI callers that sound human may violate privacy because they can fool people into believing the context of the call is person-to-person when it is actually person-to-machine.

However, when people talk about privacy concerns, they're often really concerned about security. The issue isn't targeted advertising or gossip; it's theft and safety. Security concerns arise when information is extracted and then used illegally. The most obvious example is identity theft. Imagine an AI caller that can impersonate your voice. You might want that as part of a service you control, but someone else could replicate it to fool others into believing they are talking to you.

The problem is that improving security may not help privacy, and vice versa.