ChatGPT got the early press, and every day we learn of new generative artificial intelligence products that can produce creative visual and text responses to human input. Following ChatGPT's rise to fame, Google's Bard and Microsoft's Bing are now grabbing some of the spotlight, but these are merely a few of the hundreds, if not thousands, of generative AI products currently available or in development; there is no question that generative AI is here to stay. Indeed, social media and other platform companies are already integrating AI into their services: TikTok (using AI to create or add effects to images), Instacart (to create shopping lists and answer food questions), and Shopify (to generate product descriptions), to name a few.
This innovative technology raises many questions, including some critical issues concerning privacy. While only time will tell the full extent of the privacy issues, some concerns are already clear.
- The California Consumer Privacy Act (CCPA) gives individuals the right to understand, and the right to opt out of, automated decision-making technologies, which would include AI. Companies will need to monitor the reach of their AI tools to ensure they respect consumer choices. As with most issues relating to privacy, it will be important for companies to have properly marshaled their data to enable this level of control.
- AI could merge the information of two similarly situated people and mistakenly send materials, offers, or other communications to the wrong person; if that person has not opted in to such offers, this could amount to a privacy law violation. Again, control and limitation of data access will be key.
- While anonymized data does not qualify as personally identifiable information (PII) under the CCPA, AI could identify an individual from otherwise anonymized characteristics. Studies have shown that even very coarse data can be reidentified using AI with more than 95% accuracy. Companies can avoid purposeful reidentification, but it will take additional controls to prevent AI from reidentifying data on its own.
- AI can infer additional PII even from basic information, which can easily leave the company holding more PII than it requested or was granted permission to collect.
- AI could use data in a given database for purposes other than those for which consent has been given. Again, the control of data and boundaries will be key to avoiding many of the privacy traps.
- AI could collect data from individuals who have not consented to such collection, resulting in the company's collection of that data as well.
The key to avoiding such privacy violations is one that most companies collecting consumer data on a large scale are already familiar with: the need to have strict control over the data, including the ability to set boundaries on access. Now more than ever, companies collecting personal information from their customers will need to practice good data hygiene to ensure AI cannot de-anonymize, use without authorization, or collect without consent the data of their customers or third parties. Additionally, giving users straightforward, easily accessed control over their data, and being transparent about the use of AI in connection with that data, will build consumer confidence. Finally, ensuring that AI is trained on appropriate, broad, and inclusive data will help ensure that its automated decisions and inferences avoid bias.
We appear to be at a tipping point with AI: it is not a question of whether, but when, your company will implement this exciting and powerful technology in its products and services. When doing so, companies should be mindful of the technology's potential privacy pitfalls and take steps from the outset to address, among other things, the attendant privacy implications.