As large language model AI systems continue to proliferate, the US government is looking to make sure that generative AI isn’t violating consumer privacy laws. The Federal Trade Commission (FTC) recently launched an investigation into OpenAI, the maker of both ChatGPT and DALL-E, as it’s reportedly “concerned” that OpenAI may be running afoul of those laws by putting personal data and personal reputations at risk.
The FTC’s probe into OpenAI was first reported by The Washington Post, which revealed that the agency sent OpenAI a 20-page document demanding records of how it has addressed “risks” — like a recent bug that leaked ChatGPT users’ payment and chat history — posed by its AI models. Beyond those risks, the FTC is also asking OpenAI for information on complaints about its AI making “false or malicious” statements about individuals, as well as evidence of how well users understand the product they’re using and its potential to generate false or misleading information.
This probe marks the first real regulatory challenge to AI’s future application, and arrives as OpenAI and other companies actively attempt to put themselves in the good graces of regulators and the government: OpenAI CEO Sam Altman willingly testified about the application of AI before the US Congress in May, and even met with President Joe Biden and Vice President Kamala Harris.
Engadget asked both the FTC and OpenAI for comment — the former declined, as is its policy with open investigations, and, at the time of writing, the latter had not responded.
Elsewhere in the world of tech, Elon Musk has announced the creation of xAI, an AI-focused startup.