ChatGPT maker OpenAI sued for allegedly using "stolen private information"

OpenAI, the artificial intelligence firm behind ChatGPT, went from a non-profit research lab to a company that is unlawfully stealing millions of users' private information to train its tools, according to a new lawsuit that calls on the organization to compensate those users.

OpenAI developed its AI products, including the chatbot ChatGPT, the image generator Dall-E and others, using "stolen private information, including personally identifiable information" from hundreds of millions of internet users, the 157-page lawsuit, filed in the Northern District of California on Wednesday, alleges.

The lawsuit, filed by a group of individuals identified only by their initials, professions or the ways in which they've engaged with OpenAI's tools, goes so far as to accuse OpenAI of posing a "potentially catastrophic risk to humanity." 

While artificial intelligence can be used for good, the suit claims OpenAI chose "to pursue profit at the expense of privacy, security, and ethics" and "doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations, medical data, information about children — essentially every piece of data exchanged on the internet it could take, without notice to the owners or users of such data, much less with anyone's permission."

"Without this unprecedented theft of private and copyrighted information belonging to real people, communicated to unique communities, for specific purposes, targeting specific audiences, [OpenAI's] Products would not be the multi-billion-dollar business they are today," the suit claims.

The information OpenAI is accused of stealing includes all inputs into its AI tools, such as the prompts people feed ChatGPT; users' account information, including their names, contact details and login credentials; their payment information; data pulled from users' browsers, including their physical locations; their chat and search data; keystroke data and more.

Microsoft, an OpenAI partner also named in the suit, declined to comment. OpenAI did not immediately respond to CBS MoneyWatch's request for comment. 

The suit claims OpenAI rushed its products to market without implementing safeguards to mitigate the potential harm the tools could cause to humans. As a result, it alleges, those tools now pose risks to humanity and could even "eliminate the human species as a threat to its goals."

What's more, the defendants now have enough information to "create our digital clones, including the ability to replicate our voice and likeness," the lawsuit alleges. 

In short, the tools have become too powerful, given that they could even "encourage our own professional obsolescence."

The suit calls on OpenAI to open the "black box" and be transparent about the data it collects. Plaintiffs are also seeking compensation from OpenAI for "the stolen data on which the products depend" and the ability for users to opt out of data collection when using OpenAI tools. 
