The country’s privacy watchdog, known as Garante, said its investigation showed that OpenAI processed users’ personal data to train ChatGPT “without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users”.
OpenAI called the decision “disproportionate” and said it would appeal.
“When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later,” an OpenAI spokesperson said Friday in an emailed statement. “They’ve since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period.”
OpenAI added, however, that it remained “committed to working with privacy authorities worldwide to offer beneficial AI that respects privacy rights.”
The investigation, launched last year, also found that OpenAI didn’t provide an “adequate age verification system” to prevent users under 13 years of age from being exposed to inappropriate AI-generated content, the watchdog said.
The Italian authority also ordered OpenAI to run a six-month campaign across Italian media outlets to raise public awareness about ChatGPT, particularly with regard to its data collection.
The booming popularity of generative artificial intelligence systems like ChatGPT has drawn scrutiny from regulators on both sides of the Atlantic.
Regulators in the US and Europe have been examining OpenAI and other companies that have played a key part in the AI boom, while governments around the world have been drawing up rules to protect against risks posed by AI systems, led by the European Union’s AI Act, a comprehensive rulebook for artificial intelligence.
(Edited by: Priyanka Deshpande)