In a first for a Western country, Italy has temporarily banned OpenAI’s ChatGPT chatbot over the application’s alleged breach of privacy rules. The country’s Data Protection Authority, known as the Garante, claimed that the AI application failed to verify the ages of its users, who are required to be at least 13 years old. The agency also noted that ChatGPT collected and stored personal data without any legal justification. OpenAI has 20 days to respond with remedies or risk being fined up to 4% of its worldwide annual turnover or €20m ($21.68m). The company has already taken the bot offline for Italian users; the service is also unavailable in mainland China, Hong Kong, Iran, and Russia, among other places.
ChatGPT, which has reached over 100 million monthly active users since its launch last year, has set off a tech craze, prompting competitors to launch similar products and companies to integrate the technology into their own offerings. The chatbot’s rapid development has also caught the attention of lawmakers in several countries, with many experts calling for new regulations to govern AI, given its potential impact on national security, jobs, and education.
The lack of transparency surrounding AI training methods is another significant concern. OpenAI has taken ChatGPT offline for Italian users but has not provided details on how it trains its AI model, a gap that Johanna Björklund, an AI researcher and associate professor at Umeå University in Sweden, called a real problem. “If you do AI research, you should be very transparent about how you do it,” she said.
Meanwhile, the European Commission, which is debating the EU AI Act, is unlikely to ban AI outright. Margrethe Vestager, the Commission’s Executive Vice President, tweeted: “No matter which #tech we use, we have to continue to advance our freedoms & protect our rights. That’s why we don’t regulate #AI technologies, we regulate the uses of #AI.” The focus, in other words, is on regulating how AI is used, particularly with regard to data protection rules.
In a related development, Elon Musk and a group of AI experts and industry executives on Wednesday called for a six-month pause in developing AI systems more powerful than OpenAI’s newly launched GPT-4, citing potential risks to society. As the technology continues to advance, it is essential that policymakers, industry leaders, and researchers work together to ensure that AI is used ethically and responsibly.