Released by OpenAI in November, ChatGPT deploys artificial intelligence to hold remarkably humanlike conversations on complex topics, generate articles of near-publishable quality and propose edits to computer code. But the responses it spits out are not always accurate and are sometimes off base.
The chatbot, available free online, has exploded in popularity around the world in recent months. People ask the AI questions in an instant-message-like format, and it answers in full sentences and paragraphs, allowing conversation. Users have gotten the chatbot to write song lyrics, sitcom scenes and headlines.
It’s set off a race among competitors to develop AI of similar sophistication: Microsoft last month made a new AI chatbot powered by the same technology open to journalists, some of whom reported bizarre and troubling interactions.
The rapid advance of AI technologies has put a range of real-world applications within reach: ChatGPT can write convincing application essays, for instance, or help those who cannot write to compose emails. But it’s also raised a host of ethical concerns, including around plagiarism, disinformation and the effects of automation.
A group of experts and executives signed an open letter earlier this week asking companies including OpenAI, Google and Microsoft to put the brakes on training AI models, to allow time for a reckoning with the risks and to establish further rules around their use.
Italian regulators singled out privacy concerns.
ChatGPT uses algorithms to take in massive volumes of text, usually scraped from the internet. OpenAI also hired “human AI trainers” to talk to the model, to help reinforce humanlike conversation styles.
The Italian data protection agency voiced concerns about what it described as OpenAI’s lack of transparency and guardrails surrounding the use of Italian users’ data.
“There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the data privacy authority said in a news release. And while the chatbot is supposed to be reserved for users older than 13, it has no mechanism to verify this, the agency said, which “exposes children to receiving responses that are absolutely inappropriate to their age and awareness.”
If OpenAI does not notify the agency within 20 days of measures to comply with the order, it could be fined up to around $21 million “or 4% of the total worldwide annual turnover,” the statement said.
The agency is responsible for enforcing both domestic and E.U. privacy laws in Italy. The European Union has stricter privacy regulations than the United States and other countries. Lawmakers in the European Parliament have raised concerns about ChatGPT, and top E.U. institutions are expected to begin negotiating this spring the details of landmark AI legislation that could include stronger restrictions on the platform, Politico reported.
Benjamin Soloway, Pranshu Verma and Rachel Lerman contributed to this report.