Consumer organizations call for regulatory action on Generative AI
Consumer organizations are calling for action on generative AI models like ChatGPT. The report highlights issues such as inaccuracy and manipulation.
Consumer organizations from 13 European countries and the United States have released a report urging regulators to take action on generative AI models like ChatGPT. The organizations express concern about the potential for deception, manipulation, and harm associated with these systems, including the spread of disinformation, the reinforcement of biases, and fraudulent activity. They call on European safety, data protection, and consumer authorities to examine how existing laws apply to these AI systems.
The lack of accountability and transparency among major technology companies is a significant concern for consumer organizations, as it hampers understanding of how these companies collect data and make decisions. The report also highlights issues such as inaccuracy, manipulation, and the generation of misleading content by these AI models.
The European Parliament’s position on the AI Act includes stringent requirements for foundation models and generative AI regarding risk management, data governance, and system reliability. Discussions scheduled for July 18th will consider extending the AI rulebook to cover foundation models and generative AI.