Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ instead of focusing solely on consumer applications.

According to the competition chief, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing underlying models, the data used to train those models, as well as cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cambridge researchers warn AI toys misread children’s emotions

AI toys for young children may misread emotions and respond inappropriately, according to a study by researchers at the University of Cambridge. Developmental psychologists observed interactions between children aged three to five and conversational AI-powered toys.

Findings showed the toys often struggled with pretend play and emotional cues. In several cases, children attempted to express sadness or initiate imaginative scenarios, while the AI responded with unrelated or overly scripted replies, leaving emotional signals unrecognised.

Researchers warned that such limitations could affect children’s emotional development and imaginative play. Early years practitioners also raised concerns about how toy-collected conversation data may be used and whether children could start treating the devices as trusted companions.

The study calls for stronger regulation and the introduction of safety certification for AI toys aimed at young children. Toy developer Curio stated that improving AI interactions and maintaining parental controls remain priorities as the technology continues to develop.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Biased AI suggestions shift societal attitudes

AI-powered writing tools may do more than speed up typing: they can influence the way people think. A Cornell study found that biased autocomplete suggestions can subtly shift users’ opinions on issues like the death penalty, fracking, GMOs, and voting rights.

Experiments with over 2,500 participants revealed that users’ views gravitated toward the AI’s predetermined bias. Attempts to warn participants about the AI’s bias, either before or after writing, did not prevent the shifts.

Researchers noted that the effect occurs because users effectively write biased viewpoints themselves, a process psychology research shows can alter personal attitudes.

The influence was consistent across political topics and participants of all leanings. Compared with simply providing pre-written arguments, biased AI suggestions had a stronger effect on shaping opinions.

Researchers warn that as autocomplete and generative AI tools become increasingly prevalent, covert persuasion through AI may pose serious societal risks.

The study, led by Sterling Williams-Ceci and Mor Naaman of Cornell Tech, highlights the potential for AI to shape beliefs without users noticing, and underscores the need for oversight as AI writing assistants enter everyday communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT dynamic visual explanations introduce interactive learning tools

OpenAI has introduced a new ChatGPT feature called dynamic visual explanations, allowing users to interact with mathematical and scientific concepts through real-time visuals.

Instead of relying solely on text explanations or static diagrams, the feature enables users to manipulate formulas and variables and immediately see how those changes affect results. For example, when exploring the Pythagorean theorem, users can adjust the triangle’s sides and see the hypotenuse update instantly.
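The Pythagorean example can be illustrated with a minimal sketch of the computation such an interactive module performs behind the scenes; the function name here is hypothetical, not part of any OpenAI API.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Pythagorean theorem: c = sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

# Adjusting the triangle's legs, as in the interactive visual,
# immediately changes the computed hypotenuse:
print(hypotenuse(3, 4))   # 5.0
print(hypotenuse(5, 12))  # 13.0
```

Each slider movement in the visual corresponds to re-running this calculation with new side lengths and redrawing the triangle.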

To use the tool, users can ask ChatGPT questions such as ‘What is a lens equation?’ or ‘How can I find the area of a circle?’ The chatbot responds with both a written explanation and an interactive visual module that users can manipulate directly.

The feature currently supports more than 70 topics in mathematics and science. The topics include binomial squares, Charles’ law, compound interest, Coulomb’s law, exponential decay, Hooke’s law, kinetic energy, linear equations, and Ohm’s law.

OpenAI says it plans to expand the range of topics over time. The feature is already available to all logged-in ChatGPT users. The launch marks a shift in how ChatGPT supports learning. Instead of simply providing answers, the tool now encourages users to explore underlying concepts by experimenting with interactive models.

AI tools have become increasingly common in education, although their role remains widely debated. Some educators worry that students may become overly dependent on AI tools, while others see them as valuable learning aids.

According to OpenAI, more than 140 million people use ChatGPT every week to help with subjects such as mathematics and science, which many learners find challenging. Other technology companies are also experimenting with similar tools. Google’s Gemini introduced interactive diagrams and visual explanations last year.

The new feature joins several other ChatGPT learning tools, including study mode, which guides users through problems step by step, and QuizGPT, which allows users to create flashcards and test themselves before exams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven education push reshapes Chungnam National University strategy

Chungnam National University aims to become a leading centre for AI-driven education in Korea as AI reshapes how universities teach, learn, and manage operations. University President Kim Jeong-kyoum said higher education institutions must rethink how they approach AI and prepare for the profound changes AI-driven education is expected to bring across society.

‘AI will undoubtedly bring significant changes across industries and in our daily lives,’ Kim told The Korea Times in a recent interview. ‘Universities need to approach this shift with an open mindset and be ready to accept it. I want Chungnam National University to become a university that uses AI better than anyone else.’

While acknowledging that the phrase ‘AI-leading university’ is increasingly common, Kim said the university’s real priority is integrating AI into teaching practically. The institution is considering incorporating AI-related elements into more than 30 percent of its curriculum to ensure students gain hands-on experience with the technology and support the expansion of AI-driven education across disciplines.

‘We want to teach students how to use AI effectively in practice,’ he said. ‘Professors need to use and understand AI themselves to teach it properly, and students also need systematic training on how to use these tools well.’

Beyond the classroom, the university also plans to introduce AI into administrative systems to improve campus operations. ‘Administration is often the hardest part of a university to change,’ Kim said. ‘That’s why we believe introducing AI into administrative systems first could be particularly meaningful.’

The university is also expanding research through its Glocal Lab project, which aims to strengthen Chungnam National University’s role in AI-driven pharmaceutical and biotechnology research. The initiative is expected to more directly connect academic research with industry and support the development of specialised talent, strengthening the university’s broader ambitions in AI-driven education and innovation.

Kim said, ‘Until now, there have been clear limits to translating the university’s strong basic research into applications in local industries. We expect the Glocal Lab project to help bridge that gap by connecting academic research more directly with the industrial field.’

The project will integrate AI, mathematical sciences, and pharmaceutical and biotechnology research into a unified R&D platform. ‘Ultimately, the Glocal Lab project will help the university grow into a global R&D hub,’ Kim said. ‘By creating high-quality jobs locally, it can also help curb the outflow of talented young people to the Seoul metropolitan area and foster a virtuous cycle of regional settlement and innovation.’

The university is also enhancing internationalisation efforts, aiming to increase the share of international students to 10 percent while expanding global partnerships and strengthening its global profile in AI-driven education. ‘Universities should take the lead in presenting new models in a global society,’ Kim said. ‘By doing so, these ideas can spread beyond campus and ultimately influence local industries and businesses.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!