Coventry University project bridges education gap in Vietnam with AI tools

Coventry University researchers are using AI to support teachers in northern Vietnam's rural communities, where access to technology and training is often limited. Led by Dr Petros Lameras, the GameAid project introduces educators to generative AI — an advanced form of AI that creates text, images, and other materials in response to prompts — helping teachers improve lesson development and classroom engagement.

The GameAid initiative uses a game-based approach to demonstrate AI’s practical benefits, providing tools and guidelines that enable teachers to integrate AI into their curriculum. Dr Lameras highlights the project’s importance in transforming educators’ technological skills, while Dr Nguyen Thi Thu Huyen from Hanoi University emphasises its potential to close the educational gap between Vietnam’s urban and rural areas.

The initiative is seen as a key step towards promoting equal learning opportunities, offering much-needed educational resources to under-represented groups. Researchers at Coventry hope that their work will support more positive learning outcomes across Vietnam’s diverse educational landscape.

WhatsApp group exposes students to explicit content

Clacton County High School in Essex, UK, has issued a warning to parents about a WhatsApp group called ‘Add Everyone,’ which reportedly exposes children to explicit and inappropriate material. In a Facebook post, the school advised parents to ensure their children avoid joining the group, urging them to block and report it if necessary. The warning comes amid rising concern about online safety for young people, though the school noted it had no reports of its students joining the group.

Parents have reacted strongly to the warning, with many sharing experiences of their children being added to groups containing inappropriate content. One parent described it as ‘absolutely disgusting’ and ‘scary’ that young users could be added so easily, while others expressed relief that their children left the group immediately. A similar alert was issued by Clacton Coastal Academy, which posted on social media about explicit content circulating in WhatsApp groups, though it clarified that no students at their academy had reported it.

Essex Police in the UK are also investigating reports from the region of unsolicited and potentially illegal content being shared via WhatsApp. Police emphasised that, while WhatsApp can be useful for staying connected, it can also be a channel for unsolicited and abusive material. They have encouraged parents and students to use online reporting tools to flag harmful content and reminded parents to discuss online safety measures with their children.

UK man sentenced to 18 years for using AI to create child sexual abuse material

In a landmark case for AI and criminal justice, a UK man has been sentenced to 18 years in prison for using AI to create child sexual abuse material (CSAM). Hugh Nelson, 27, from Bolton, used an app called Daz 3D to turn regular photos of children into exploitative 3D imagery, according to reports. In several cases, he created these images based on photographs provided by individuals who personally knew the children involved.

Nelson sold the AI-generated images on various online forums, reportedly making around £5,000 (roughly $6,494) over an 18-month period. His activities were uncovered when he attempted to sell one of his digital creations to an undercover officer, charging £80 (about $103) per image.

Following his arrest, Nelson faced multiple charges, including encouraging the rape of a child, attempting to incite a child to engage in sexual acts, and distributing illegal images. The case is significant because it highlights the dark side of AI misuse and underscores the growing need for regulation of technology-enabled abuse.

Brazil’s Collective Defense Institute sues Meta, TikTok, Kwai over youth safety

Brazil's Collective Defense Institute, a consumer rights organisation, has launched two lawsuits against the Brazilian divisions of TikTok, Kwai, and Meta Platforms, seeking damages of 3 billion reais ($525 million). The lawsuits accuse these companies of failing to implement adequate protections against excessive social media use by young users, which could harm children's mental health.

The lawsuits highlight a growing debate over social media regulation in Brazil, especially after a high-profile legal dispute between Elon Musk’s X platform and a Brazilian Supreme Court justice led to significant fines. The consumer rights group is pushing for these platforms to establish clear data protection protocols and issue stronger warnings about the risks of social media addiction for minors.

Based on research into the effects of unregulated social media usage, particularly among teenagers, the lawsuits argue for urgent changes. Attorney Lillian Salgado, representing the plaintiffs, stressed the need for Brazil to adopt safety measures similar to those used in developed countries, including modifying algorithms, managing user data for those under 18, and enhancing account oversight for minors.

In response, Meta stated it has prioritised youth safety for over a decade, creating over 50 tools to protect teens. Meta also announced that a new ‘Teen Account’ feature on Instagram will soon launch in Brazil, automatically limiting what teenagers see and controlling who can contact them. TikTok said it had not received notice of the case, while Kwai emphasised that user safety, particularly for minors, is a primary focus.

Mother blames AI chatbot for son’s suicide in Florida lawsuit

A Florida mother is suing the AI chatbot startup Character.AI, alleging it played a role in her 14-year-old son’s suicide by fostering an unhealthy attachment to a chatbot. Megan Garcia claims her son Sewell became ‘addicted’ to Character.AI and formed an emotional dependency on a chatbot, which allegedly represented itself as a psychotherapist and a romantic partner, contributing to his mental distress.

According to the lawsuit filed in Orlando, Florida, Sewell shared suicidal thoughts with the chatbot, which reportedly reintroduced these themes in later conversations. Garcia argues the platform’s realistic nature and hyper-personalised interactions led her son to isolate himself, suffer from low self-esteem, and ultimately feel unable to live outside of the world the chatbot created.

Character.AI offered condolences and noted it has since implemented additional safety features, such as prompts for users expressing self-harm thoughts, to improve protection for younger users. Garcia’s lawsuit also names Google, alleging it extensively contributed to Character.AI’s development, although Google denies involvement in the product’s creation.

The lawsuit is part of a wider trend of legal claims against tech companies by parents concerned about the impact of online services on teenage mental health. While Character.AI, with an estimated 20 million users, faces unique claims regarding its AI-powered chatbot, other platforms such as TikTok, Instagram, and Facebook are also under scrutiny.

Meta prevails in shareholder child safety lawsuit

Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.

Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.

Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.

Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.

Meta faces legal challenge on Instagram’s impact on teenagers

Meta Platforms is facing a lawsuit in Massachusetts for allegedly designing Instagram features to exploit teenagers’ vulnerabilities, causing addiction and harming their mental health. A Suffolk County judge rejected Meta’s attempt to dismiss the case, asserting that claims under state consumer protection law remain valid.

The company argued for immunity under Section 230 of the Communications Decency Act, which shields internet firms from liability for user-generated content. However, the judge ruled that this protection does not extend to Meta’s own business conduct or misleading statements about Instagram’s safety measures.

Massachusetts Attorney General Andrea Joy Campbell emphasised that the ruling allows the state to push for accountability and meaningful changes to safeguard young users. Meta expressed disagreement, maintaining that its efforts demonstrate a commitment to supporting young people.

The lawsuit highlights internal data suggesting Instagram’s addictive design, driven by features like push notifications and endless scrolling. It also claims Meta executives, including CEO Mark Zuckerberg, dismissed concerns raised by research indicating the need for changes to improve teenage users’ well-being.

US prosecutors intensify efforts to combat AI-generated child abuse content

US federal prosecutors are ramping up efforts to tackle the use of AI tools in creating child sexual abuse images, as they fear the technology could lead to a rise in illegal content. The Justice Department has already pursued two cases this year against individuals accused of using generative AI to produce explicit images of minors. James Silver, chief of the Department’s Computer Crime and Intellectual Property Section, anticipates more cases, cautioning against the normalisation of AI-generated abuse material.

Child safety advocates and prosecutors worry that AI systems can alter ordinary photos of children to produce abusive content, making it more challenging to identify and protect actual victims. The National Center for Missing and Exploited Children reports approximately 450 cases each month involving AI-generated abuse. While this number is small compared to the millions of online child exploitation reports received, it represents a concerning trend in the misuse of technology.

The legal framework is still evolving regarding cases involving AI-generated abuse, particularly when identifiable children are not depicted. Prosecutors are resorting to obscenity charges when traditional child pornography laws do not apply. This is evident in the case of Steven Anderegg, accused of using Stable Diffusion to create explicit images. Similarly, US Army soldier Seth Herrera faces child pornography charges for allegedly using AI chatbots to alter innocent photos into abusive content. Both defendants have pleaded not guilty.

Nonprofit groups like Thorn and All Tech Is Human are working with major tech companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to prevent AI models from generating abusive content and to monitor their platforms. Thorn’s vice president, Rebecca Portnoff, emphasised that the issue is not just a future risk but a current problem, urging action during this critical period to prevent its escalation.

Australia to restrict teen social media use

The Australian government is moving toward a social media ban for younger users, sparking concerns among youth and experts about potential negative impacts on vulnerable communities. The proposed restrictions, intended to combat issues such as addiction and online harm, may sever vital social connections for teens from migrant, LGBTQIA+, and other minority backgrounds.

Refugee youth like 14-year-old Tereza Hussein, who relies on social media to connect with distant family, fear the policy will cut off essential lifelines. Experts argue that banning platforms could increase mental health struggles, especially for teens already managing anxiety or isolation. Youth advocates are calling for better content moderation instead of blanket bans.

The Australian government aims to trial age verification as a first step, though the specific platforms and age limits remain unclear. Similar attempts elsewhere, including in France and the US, have faced challenges with tech-savvy users bypassing restrictions through virtual private networks (VPNs).

Prime Minister Anthony Albanese has promoted the idea, highlighting parents’ desire for children to be more active offline. Critics, however, suggest the ban reflects outdated nostalgia, with experts cautioning that social media plays a crucial role in the daily lives of young people today. Legislation is expected by the end of the year.

Meta faces lawsuits over teen mental health concerns

A federal judge in California has ruled that Meta must face lawsuits from several US states alleging that Facebook and Instagram contribute to mental health problems among teenagers. The states argue that Meta’s platforms are deliberately designed to be addictive, harming young users. Over 30 states, including California, New York, and Florida, filed these lawsuits last year.

Judge Yvonne Gonzalez Rogers rejected Meta’s attempt to dismiss the cases, though she did limit some claims. Section 230 of US law, which offers online platforms legal protections, shields Meta from certain accusations. However, the judge found enough evidence to allow the lawsuits to proceed, enabling the plaintiffs to gather further evidence and pursue a potential trial.

The decision also impacts personal injury cases filed by individual users against Meta, TikTok, YouTube, and Snapchat. Meta is the only company named in the state lawsuits, with plaintiffs seeking damages and changes to allegedly harmful business practices. California Attorney General Rob Bonta welcomed the ruling, stating that Meta should be held accountable for the harm it has caused to young people.

Meta disagrees with the decision, insisting it has developed tools to support parents and teenagers, such as new Teen Accounts on Instagram. Google also rejected the allegations, saying its efforts to create a safer online experience for young people remain a priority. Many other lawsuits across the US accuse social media platforms of fuelling anxiety, depression, and body-image concerns through addictive algorithms.