EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered applications for nudity are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the AI chatbot owned by xAI, which was found to generate manipulated intimate images of women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.


Grok faces investigation over deepfake abuse claims

California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.

Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.

Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.

‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.

The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.


Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection rather than a tool that genuinely supports learning and wellbeing.


X restricts Grok image editing after global backlash

Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.

The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.

UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.

Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.

International pressure has continued to build, with some countries banning Grok entirely rather than waiting for platform-led restrictions.

Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.


Winnipeg schools embrace AI as classroom learning tool

At General Wolfe School and other Winnipeg classrooms, students are using AI tools to help with tasks such as translating language and understanding complex terms, with teachers guiding them on how to verify AI-generated information against reliable sources.

Teachers are cautious but optimistic, developing a flexible framework that prioritises critical thinking and human judgement alongside AI use, rather than imposing rigid policies as the technology evolves.

Educators in the Winnipeg School Division are adapting teaching methods to incorporate AI while discouraging over-reliance, stressing that students should use AI as an aid rather than a substitute for learning.

This reflects broader discussions in education about how to balance innovation with foundational skills as AI becomes more commonplace in school environments.


France weighs social media ban for under 15s

France’s health watchdog has warned that social media harms adolescent mental health, particularly among younger girls. The assessment is based on a five-year scientific review of existing research.

ANSES said online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards. Experts found that girls, LGBT youths and vulnerable teens face higher psychological risks.

France is debating legislation to ban social media access for children under 15. President Emmanuel Macron supports stronger age restrictions and platform accountability.

The watchdog urged changes to algorithms and default settings to prioritise child well-being. Similar debates have emerged globally following Australia’s introduction of a teenage platform ban.


AI deepfake abuse drives rise in victim support cases

Concern is growing in Guernsey, a British Crown Dependency, over the number of people seeking help after becoming victims of AI-generated intimate deepfakes. Support services report a steady increase in cases.

Existing law criminalises sharing intimate images without consent, but creating such images with AI remains legal. Proposed reforms aim to close this gap and strengthen victim protection.

Police and support charities warn that deepfakes cause severe emotional harm and are challenging to prosecute. Cross-border platforms and anonymous perpetrators complicate enforcement and reporting.


UAE faces faster cyber threats powered by AI

Rising use of AI is transforming cyberattacks in the UAE, enabling deepfakes, automated phishing and rapid data theft. Expanding digital services increase exposure for businesses and residents.

Criminals deploy autonomous AI tools to scan networks, exploit weaknesses and steal information faster than humans. Shorter detection windows raise risks of breaches, disruption and financial loss.

High-value sectors such as government, finance and healthcare face sustained targeting amid skills shortages. Protection relies on cautious users, stronger governance and secure-by-design systems across smart infrastructure.


AI-generated images raise consent concerns in the UK

UK lawmaker Jess Asato said an AI-altered image depicting her in a bikini circulated online. The incident follows wider reports of sexualised deepfake abuse targeting women on social media.

Platforms hosted thousands of comments, including further manipulated images, heightening distress. Victims describe the content as realistic, dehumanising and violating personal consent.

UK government ministers pledged to ban nudification tools and criminalise non-consensual intimate images. Technology firms face pressure to remove content, suspend accounts and follow Ofcom guidance to maintain a safe online environment.
