California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.
Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.
Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.
‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.
The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.
Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.
Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.
Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.
Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.
Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.
Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.
Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.
The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.
UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.
Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.
International pressure has continued to build, with some countries banning Grok entirely instead of waiting for platform-led restrictions.
Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.
At General Wolfe School and other Winnipeg classrooms, students are using AI tools to help with tasks such as translating language and understanding complex terms, with teachers guiding them on how to verify AI-generated information against reliable sources.
Teachers are cautious but optimistic, developing a thinking framework that prioritises critical thinking and human judgement alongside AI use, rather than imposing rigid policies while the technology evolves.
Educators in the Winnipeg School Division are adapting teaching methods to incorporate AI while discouraging over-reliance, stressing that students should use AI as an aid rather than a substitute for learning.
This reflects broader discussions in education about how to balance innovation with foundational skills as AI becomes more commonplace in school environments.
France’s health watchdog has warned that social media harms adolescent mental health, particularly among younger girls. The assessment is based on a five-year scientific review of existing research.
ANSES said online platforms amplify harmful pressures, cyberbullying and unrealistic beauty standards. Experts found that girls, LGBT youths and vulnerable teens face higher psychological risks.
France is debating legislation to ban social media access for children under 15. President Emmanuel Macron supports stronger age restrictions and platform accountability.
The watchdog urged changes to algorithms and default settings to prioritise child well-being. Similar debates have emerged globally following Australia’s move to ban social media for under-16s.
Rising concern surrounds the growing number of people seeking help after becoming victims of AI-generated intimate deepfakes in Guernsey, a British Crown dependency. Support services report a steady increase in cases.
Existing law criminalises sharing intimate images without consent, but creating AI-generated intimate images remains legal. Proposed reforms aim to close this gap and strengthen victim protection.
Police and support charities warn that deepfakes cause severe emotional harm and are challenging to prosecute. Cross-border platforms and anonymous perpetrators complicate enforcement and reporting.
Rising use of AI is transforming cyberattacks in the UAE, enabling deepfakes, automated phishing and rapid data theft. Expanding digital services increase exposure for businesses and residents.
Criminals deploy autonomous AI tools to scan networks, exploit weaknesses and steal information faster than humans. Shorter detection windows raise risks of breaches, disruption and financial loss.
High-value sectors such as government, finance and healthcare face sustained targeting amid skills shortages. Protection relies on cautious users, stronger governance and secure-by-design systems across smart infrastructure.
UK lawmaker Jess Asato said an AI-altered image depicting her in a bikini circulated online. The incident follows wider reports of sexualised deepfake abuse targeting women on social media.
Platforms hosted thousands of comments, including further manipulated images, heightening distress. Victims describe the content as realistic, dehumanising and violating personal consent.
UK government ministers have pledged to ban nudification tools and criminalise non-consensual intimate images. Technology firms face pressure to remove content, suspend accounts and follow Ofcom guidance to maintain a safe online environment.
Keir Starmer has told Labour MPs that he is open to an Australian-style ban on social media for young people, following concerns about the amount of time children spend on screens.
Starmer previously opposed such a ban, arguing that enforcement would prove difficult and might instead push teenagers towards unregulated online spaces rather than safer platforms. Growing political momentum across Westminster, combined with Australia’s decision to act, has led to a reassessment of that position.
Speaking to MPs, Starmer said different enforcement approaches were being examined and added that phone use during school hours should be restricted.
UK ministers have also revisited earlier proposals aimed at reducing the addictive design of social media and strengthening safeguards on devices sold to teenagers.
Support for stricter measures has emerged across party lines, with senior figures from Labour, the Conservatives, the Liberal Democrats and Reform UK signalling openness to a ban.
A final decision is expected within months as ministers weigh child safety, regulation and practical implementation.
A Northern Ireland politician, Cara Hunter of the Social Democratic and Labour Party (SDLP), has quit X after renewed concerns over Grok AI misuse. She cited failures to protect women and children online.
The decision follows criticism of Grok AI features enabling non-consensual sexualised images. UK regulators have launched investigations under online safety laws.
UK ministers plan to criminalise creating intimate deepfakes and supplying related tools. Ofcom is examining whether X breached its legal duties.
Political leaders and rights groups say enforcement must go further. X says it removes illegal content and has restricted Grok’s image functions on the platform.