Legal action has been filed against xAI in a US federal court, with plaintiffs alleging that its AI system Grok was used to generate harmful and explicitly manipulated images of minors.
The lawsuit claims that xAI failed to implement adequate safeguards to prevent the creation of such content, despite similar protections adopted by other AI developers.
According to the filing, the technology enabled the transformation of real images into explicit material without sufficient restrictions.
Plaintiffs seek to establish a class action, arguing that the company should be held accountable for both direct and third-party uses of its models. Legal arguments focus on whether responsibility extends to external applications built using the same underlying AI systems.
The case also highlights broader regulatory challenges surrounding AI-generated content, particularly the difficulty of preventing misuse when systems can modify real images. Questions around platform liability, safety standards, and enforcement are likely to shape future policy discussions.
Growing scrutiny of AI developers reflects increasing concern over how generative systems are deployed, especially in contexts involving sensitive or harmful content.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia is advancing plans to regulate digital asset platforms under its financial services framework. The Senate committee recommended passing the Digital Assets Framework Bill 2025, bringing Australia closer to licensing crypto exchanges and tokenisation platforms.
Industry groups have raised concerns about definitions such as ‘digital token’ and ‘factual control.’ Broad wording could inadvertently cover infrastructure providers, including multi-party wallet systems, potentially classifying them as financial service operators.
Ripple Labs emphasised the need for precise language to avoid unintended regulation.
The committee supported the Treasury’s approach while planning to refine technical details through future regulations. Coinbase welcomed the progress but noted ongoing banking challenges for crypto firms.
The bill now proceeds to the Senate for debate and a final vote, which could reshape digital asset operations in Australia.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Organisations around the world are developing certification labels designed to show that products or creative work were made by humans rather than AI. New badges such as ‘Human made’, ‘AI free’ and ‘Proudly Human’ are appearing across books, films, marketing and websites as industries respond to the rapid spread of AI tools.
At least eight initiatives are now attempting to create a label that could achieve global recognition similar to the Fair Trade mark. Experts warn that competing definitions and inconsistent certification systems could confuse consumers unless a universal standard is agreed upon.
Some schemes allow creators to download AI-free badges with little or no verification, while others use paid auditing processes that rely on analysts and AI detection tools. Researchers note that defining ‘human-made’ is increasingly difficult because AI technologies are embedded in many everyday software tools.
Creative industries are at the centre of the debate as generative AI rapidly produces books, films and music at lower cost and higher speed. Advocates of certification argue that verified human-created content may gain greater value if consumers can clearly distinguish it from AI-generated work.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.
Experts in China warned that prompt injection attacks could manipulate OpenClaw when it accesses online content. Malicious instructions hidden in websites may cause the AI agent to reveal confidential information.
Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord. Investigators in China say attackers could trick the system into sending sensitive data to malicious websites.
Security specialists in China advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access and installing trusted software components only.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.
Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.
The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.
Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
France’s highest administrative court has upheld a €40 million GDPR fine against advertising technology company Criteo. Regulators in France concluded that the firm failed to obtain valid consent for tracking users across websites.
The investigation began in 2018 following complaints from privacy groups and examined Criteo’s behavioural advertising model. Authorities in France said the company did not properly respect rights to access, erasure and transparency.
The ruling in France also confirmed that pseudonymous identifiers linked to browsing data can still qualify as personal data. Judges rejected arguments that such identifiers were effectively anonymous.
Privacy advocates say the decision strengthens GDPR enforcement across Europe. Experts in France argue that the case highlights growing scrutiny of online tracking practices used in digital advertising.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new scientific review has raised concerns that AI chatbots could reinforce delusional thinking, particularly among people already vulnerable to psychosis. The review, published in The Lancet Psychiatry, summarises emerging evidence suggesting that chatbot interactions may validate or amplify delusional thinking in certain users.
The study examined reports and research discussing what some have described as ‘AI-associated delusions’. Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analysed media reports and existing evidence exploring how chatbot responses might interact with psychotic symptoms.
Psychotic delusions generally fall into three categories: grandiose, romantic, and paranoid. Researchers say chatbots may unintentionally reinforce such beliefs because they often respond in ways that are supportive or affirming. In some reported cases, users received responses suggesting spiritual significance or implying that a higher entity was communicating through the chatbot.
Researchers emphasise that there is currently no clear evidence that AI systems can independently cause psychosis in individuals without prior vulnerability. However, interactions with chatbots could strengthen existing beliefs or accelerate the progression of delusional thinking in people already at risk.
Experts say the interactive nature of chatbots may intensify the effect. Unlike static sources of information such as videos or articles, chatbots can engage users directly and repeatedly, potentially reinforcing problematic beliefs more quickly.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.
The EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription without meaningful identity checks, potentially misleading users about the authenticity of accounts.
The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.
The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Major technology and consumer-facing companies, including Google, Amazon, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ to share threat intelligence and strengthen defences against online fraud.
The voluntary pact brings together 11 signatories: Amazon, Adobe, Google, Levi Strauss & Co., LinkedIn, Match Group, Microsoft, Meta, OpenAI, Pinterest, and Target. It aims to improve coordination among companies and strengthen cooperation with governments, law enforcement, and NGOs.
The accord commits to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Participating companies will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.
Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.
The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Council has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.
Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.
The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.
Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!