xAI faces lawsuit over alleged misuse of AI image generation

Legal action has been filed against xAI in a US federal court, with plaintiffs alleging that its AI system Grok was used to generate harmful and explicitly manipulated images of minors.

The lawsuit claims that xAI failed to implement adequate safeguards to prevent the creation of such content, despite similar protections adopted by other AI developers.

According to the filing, the technology enabled the transformation of real images into explicit material without sufficient restrictions.

Plaintiffs seek to establish a class action, arguing that the company should be held accountable for both direct and third-party uses of its models. Legal arguments focus on whether responsibility extends to external applications built using the same underlying AI systems.

The case also highlights broader regulatory challenges surrounding AI-generated content, particularly the difficulty of preventing misuse when systems can modify real images. Questions around platform liability, safety standards, and enforcement are likely to shape future policy discussions.

Growing scrutiny of AI developers reflects increasing concern over how generative systems are deployed, especially in contexts involving sensitive or harmful content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft Exchange Online outage affects users globally

A service disruption has hit Microsoft Exchange Online, with Microsoft confirming an ongoing investigation into mailbox access issues affecting enterprise customers worldwide.

Reports indicate that users encountered difficulties connecting via multiple access points, including the Outlook desktop and mobile applications and browser-based email services. The issue affects specific connection methods rather than the entire platform.

Organisations relying on cloud-based communication tools experienced interruptions to email workflows, calendar scheduling, and shared mailbox functionality. Such outages can significantly affect operational continuity, particularly for businesses that depend on real-time communication systems.

Updates through Microsoft’s service health channels suggest that engineering teams are working to identify the root cause, though no definitive explanation has yet been provided.

Such incidents highlight broader concerns around resilience in cloud infrastructure, as enterprises increasingly depend on centralised platforms for critical communication services.


Cyber operation led by INTERPOL dismantles 45,000+ malicious IP addresses

An INTERPOL-coordinated operation targeting phishing, malware, and ransomware infrastructure has resulted in the takedown of more than 45,000 malicious IP addresses and servers.

Law enforcement agencies from 72 countries and territories participated in Operation Synergia III, which ran from 18 July 2025 to 31 January 2026. The operation resulted in 94 arrests, with 110 additional individuals under investigation. A total of 212 electronic devices and servers were seized.

During the operation, INTERPOL processed threat data into actionable intelligence, facilitated cross-border coordination, and provided tactical operational support to participating countries. Preliminary investigations informed a series of coordinated national actions, including searches of identified locations and the disruption of malicious cyber infrastructure.

Several investigations remain ongoing. Preliminary case reports illustrate the range of criminal methods. For instance, in Macau, China, law enforcement identified more than 33,000 phishing and fraudulent websites impersonating casinos, banks, government portals, and payment services.

The sites were used to collect payments via fraudulent top-up mechanisms or to harvest users’ personal and financial data.

In Togo, police arrested 10 suspects operating from a residential location. The group’s activities included unauthorised access to social media accounts and social engineering schemes such as romance fraud and sextortion.

After compromising accounts, suspects contacted the account holder’s connections, impersonating the original user to initiate fraudulent relationships or solicit money transfers from secondary victims.

In Bangladesh, police arrested 40 suspects and seized 134 electronic devices linked to a range of schemes, including fraudulent loan and employment offers, identity theft, and credit card fraud.

INTERPOL collaborated with private sector partners Group-IB, Trend Micro, and S2W to monitor illicit cyber activity and identify malicious servers during the operation.


Security warning issued over OpenClaw AI agent

Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.

Experts in China warned that prompt injection attacks could manipulate OpenClaw when it accesses online content. Malicious instructions hidden in websites may cause the AI agent to reveal confidential information.

Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord. Investigators in China say attackers could trick the system into sending sensitive data to malicious websites.

Security specialists in China advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access and installing trusted software components only.


AI network management systems deployed for BTS concert in Seoul

South Korea’s three major telecommunications operators plan to deploy advanced network technologies during the BTS comeback concert scheduled for 21 March at Gwanghwamun Square in central Seoul. The initiative aims to bolster network management, prevent congestion, and ensure stable connectivity as large crowds gather in a confined space.

SK Telecom said it will introduce its proprietary AI-powered network management system, A-One, at the event. The technology can recommend optimal equipment placement, predict traffic demand, and monitor real-time network performance to maintain service stability.

To manage heavy data usage during the concert, the company will operate multiple network systems across the venue’s different zones. The setup is designed to allow attendees inside the square to upload photos and videos quickly while enabling viewers outside the venue to stream the concert without interruptions. Additional equipment will also be installed in areas expected to attract international visitors.

KT will deploy its AI-based autonomous traffic management system, W-SDN, which monitors network usage in real time and automatically adjusts traffic flows if congestion is detected. The company will activate an emergency network control mode during the event and deploy about 80 engineers and portable base stations on site.

LG Uplus will also apply its autonomous network management technology, which predicts traffic changes and distributes network loads across nearby base stations. The South Korea-based operator said the system will help ensure uninterrupted connectivity for concertgoers throughout the event.


Meta removes encrypted messaging from Instagram DMs

Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.

Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.

The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.

Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.


EU reviews X compliance proposal under Digital Services Act

X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.

EU regulators concluded that the platform's system allowed users to obtain verification simply by paying for a subscription, without meaningful identity checks, potentially misleading users about the authenticity of accounts.

The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.

The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.


France pushes EU AI gigafactories to support European technology

France is calling for the EU's planned AI 'gigafactories' to focus on testing and scaling European technologies rather than primarily increasing demand for hardware from companies such as Nvidia.

The large computing facilities are intended to provide the infrastructure needed to train advanced AI systems. However, officials in France argue that the projects should strengthen Europe’s technological capabilities rather than reinforce reliance on foreign suppliers.

Several EU countries, including Poland, Austria and Lithuania, support using the infrastructure to improve Europe’s digital resilience.

The initiative forms part of the European Commission’s wider plans to expand computing capacity and support the development of a stronger European AI ecosystem.


Major tech firms pledge to fight online fraud

Major technology and consumer-facing companies, including Google, Amazon, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ to share threat intelligence and strengthen defences against online fraud.

The voluntary pact brings together 11 signatories: Amazon, Adobe, Google, Levi Strauss & Co., LinkedIn, Match Group, Microsoft, Meta, OpenAI, Pinterest, and Target. It aims to improve coordination among companies and strengthen cooperation with governments, law enforcement, and NGOs.

The accord commits to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Participating companies will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.

Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.

The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.


Europe aims to tighten AI rules and personal data standards

The European Council has proposed amendments to the AI Act that would ban nudification tools and tighten rules for processing sensitive personal data. The move represents a key step in streamlining the bloc's digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure mirrors a ban adopted by the European Parliament, signalling strong support for tighter AI controls amid concerns over misuse.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.
