Cyber operation led by INTERPOL dismantles 45,000+ malicious IP addresses

An INTERPOL-coordinated operation targeting phishing, malware, and ransomware infrastructure has resulted in the takedown of more than 45,000 malicious IP addresses and servers.

Law enforcement agencies from 72 countries and territories participated in Operation Synergia III, which ran from 18 July 2025 to 31 January 2026. The operation resulted in 94 arrests, with 110 additional individuals under investigation. A total of 212 electronic devices and servers were seized.

During the operation, INTERPOL processed threat data into actionable intelligence, facilitated cross-border coordination, and provided tactical operational support to participating countries. Preliminary investigations informed a series of coordinated national actions, including searches of identified locations and the disruption of malicious cyber infrastructure.

Several investigations remain ongoing. Preliminary case reports illustrate the range of criminal methods. For instance, in Macau, China, law enforcement identified more than 33,000 phishing and fraudulent websites impersonating casinos, banks, government portals, and payment services.

The sites were used to collect payments via fraudulent top-up mechanisms or to harvest users’ personal and financial data.

In Togo, police arrested 10 suspects operating from a residential location. The group’s activities included unauthorised access to social media accounts and social engineering schemes such as romance fraud and sextortion.

After compromising accounts, suspects contacted the account holder’s connections, impersonating the original user to initiate fraudulent relationships or solicit money transfers from secondary victims.

In Bangladesh, police arrested 40 suspects and seized 134 electronic devices linked to a range of schemes, including fraudulent loan and employment offers, identity theft, and credit card fraud.

INTERPOL collaborated with private sector partners Group-IB, Trend Micro, and S2W to monitor illicit cyber activity and identify malicious servers during the operation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Security warning issued over OpenClaw AI agent

Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.

Experts in China warned that prompt injection attacks could manipulate OpenClaw when it accesses online content. Malicious instructions hidden in websites may cause the AI agent to reveal confidential information.

Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord. Investigators in China say attackers could trick the system into sending sensitive data to malicious websites.

Security specialists in China advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access and installing only trusted software components.

AI network management systems deployed for BTS concert in Seoul

South Korea’s three major telecommunications operators plan to deploy advanced network technologies during the BTS comeback concert scheduled for 21 March at Gwanghwamun Square in central Seoul. The initiative aims to bolster network management, prevent congestion, and ensure stable connectivity as large crowds gather in a confined space.

SK Telecom said it will introduce its proprietary AI-powered network management system, A-One, at the event. The technology can recommend optimal equipment placement, predict traffic demand, and monitor real-time network performance to maintain service stability.

To manage heavy data usage during the concert, the company will operate multiple network systems across the venue’s different zones. The setup is designed to allow attendees inside the square to upload photos and videos quickly while enabling viewers outside the venue to stream the concert without interruptions. Additional equipment will also be installed in areas expected to attract international visitors.

KT will deploy its AI-based autonomous traffic management system, W-SDN, which monitors network usage in real time and automatically adjusts traffic flows if congestion is detected. The company will activate an emergency network control mode during the event and deploy about 80 engineers and portable base stations on site.

LG Uplus will also apply its autonomous network management technology, which predicts traffic changes and distributes network loads across nearby base stations. The South Korea-based operator said the system will help ensure uninterrupted connectivity for concertgoers throughout the event.

Meta removes encrypted messaging from Instagram DMs

Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.

Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.

The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.

Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.

EU reviews X compliance proposal under Digital Services Act

X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.

EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription, without meaningful identity checks, potentially misleading users about the authenticity of accounts.

The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.

The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.

France pushes EU AI gigafactories to support European technology

France is calling for the EU’s planned AI ‘gigafactories’ to focus on testing and scaling European technologies rather than primarily increasing demand for hardware from companies such as Nvidia.

The large computing facilities are intended to provide the infrastructure needed to train advanced AI systems. However, officials in France argue that the projects should strengthen Europe’s technological capabilities rather than reinforce reliance on foreign suppliers.

Several EU countries, including Poland, Austria and Lithuania, support using the infrastructure to improve Europe’s digital resilience.

The initiative forms part of the European Commission’s wider plans to expand computing capacity and support the development of a stronger European AI ecosystem.

Major tech firms pledge to fight online fraud

Major technology and consumer-facing companies, including Google, Amazon, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ to share threat intelligence and strengthen defences against online fraud.

The voluntary pact brings together 11 signatories: Amazon, Adobe, Google, Levi Strauss & Co., LinkedIn, Match Group, Microsoft, Meta, OpenAI, Pinterest, and Target. It aims to improve coordination among companies and strengthen cooperation with governments, law enforcement, and NGOs.

The accord commits signatories to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Participating companies will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.

Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.

The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.

Europe aims to tighten AI rules and personal data standards

The Council of the EU has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the bloc’s digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.

OpenAI plans to integrate Sora video generation into ChatGPT

According to reports, OpenAI is preparing to integrate its AI video generator Sora directly into ChatGPT, a move that could expand the platform’s capabilities beyond text and image generation.

Sora currently operates as a standalone application and web service. Integrating the tool into ChatGPT could dramatically increase its visibility and usage, particularly given the chatbot’s massive global user base.

The company released an updated version of the model in 2025 that allows users to create, remix and even appear inside AI-generated videos. Bringing those features into ChatGPT would represent a major step toward making video generation a mainstream function within conversational AI systems.

Competition in the generative video market is intensifying. Google is developing similar technology, with its Gemini platform offering video creation powered by the Veo system. Other developers are also launching text-to-video models as the field rapidly expands.

Despite the potential growth, integrating video generation into ChatGPT may significantly increase operating costs. Running large AI systems requires vast computing resources and energy, and the chatbot already costs billions of dollars annually to operate.

Although OpenAI earns revenue from subscriptions, the majority of ChatGPT users currently use the free version. The company is therefore exploring additional monetisation strategies, including advertising and new premium services.

Integrating Sora into ChatGPT could therefore serve both strategic and financial goals, strengthening the platform’s position in the competitive generative AI market while expanding the types of content users can create.

Young investors warned on crypto and AI advice

Australia’s financial regulator has warned young investors to be cautious with social media influencers and AI chatbots. A survey by the Australian Securities and Investments Commission found one in four Gen Z Australians invest in crypto, often guided by online content.

The survey of 1,127 participants aged 18 to 28 showed 63% use social media for financial information, 18% rely on AI platforms, and 30% consult YouTube. AI was the most trusted source at 64%, but over half still trust influencers and social media despite possible misinformation.

ASIC previously issued warnings to 18 influencers suspected of promoting high-risk products without a licence. Commissioner Alan Kirkland said some social media marketing promotes crypto scams or risky superannuation switches that threaten young people’s key assets.

The regulator is also watching AI financial guidance. Personalised advice from unlicensed sources is illegal, and young investors should carefully check sources, especially as crypto exchanges increasingly use AI bots for trading guidance.
