The European Union imposed sanctions on two China-based companies and one Iranian company in connection with cyber operations targeting the EU member states. The Council’s official press release does not specify the underlying operations. The designated entities are Integrity Technology Group and Anxun Information Technology, both based in China, and Emennet Pasargad, based in Iran.
According to an EU statement, Integrity Technology is assessed to have facilitated the compromise of over 65,000 devices across six member states. Anxun is assessed to have provided offensive cyber capabilities targeting critical infrastructure, and two of the company’s co-founders have been individually designated for their roles in these operations.
Emennet is assessed to have a compromised digital advertising infrastructure to disseminate disinformation during the 2024 Paris Olympics.
The sanctions entail an asset freeze and a travel ban for the listed individuals. The EU citizens and entities are additionally prohibited from making funds available to the designated companies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.
Experts in China warned that prompt injection attacks could manipulate OpenClaw when it accesses online content. Malicious instructions hidden in websites may cause the AI agent to reveal confidential information.
Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord. Investigators in China say attackers could trick the system into sending sensitive data to malicious websites.
Security specialists in China advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access and installing trusted software components only.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea’s three major telecommunications operators plan to deploy advanced network technologies during the BTS comeback concert scheduled for 21 March at Gwanghwamun Square in central Seoul. The initiative aims to bolster network management, prevent congestion, and ensure stable connectivity as large crowds gather in a confined space.
SK Telecom said it will introduce its proprietary AI-powered network management system, A-One, at the event. The technology can recommend optimal equipment placement, predict traffic demand, and monitor real-time network performance to maintain service stability.
To manage heavy data usage during the concert, the company will operate multiple network systems across the venue’s different zones. The setup is designed to allow attendees inside the square to upload photos and videos quickly while enabling viewers outside the venue to stream the concert without interruptions. Additional equipment will also be installed in areas expected to attract international visitors.
KT will deploy its AI-based autonomous traffic management system, W-SDN, which monitors network usage in real time and automatically adjusts traffic flows if congestion is detected. The company will activate an emergency network control mode during the event and deploy about 80 engineers and portable base stations on site.
LG Uplus will also apply its autonomous network management technology, which predicts traffic changes and distributes network loads across nearby base stations. The South Korea-based operator said the system will help ensure uninterrupted connectivity for concertgoers throughout the event.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.
Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.
The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.
Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers are increasingly combining geospatial data with predictive modelling to anticipate health risks.
In that context, Google has introduced new capabilities within Google Earth AI designed to help public health experts forecast outbreaks and identify vulnerable communities.
The system integrates environmental information such as weather patterns, flooding and air quality with population mobility data and health records.
These insights allow researchers to analyse how environmental conditions influence the spread of diseases, including Dengue Fever and Cholera.
Several research initiatives are already testing the models. In collaboration with the World Health Organisation Regional Office for Africa, forecasting tools combining Google’s time-series models with geospatial data improved cholera prediction accuracy by more than 35 percent.
Academic researchers are also applying the technology to other diseases. Scientists at the University of Oxford have used Earth AI datasets to improve six-month dengue forecasts in Brazil, helping local authorities prepare preventative responses.
The technology is also being tested for chronic disease analysis. In Australia, partnerships with health organisations are exploring how geospatial models can identify regional health needs and support preventative care strategies.
Combining environmental intelligence with health data could enable public health systems to shift from reactive crisis management to earlier detection and prevention of disease outbreaks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.
The EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription without meaningful identity checks, potentially misleading users about the authenticity of accounts.
The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.
The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
In the EU, France is calling for planned European AI ‘gigafactories’ to focus on testing and scaling European technologies rather than primarily increasing demand for hardware from companies such as Nvidia.
The large computing facilities are intended to provide the infrastructure needed to train advanced AI systems. However, officials in France argue that the projects should strengthen Europe’s technological capabilities rather than reinforce reliance on foreign suppliers.
Several EU countries, including Poland, Austria and Lithuania, support using the infrastructure to improve Europe’s digital resilience.
The initiative forms part of the European Commission’s wider plans to expand computing capacity and support the development of a stronger European AI ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Major technology and consumer-facing companies, including Google, Amazon, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ to share threat intelligence and strengthen defences against online fraud.
The voluntary pact brings together 11 signatories: Amazon, Adobe, Google, Levi Strauss & Co., LinkedIn, Match Group, Microsoft, Meta, OpenAI, Pinterest, and Target. It aims to improve coordination among companies and strengthen cooperation with governments, law enforcement, and NGOs.
The accord commits to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Participating companies will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.
Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.
The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Council has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.
Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.
The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.
Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Despite speculation that the feature was expanding internationally, OpenAI has clarified that advertisements in ChatGPT are currently available only to users in the US.
Questions about a broader rollout emerged after references to advertisements appeared in the platform’s updated privacy policy. Some users interpreted the language as evidence that advertising would soon be introduced globally.
OpenAI said the policy update does not signal an immediate expansion. According to the company, advertising features are still being tested within the US as part of a gradual deployment strategy.
ChatGPT advertisements were introduced in February 2026 and appear below responses generated by the chatbot. The ads are shown only to logged-in users on free subscription tiers and are not displayed to users under eighteen.
Company representatives stated that advertising systems operate independently from the AI model that generates responses. According to OpenAI, advertisers cannot influence or modify the content produced by ChatGPT.
The company also said it does not share user conversations or personal chat histories with advertisers. However, advertisements may still be personalised based on user queries, which has prompted discussions about how conversational interfaces could shape consumer decisions.
OpenAI indicated that it is adopting a cautious, phased approach before considering any wider rollout of ChatGPT advertising features in other markets.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!