EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ instead of focusing solely on consumer applications.

According to the competition chief, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing underlying models, the data used to train those models, as well as cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cambridge researchers warn AI toys misread children’s emotions

AI toys for young children may misread emotions and respond inappropriately, according to a study by researchers at the University of Cambridge. Developmental psychologists observed interactions between children aged three to five and conversational AI-powered toys.

Findings showed the toys often struggled with pretend play and emotional cues. In several cases, children attempted to express sadness or initiate imaginative scenarios, while the AI responded with unrelated or overly scripted replies, leaving emotional signals unrecognised.

Researchers warned that such limitations could affect children’s emotional development and imaginative play. Early years practitioners also raised concerns about how toy-collected conversation data may be used and whether children could start treating the devices as trusted companions.

The study calls for stronger regulation and the introduction of safety certification for AI toys aimed at young children. Toy developer Curio stated that improving AI interactions and maintaining parental controls remain priorities as the technology continues to develop.

BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection through memory-based decryption and by blocking analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, providing full control over devices. Capabilities include automated permission granting, background persistence, keylogging, GPS tracking, camera access, and screen-lock credential capture.

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.

AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively refined using adversarial machine learning methods, such as Generative Adversarial Networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.

EU platform law expands data access rights

European regulators are examining how the Digital Markets Act interacts with the General Data Protection Regulation across major digital platforms. The EU rules apply to designated gatekeepers that operate core platform services used by millions of users.

Policy specialists in the EU say the Digital Markets Act complements GDPR protections by strengthening user control over personal data. The framework also supports rights related to data access, portability and transparency for both consumers and business users.

The regulatory overlap affects areas including consent requirements, third-party software installation and interoperability between services. Authorities are also coordinating enforcement between competition and data protection regulators.

Analysts say the combined application of both laws could reshape the responsibilities of major technology platforms. Policymakers aim to increase user choice while reinforcing the GDPR’s safeguards for the integrity and confidentiality of personal data.

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Over 85 companies join global crypto partner program 

Mastercard has introduced the Crypto Partner Program, a global initiative connecting more than 85 crypto-native companies, payments providers, and financial institutions. The program aims to create a forum for collaboration that aligns innovation in digital assets with traditional payment systems.

Enterprise use cases such as cross-border remittances, payouts, and settlements are growing, underscoring the practical potential of on-chain payments. Participants will collaborate with Mastercard to design products that combine the speed and programmability of digital assets with existing card rails and global commerce.

The initiative builds on Mastercard’s long-standing approach to blockchain and digital assets, including Start Path and the Engage platform, which provide opportunities for collaboration, innovation, and growth.

The program focuses on turning technical innovation into scalable, compliant solutions that can operate across markets and everyday commerce.

Partners in the Crypto Partner Program include Binance, Circle, Crypto.com, Solana, Ripple, PayPal, and over 80 other industry leaders, demonstrating the growing ecosystem of companies working together to shape the future of digital payments.

AI-driven adaptive malware highlights new cyber threat landscape

Google’s cybersecurity division, Mandiant, has warned about the growing threat of AI-driven adaptive malware, highlighting how AI is reshaping the cyber threat landscape.

According to a recent report, adaptive malware can modify its behaviour and code in response to the environment it encounters, thereby evading traditional security tools. By analysing the security systems protecting a target, the malware can rewrite parts of its code to bypass detection.

Unlike traditional malware, which typically follows fixed instructions, adaptive malware can adjust its behaviour during an attack. This capability makes it more difficult for conventional cybersecurity tools to detect and block malicious activity.

Mandiant noted that such malware is increasingly associated with advanced persistent threat (APT) groups that conduct long-term, targeted cyber operations. These groups often pursue espionage objectives or financial gain while maintaining prolonged access to compromised systems.

AI is also being used to automate elements of cyberattacks. Machine learning algorithms allow malicious software to anticipate defensive measures and adjust its behaviour in real time. In some cases, attackers are integrating AI into broader automated attack chains. AI-driven malware can gather information, adapt its strategy, and continue operating with minimal human intervention.

Security researchers say autonomous AI agents may be capable of managing multiple stages of an attack, including reconnaissance, exploitation, and persistence, while remaining undetected.

To address these evolving threats, Mandiant recommends that organisations strengthen their cybersecurity strategies by deploying advanced detection and response tools, including AI-based systems that can identify anomalous behaviour. As AI capabilities continue to develop, cybersecurity experts say understanding adaptive malware and automated attack techniques will be essential for organisations seeking to protect their systems and data.

Spain expands digital oversight of online hate

Spain has launched a digital system designed to track hate speech and disinformation across social media platforms. Prime Minister Pedro Sánchez presented the tool in Madrid as part of a wider effort to improve oversight of online platforms.

The platform, known as HODIO, will analyse public posts and measure the spread and reach of hateful content. Authorities say the project will publish regular reports examining how platforms respond to harmful material.

The monitoring initiative is managed by Spain’s Observatory on Racism and Xenophobia. Officials say the data will help citizens understand the scale of online hate and assess how social networks address abusive content.

The initiative forms part of a broader national digital policy agenda that also includes measures to protect minors online. Policymakers have discussed proposals such as restricting social media use by children under 16.

AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, the growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.