New EU rules aim to accelerate GDPR complaint handling

The Council of the European Union has approved new rules aimed at speeding up the handling of cross-border data protection complaints, marking a significant update to the enforcement of the General Data Protection Regulation (GDPR) across the bloc. The new regulation aims to address long-standing bottlenecks in cooperation between national data protection authorities, which often hinder investigations involving companies operating across multiple EU countries.

Among the key changes is the introduction of harmonised criteria for determining whether a complaint is admissible, ensuring that citizens receive the same treatment no matter where they file a GDPR complaint. The rules also strengthen the rights of both complainants and companies under investigation, including clearer procedures for participation in the case and access to preliminary findings.

To reduce administrative burdens, the regulation introduces a simplified cooperation procedure for straightforward cases, allowing authorities to close cases more quickly without relying on the full cooperation framework.

Standard investigations will now be subject to a maximum 15-month deadline, extendable by another 12 months for particularly complex cases. Simple cooperation cases must be concluded within 12 months.

With the Council’s adoption, the legislative process is complete. The regulation will enter into force 20 days after its publication in the EU’s Official Journal and will begin to apply 15 months later. It updates the GDPR’s cross-border enforcement system, under which a single lead authority handles cases but must coordinate with other national regulators when individuals in multiple member states are affected.
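As an illustration, the timeline rules described above (entry into force 20 days after publication, application 15 months later, plus the investigation deadlines) can be computed with simple date arithmetic. This is only a sketch; the publication date used below is hypothetical, since the regulation has not yet appeared in the Official Journal.

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Move a date forward by whole calendar months, clamping the day
    to the last day of the target month when necessary."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def enforcement_timeline(publication: date) -> dict:
    """Key dates and deadlines under the new GDPR procedural regulation,
    as reported: entry into force 20 days after Official Journal
    publication, application 15 months after that."""
    entry_into_force = publication + timedelta(days=20)
    return {
        "entry_into_force": entry_into_force,
        "application_starts": add_months(entry_into_force, 15),
        "standard_investigation_months": 15,   # standard case deadline
        "max_extension_months": 12,            # for complex cases
        "simplified_case_months": 12,          # simplified cooperation
    }

# Example with a hypothetical publication date of 1 January 2025:
timeline = enforcement_timeline(date(2025, 1, 1))
# entry into force: 2025-01-21; application starts: 2026-04-21
```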

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp to support cross-app messaging

Meta is launching a ‘third-party chats’ feature on WhatsApp in Europe, allowing users to send and receive messages from other interoperable messaging apps.

Initially, only two apps, BirdyChat and Haiket, will support this integration, but users will be able to send text, voice, video, images and files. The rollout will begin in the coming months for iOS and Android users in the EU.

Meta emphasises that interoperability is opt-in, and messages exchanged via third-party apps will retain end-to-end encryption, provided the other apps match WhatsApp’s security requirements. Users can choose whether to display these cross-app conversations in a separate ‘third-party chats’ folder or mix them into their main inbox.

By opening up its messaging to external apps, WhatsApp is responding to the EU’s Digital Markets Act (DMA), which requires major tech platforms to allow interoperability. This move could reshape how messaging works in Europe, making it easier to communicate across different apps, though it also raises questions about privacy, spam risk and how encryption is enforced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eurofiber France confirms major data breach

The French telecommunications company Eurofiber has acknowledged a breach of its ATE customer platform and digital ticket system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI accelerates enterprise AI growth after Gartner names it an emerging leader

The US tech firm OpenAI gained fresh momentum after being named an Emerging Leader in Generative AI by Gartner. The assessment highlights strong industry confidence in OpenAI’s ability to support companies that want reliable and scalable AI systems.

Enterprise clients have increasingly adopted the company’s tools after significant investment in privacy controls, data governance frameworks and evaluation methods that help organisations deploy AI safely.

More than one million companies now use OpenAI’s technology, driven by workers who request ChatGPT as part of their daily tasks.

Over eight hundred million weekly users arrive already familiar with the tool, shortening pilot phases and improving returns by avoiding lengthy onboarding. ChatGPT Enterprise has experienced sharp expansion, recording ninefold growth in seats over the past year.

OpenAI views generative AI as a new layer of enterprise infrastructure rather than a peripheral experiment. The next generation of systems is expected to be more collaborative and closely integrated with corporate operations, supporting new ways of working across multiple sectors.

The company aims to help organisations convert AI strategies into measurable results, rather than abstract ambitions.

Executives described the recognition as encouraging, although they stressed that broader progress still lies ahead. OpenAI plans to continue strengthening its enterprise platform, enabling businesses to integrate AI responsibly and at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU aviation regulator opens debate on AI oversight and safety

EASA has issued its first regulatory proposal on AI in aviation, opening a three-month consultation for industry feedback. The draft focuses on trustworthy, data-driven AI systems and anticipates applications ranging from basic assistance to human–AI teaming.

The move comes amid wider criticism of EU AI rules from major tech firms and political leaders. Aviation stakeholders are now assessing whether compliance costs and operational demands could slow development or disrupt competitive positioning across the sector.

Experts warn that adapting to the framework may require significant investment, particularly for companies with limited resources. Others may accelerate AI adoption to preserve market advantage, especially where safety gains or efficiency improvements justify rapid deployment.

EASA stresses that consultation is essential to balance strict assurance requirements with the flexibility needed for innovation. Privacy and personal data issues remain contentious, shaping expectations for acceptable AI use in safety-critical environments.

Meanwhile, Airbus is pushing to reach 75 A320-family deliveries per month by 2027, driven by the A321neo’s strong order book. In parallel, Mitsui OSK Lines continues to lead the global LNG carrier market, reflecting broader momentum across adjacent transport sectors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp may show Facebook and Instagram usernames for unknown numbers

WhatsApp is reportedly testing a feature that will display Meta-verified usernames (from Facebook or Instagram) when users search for phone numbers they haven’t saved. According to WABetaInfo, this is currently in development for iOS.

When a searched number matches an active WhatsApp account, the app displays the associated username, along with limited profile details, depending on the user’s privacy settings. Importantly, if someone searches by username, their phone number remains hidden to protect privacy.

WhatsApp is also reportedly allowing users to reserve the same username they use on Facebook or Instagram. Verification of ownership happens through Meta’s Accounts Centre, ensuring a unified identity across Meta platforms.

The update is part of a broader push to enhance privacy: WhatsApp has previously announced that it will allow users to replace their phone numbers with usernames, enabling chats without revealing personal numbers.

From a digital-policy perspective, the change raises important issues about identity, discoverability and data integration across Meta’s apps. It may make it easier to identify and connect with unfamiliar contacts, but it also concentrates more of our personal data under Meta’s own digital identity infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SAP unveils new models and tools shaping enterprise AI

The German multinational software company SAP used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains instead of incremental upgrades.

SAP places AI-native architecture at the centre of its strategy. SAP HANA Cloud now supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a means to support Europe’s digital sovereignty, which is strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes surge as scammers exploit AI video tools

Experts warn online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfake videos online climbed from roughly 500,000 in 2023 to eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent recently. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads and later tightened its rules after racist deepfakes of Martin Luther King Jr circulated.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US states weigh VPN restrictions to protect minors online

US legislators in Wisconsin and Michigan are weighing proposals that would restrict the use of VPNs to access sites deemed harmful to minors. The bills build on age-verification rules for websites hosting sexual content, which lawmakers say are too easy to bypass when users connect via VPNs.

In Wisconsin, a bill that has already passed the State Assembly would require adult sites to both verify age and block visitors using VPNs, potentially making the state the first in the US to outlaw VPN use for accessing such content if the Senate approves it.

In Michigan, similar legislation would go further by obliging internet providers to monitor and block VPN connections, though that proposal has yet to advance.

The Electronic Frontier Foundation, a digital rights group, argues that the approach would erode privacy for everyone, not just minors.

It warns that blanket restrictions would affect businesses, students, journalists and abuse survivors who rely on VPNs for security, calling the measures ‘surveillance dressed up as safety’ and urging lawmakers instead to improve education, parental tools and support for safer online environments.

The debate comes as several European countries, including France, Italy and the UK, have introduced age-verification rules for pornography sites, but none have proposed banning VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!