Eurofiber France confirms major data breach

The telecommunications company Eurofiber has acknowledged a breach of the ATE customer platform and digital ticketing system used by its French operations, after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing a recent incident in Sweden in which a compromised IT supplier exposed data belonging to more than a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI accelerates enterprise AI growth after Gartner names it an emerging leader

The US tech firm OpenAI gained fresh momentum after being named an Emerging Leader in Generative AI by Gartner. The assessment highlights strong industry confidence in OpenAI’s ability to support companies that want reliable and scalable AI systems.

Enterprise clients have increasingly adopted the company’s tools after significant investment in privacy controls, data governance frameworks and evaluation methods that help organisations deploy AI safely.

More than one million companies now use OpenAI’s technology, driven by workers who request ChatGPT as part of their daily tasks.

Over eight hundred million weekly users arrive already familiar with the tool, which shortens pilot phases and improves returns, rather than slowing transformation with lengthy onboarding. ChatGPT Enterprise has experienced sharp expansion, recording ninefold growth in seats over the past year.

OpenAI views generative AI as a new layer of enterprise infrastructure rather than a peripheral experiment. The next generation of systems is expected to be more collaborative and closely integrated with corporate operations, supporting new ways of working across multiple sectors.

The company aims to help organisations convert AI strategies into measurable results, rather than abstract ambitions.

Executives described the recognition as encouraging, although they stressed that broader progress still lies ahead. OpenAI plans to continue strengthening its enterprise platform, enabling businesses to integrate AI responsibly and at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as the legal system shifts further into an AI-driven era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp may show Facebook and Instagram usernames for unknown numbers

WhatsApp is reportedly testing a feature that will display Meta-verified usernames (from Facebook or Instagram) when users search for phone numbers they haven’t saved. According to WABetaInfo, this is currently in development for iOS.

When a searched number matches an active WhatsApp account, the app displays the associated username, along with limited profile details, depending on the user’s privacy settings. Importantly, if someone searches by username, their phone number remains hidden to protect privacy.

WhatsApp is also reportedly allowing users to reserve the same username they use on Facebook or Instagram. Verification of ownership happens through Meta’s Accounts Centre, ensuring a unified identity across Meta platforms.

The update is part of a broader push to enhance privacy: WhatsApp has previously announced that it will allow users to replace their phone numbers with usernames, enabling chats without revealing personal numbers.

From a digital-policy perspective, the change raises important issues about identity, discoverability and data integration across Meta’s apps. It may make it easier to identify and connect with unfamiliar contacts, but it also concentrates more of our personal data under Meta’s own digital identity infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SAP unveils new models and tools shaping enterprise AI

The German multinational software company SAP used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its software suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains instead of incremental upgrades.

The firm places AI-native architecture at the centre of its strategy: SAP HANA Cloud now supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a means to support Europe’s digital sovereignty, which is strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes surge as scammers exploit AI video tools

Experts warn that online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfake videos climbed from roughly 500,000 in 2023 to eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent recently. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads and later tightened its rules after racist deepfakes of Martin Luther King Jr. circulated on the platform.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New strategy targets Africa’s connectivity gap

Africa’s latest digital summit in Cotonou highlighted a growing concern: network coverage has expanded across West and Central Africa, yet adoption remains stubbornly low. Nearly two-thirds of Africans remain offline, despite most already living in areas with mobile networks.

Senior figures at the World Bank argued that the continent now faces an inclusion challenge rather than an infrastructure gap, as many households weigh daily necessities against the cost of connectivity.

Affordability has become the dominant barrier. Mobile internet often costs more than twice the global affordability threshold, while fixed broadband can consume a striking share of monthly income. Devices remain expensive, and digital literacy is far from widespread.

Women in particular lag behind, and many rural communities lack the skills needed to use essential digital services. Concerns also extend to businesses that struggle to train staff for digital tools and emerging AI solutions.

Policymakers now argue for a shift in strategy. The World Bank intends to prioritise digital public goods such as digital identification, electronic payments and interoperable platforms, believing that valuable services will encourage people to go online.

Governments hope that a stronger ecosystem will make online health, connected agriculture and digital learning more accessible and therefore more valuable.

Benin used the summit to highlight its advances in online administration and training programmes. Regional leaders also called for the creation of an African Single Digital Market that would lower access costs, encourage cross-border investment and harmonise regulations.

Officials insisted that a unified approach could accelerate development and equip African workers with the skills required for the digital jobs expected to expand by the end of the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US states weigh VPN restrictions to protect minors online

US legislators in Wisconsin and Michigan are weighing proposals that would restrict the use of VPNs to access sites deemed harmful to minors. The bills build on age-verification rules for websites hosting sexual content, which lawmakers say are too easy to bypass when users connect via VPNs.

In Wisconsin, a bill that has already passed the State Assembly would require adult sites to both verify age and block visitors using VPNs, potentially making the state the first in the US to outlaw VPN use for accessing such content if the Senate approves it.

In Michigan, similar legislation would go further by obliging internet providers to monitor and block VPN connections, though that proposal has yet to advance.

The digital rights group the Electronic Frontier Foundation argues that the approach would erode privacy for everyone, not just minors.

It warns that blanket restrictions would affect businesses, students, journalists and abuse survivors who rely on VPNs for security, calling the measures ‘surveillance dressed up as safety’ and urging lawmakers instead to improve education, parental tools and support for safer online environments.

The debate comes as several European countries, including France, Italy and the UK, have introduced age-verification rules for pornography sites, but none have proposed banning VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter feel ready to defend against AI attacks. Readiness gaps also cover deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mass CCTV hack in India exposes maternity ward videos sold on Telegram

Police in Gujarat have uncovered an extensive cybercrime network selling hacked CCTV footage from hospitals, schools, offices and private homes across India.

The case surfaced after local media spotted YouTube videos showing women in a maternity ward during exams and injections, with links directing viewers to Telegram channels selling longer clips. The hospital involved said cameras were installed to protect staff from false allegations.

Investigators say hackers accessed footage from an estimated 50,000 CCTV systems nationwide and sold videos for 800–2,000 rupees, with some Telegram channels offering live feeds by subscription.

Arrests since February span Maharashtra, Uttar Pradesh, Gujarat, Delhi and Uttarakhand. Police have charged suspects under laws covering privacy violations, publication of obscene material, voyeurism and cyberterrorism.

Experts say weak security practices make Indian CCTV systems easy targets. Many devices run on default passwords such as Admin123. Hackers used brute-force tools to break into networks, according to investigators. Cybercrime specialists advise users to change their IP addresses and passwords, run periodic audits, and secure both home and office networks.

The case highlights the widespread use of CCTV in India, as cameras are prevalent in both public and private spaces, often installed without consent or proper safeguards. Rights advocates say women face added stigma when sensitive footage is leaked, which discourages complaints.

Police said no patients or hospitals filed a report in this case due to fear of exposure, so an officer submitted the complaint.

Last year, the government urged states to avoid suppliers linked to past data breaches and introduced new rules to raise security standards, but breaches remain common.

Investigators say this latest case demonstrates how easily insecure systems can be exploited and how sensitive footage can spread online, resulting in severe consequences for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!