WhatsApp to support cross-app messaging

Meta is launching a ‘third-party chats’ feature on WhatsApp in Europe, allowing users to send and receive messages from other interoperable messaging apps.

Initially, only two apps, BirdyChat and Haiket, will support the integration, though users will be able to exchange text messages, voice notes, videos, images and files. The rollout will begin in the coming months for iOS and Android users in the EU.

Meta emphasises that interoperability is opt-in, and messages exchanged via third-party apps will retain end-to-end encryption, provided the other apps match WhatsApp’s security requirements. Users can choose whether to display these cross-app conversations in a separate ‘third-party chats’ folder or mix them into their main inbox.

By opening up its messaging to external apps, WhatsApp is responding to the EU’s Digital Markets Act (DMA), which requires major tech platforms to allow interoperability. This move could reshape how messaging works in Europe, making it easier to communicate across different apps, though it also raises questions about privacy, spam risk and how encryption is enforced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Eurofiber France confirms major data breach

The French telecommunications company Eurofiber has acknowledged a breach of its ATE customer platform and digital ticket system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers, however, argue that the breach's scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings will prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as the justice system moves further into an AI-driven era.

WhatsApp may show Facebook and Instagram usernames for unknown numbers

WhatsApp is reportedly testing a feature that will display Meta-verified usernames (from Facebook or Instagram) when users search for phone numbers they haven’t saved. According to WABetaInfo, this is currently in development for iOS.

When a searched number matches an active WhatsApp account, the app displays the associated username, along with limited profile details, depending on the user’s privacy settings. Importantly, if someone searches by username, their phone number remains hidden to protect privacy.

WhatsApp is also reportedly allowing users to reserve the same username they use on Facebook or Instagram. Verification of ownership happens through Meta’s Accounts Centre, ensuring a unified identity across Meta platforms.

The update is part of a broader push to enhance privacy: WhatsApp has previously announced that it will allow users to replace their phone numbers with usernames, enabling chats without revealing personal numbers.

From a digital-policy perspective, the change raises important issues about identity, discoverability and data integration across Meta’s apps. It may make it easier to identify and connect with unfamiliar contacts, but it also concentrates more of our personal data under Meta’s own digital identity infrastructure.

Deepfakes surge as scammers exploit AI video tools

Experts warn that online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfake videos online climbed from roughly 500,000 in 2023 to eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent recently. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads, and its rules were later tightened after racist deepfakes of Martin Luther King Jr. circulated on the platform.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.

New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter feel ready to defend against AI attacks. Readiness gaps also cover deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

India’s data protection rules finally take effect

India has activated the Digital Personal Data Protection Act 2023 after extended delays. Final regulations notified in November operationalise a long-awaited national privacy framework. The Act, passed in August 2023, now gains a fully operational compliance structure.

Implementation of the rules is staggered so organisations can adjust governance, systems and contracts. Some provisions, including the creation of a Data Protection Board, take effect immediately. Obligations on consent notices, breach reporting and children’s data begin after 12 or 18 months.

India introduces regulated consent managers acting as a single interface between users and data fiduciaries. Managers must register with the Board and follow strict operational standards. Parents will use digital locker-based verification when authorising the processing of children’s information online.

Global technology, finance and health providers now face major upgrades to internal privacy programmes. Lawyers expect extensive work mapping data flows, refining consent journeys and tightening security practices.

Google commits 40 billion dollars to expand Texas AI infrastructure

Google will pour 40 billion dollars into Texas by 2027, expanding digital infrastructure. Funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.

Three new data centres are planned in Texas: one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will create a 30 million dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional profiles of teenagers aged 13 to 15, also found differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to actively monitor their children's online activity. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.

How neurotech is turning science fiction into lived reality

Some experts now say neurotechnology could be as revolutionary as AI, as devices advance rapidly from sci-fi tropes into practical reality. Researchers can already translate thoughts into words through brain implants, and spinal implants are helping people with paralysis regain movement.

King’s College London neuroscientist Anne Vanhoestenberghe told AFP, ‘People do not realise how much we’re already living in science fiction.’

Her lab works on both brain and spinal implants, aiming not just to restore function but to reimagine communication.

At the same time, the technology carries profound ethical risks. There is growing unease about privacy, data ownership and the potential misuse of neural data.

Some even warn that our ‘innermost thoughts are under threat.’ Institutions like UNESCO are already moving to establish global neurotech governance frameworks.
