Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training in evaluating altered clips, even though more than 80 percent of cases involve some form of video evidence.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp may show Facebook and Instagram usernames for unknown numbers

WhatsApp is reportedly testing a feature that will display Meta-verified usernames (from Facebook or Instagram) when users search for phone numbers they haven’t saved. According to WABetaInfo, this is currently in development for iOS.

When a searched number matches an active WhatsApp account, the app displays the associated username, along with limited profile details, depending on the user’s privacy settings. Importantly, if someone searches by username, their phone number remains hidden to protect privacy.
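
To illustrate the reported lookup behaviour, here is a minimal, hypothetical sketch in Python; the Profile fields and function names are invented for illustration and do not reflect WhatsApp's actual implementation.

```python
# Hypothetical sketch of privacy-gated lookup, based on the reported behaviour:
# searching a number may reveal a username, but searching a username must never
# reveal the number. All names and fields here are illustrative.
from dataclasses import dataclass

@dataclass
class Profile:
    phone: str
    username: str
    photo_visible_to_everyone: bool

def lookup_by_phone(profile: Profile) -> dict:
    # A number search surfaces the Meta-verified username plus whatever
    # details the owner's privacy settings allow.
    result = {"username": profile.username}
    if profile.photo_visible_to_everyone:
        result["photo"] = "visible"
    return result

def lookup_by_username(profile: Profile) -> dict:
    # A username search returns no phone number, by design.
    return {"username": profile.username}

p = Profile("+10000000000", "example_user", photo_visible_to_everyone=False)
print(lookup_by_phone(p))     # {'username': 'example_user'}
print(lookup_by_username(p))  # still no phone field
```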

WhatsApp is also reportedly allowing users to reserve the same username they use on Facebook or Instagram. Verification of ownership happens through Meta’s Accounts Centre, ensuring a unified identity across Meta platforms.

The update is part of a broader push to enhance privacy: WhatsApp has previously announced that it will allow users to replace their phone numbers with usernames, enabling chats without revealing personal numbers.

From a digital-policy perspective, the change raises important issues about identity, discoverability and data integration across Meta’s apps. It may make it easier to identify and connect with unfamiliar contacts, but it also concentrates more of our personal data under Meta’s own digital identity infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SAP unveils new models and tools shaping enterprise AI

The German multinational software company SAP used its TechEd event in Berlin to reveal a significant expansion of its Business AI portfolio, signalling a decisive shift toward an AI-native future across its product suite.

The company expects to deliver 400 AI use cases by the end of 2025, building on more than 300 already in place.

It also argues that its early use cases already generate substantial returns, offering meaningful value for firms seeking operational gains rather than incremental upgrades.

The firm places AI-native architecture at the centre of its strategy: SAP HANA Cloud now supports richer model grounding through multi-model engines, long-term agentic memory, and automated knowledge graph creation.

SAP aims to integrate these tools with SAP Business Data Cloud and Snowflake through zero-copy data sharing next year.

The introduction of SAP-RPT-1, a new relational foundation model designed for structured enterprise data rather than general language tasks, is presented as a significant step toward improving prediction accuracy across finance, supply chains, and customer analytics.
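
For readers unfamiliar with the task class, here is a generic sketch of prediction over structured tabular records using scikit-learn on synthetic data; it has no relation to SAP-RPT-1's architecture and only shows the kind of problem such a model targets.

```python
# Generic illustration of prediction over structured (tabular) business data.
# A plain gradient-boosting baseline, NOT SAP-RPT-1; it merely shows the task
# of predicting a numeric outcome from relational-style records.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: order value, days since last purchase, region code
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)  # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```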

SAP also seeks to empower developers through a mix of low-code and pro-code tools, allowing companies to design and orchestrate their own Joule Agents.

Agent governance is strengthened through the LeanIX agent hub. At the same time, new interoperability efforts based on the agent-to-agent protocol are expected to enable SAP systems to work more smoothly with models and agents from major partners, including AWS, Google, Microsoft, and ServiceNow.

Improvements in ABAP development, including the introduction of SAP-ABAP-1 and a new Visual Studio Code extension, aim to support developers who prefer modern, AI-enabled workflows over older, siloed environments.

Physical AI also takes a prominent role. SAP demonstrated how Joule Agents already operate inside autonomous robots for tasks linked to logistics, field services, and asset performance.

Plans extend from embodied AI to quantum-ready business algorithms designed to enhance complex decision-making without forcing companies to re-platform.

SAP frames the overall strategy as a way to support Europe's digital sovereignty, strengthened through expanded infrastructure in Germany and cooperation with Deutsche Telekom under the Industrial AI Cloud project.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes surge as scammers exploit AI video tools

Experts warn online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfake videos online climbed from roughly 500,000 in 2023 to eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent in recent years. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads and later tightened its rules after racist deepfakes of Martin Luther King Jr. circulated.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.
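
As a toy illustration of one such consistency check, the sketch below (assuming OpenCV and NumPy are installed; 'clip.mp4' is a placeholder path) flags abrupt frame-to-frame brightness jumps. Real deepfake detectors rely on trained models, not a single heuristic.

```python
# Toy heuristic: flag abrupt frame-to-frame lighting jumps, one of the
# artefacts mentioned above. Illustrative only, not a real detector.
import cv2
import numpy as np

def lighting_jumps(path: str, threshold: float = 15.0) -> list[int]:
    cap = cv2.VideoCapture(path)
    prev_mean, jumps, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mean = float(np.mean(gray))            # average brightness of the frame
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            jumps.append(idx)                  # suspiciously abrupt lighting change
        prev_mean, idx = mean, idx + 1
    cap.release()
    return jumps

print(lighting_jumps("clip.mp4"))  # frame indices with abrupt brightness shifts
```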

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US states weigh VPN restrictions to protect minors online

US legislators in Wisconsin and Michigan are weighing proposals that would restrict the use of VPNs to access sites deemed harmful to minors. The bills build on age-verification rules for websites hosting sexual content, which lawmakers say are too easy to bypass when users connect via VPNs.

In Wisconsin, a bill that has already passed the State Assembly would require adult sites to both verify age and block visitors using VPNs, potentially making the state the first in the US to outlaw VPN use for accessing such content if the Senate approves it.
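
To show why critics consider such blocking leaky, here is a minimal sketch of how a site might screen visitors against a list of known VPN exit ranges; the ranges below are documentation placeholders, and any VPN whose addresses are not on the list passes unnoticed.

```python
# Minimal sketch of IP-based VPN screening, assuming a site has some published
# list of VPN exit ranges (the ranges below are illustrative placeholders).
import ipaddress

VPN_RANGES = [ipaddress.ip_network(n) for n in ("198.51.100.0/24", "203.0.113.0/24")]

def looks_like_vpn(client_ip: str) -> bool:
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in VPN_RANGES)

print(looks_like_vpn("198.51.100.7"))  # True: inside a listed range
print(looks_like_vpn("192.0.2.10"))    # False: unlisted, so it slips through
```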

In Michigan, similar legislation would go further by obliging internet providers to monitor and block VPN connections, though that proposal has yet to advance.

The Electronic Frontier Foundation, a digital rights group, argues that the approach would erode privacy for everyone, not just minors.

It warns that blanket restrictions would affect businesses, students, journalists and abuse survivors who rely on VPNs for security, calling the measures ‘surveillance dressed up as safety’ and urging lawmakers instead to improve education, parental tools and support for safer online environments.

The debate comes as several European countries, including France, Italy and the UK, have introduced age-verification rules for pornography sites, but none have proposed banning VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter feel ready to defend against AI attacks. Readiness gaps also cover deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mass CCTV hack in India exposes maternity ward videos sold on Telegram

Police in Gujarat have uncovered an extensive cybercrime network selling hacked CCTV footage from hospitals, schools, offices and private homes across India.

The case surfaced after local media spotted YouTube videos showing women in a maternity ward during exams and injections, with links directing viewers to Telegram channels selling longer clips. The hospital involved said cameras were installed to protect staff from false allegations.

Investigators say hackers accessed footage from an estimated 50,000 CCTV systems nationwide and sold videos for 800–2,000 rupees, with some Telegram channels offering live feeds by subscription.

Arrests since February span Maharashtra, Uttar Pradesh, Gujarat, Delhi and Uttarakhand. Police have charged suspects under laws covering privacy violations, publication of obscene material, voyeurism and cyberterrorism.

Experts say weak security practices make Indian CCTV systems easy targets. Many devices run on default passwords such as Admin123. Hackers used brute-force tools to break into networks, according to investigators. Cybercrime specialists advise users to change their IP addresses and passwords, run periodic audits, and secure both home and office networks.
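
As a purely defensive illustration of the default-password problem, the sketch below checks whether a camera still accepts a well-known default login over HTTP basic auth; the URL and credential list are hypothetical, and it should only ever be run against devices you are authorised to audit.

```python
# Defensive sketch: test whether a camera YOU OWN accepts a known default
# credential over HTTP basic auth. Illustrative only; audit your own devices.
import requests

DEFAULTS = [("admin", "Admin123"), ("admin", "admin")]

def accepts_default_login(base_url: str) -> bool:
    for user, password in DEFAULTS:
        try:
            r = requests.get(base_url, auth=(user, password), timeout=5)
        except requests.RequestException:
            return False
        if r.status_code == 200:  # the default credentials were accepted
            return True
    return False

print(accepts_default_login("http://192.168.1.50"))  # camera on your own LAN
```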

The case highlights the widespread use of CCTV in India, as cameras are prevalent in both public and private spaces, often installed without consent or proper safeguards. Rights advocates say women face added stigma when sensitive footage is leaked, which discourages complaints.

Police said no patients or hospitals filed a report in this case due to fear of exposure, so an officer submitted the complaint.

Last year, the government urged states to avoid suppliers linked to past data breaches and introduced new rules to raise security standards, but breaches remain common.

Investigators say this latest case demonstrates how easily insecure systems can be exploited and how sensitive footage can spread online, resulting in severe consequences for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s data protection rules finally take effect

India has activated the Digital Personal Data Protection Act 2023 after extended delays. Final rules notified in November operationalise the long-awaited national privacy framework, giving the Act, passed in August 2023, a fully functional compliance structure.

Implementation of the rules is staggered so organisations can adjust governance, systems and contracts. Some provisions, including the creation of a Data Protection Board, take effect immediately. Obligations on consent notices, breach reporting and children’s data begin after 12 or 18 months.

India introduces regulated consent managers acting as a single interface between users and data fiduciaries. Managers must register with the Board and follow strict operational standards. Parents will use digital locker-based verification when authorising the processing of children’s information online.
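
To make the consent-manager idea concrete, here is a hypothetical sketch of the kind of consent artefact such an intermediary might maintain between a user and a data fiduciary; the field names are illustrative and are not drawn from the DPDP rules themselves.

```python
# Hypothetical consent record a registered consent manager might keep.
# Field names are illustrative, not taken from the DPDP rules.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    fiduciary: str                   # the entity processing the data
    purposes: list[str]              # what the user consented to
    granted_at: datetime
    withdrawn_at: datetime | None = None

    def withdraw(self) -> None:
        # Under the Act, withdrawing consent must be as easy as granting it
        self.withdrawn_at = datetime.now(timezone.utc)

rec = ConsentRecord("u-1", "ExampleBank", ["kyc", "marketing"],
                    granted_at=datetime.now(timezone.utc))
rec.withdraw()
print(rec.withdrawn_at is not None)  # True: consent no longer active
```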

Global technology, finance and health providers now face major upgrades to internal privacy programmes. Lawyers expect major work mapping data flows, refining consent journeys and tightening security practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM advances quantum computing with two new processors

IBM has introduced two new quantum processors, named ‘Nighthawk’ and ‘Loon’, aimed at major leaps in quantum computing. The Nighthawk chip features 120 qubits and 218 tunable couplers, enabling circuits with approximately 30% greater complexity than previous models.

The Loon processor is designed as a testbed for fault-tolerant quantum computing, implementing key hardware components, including six-way qubit connectivity and long-range couplers. These advances mark a strategic shift by IBM to scale quantum systems beyond experimental prototypes.
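
For readers who want a feel for what a quantum circuit looks like in code, here is a minimal sketch using Qiskit, IBM's open-source quantum SDK; it builds a two-qubit entangled state on a classical simulator and is not specific to the Nighthawk or Loon hardware.

```python
# Minimal Qiskit sketch: construct and simulate a small entangling circuit.
# Illustrates circuit construction generally, not the new chips themselves.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)       # put qubit 0 into superposition
qc.cx(0, 1)   # two-qubit entangling gate (realised in hardware via couplers)

# Inspect the resulting Bell state amplitudes on a classical simulator
state = Statevector.from_instruction(qc)
print(state)  # ~0.707|00> + 0.707|11>
```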

IBM has also upgraded its fabrication process by shifting to 300 mm wafers at its Albany NanoTech facility, which has doubled development speed and boosted physical chip complexity tenfold.

Looking ahead, IBM projects the initial delivery of Nighthawk by the end of 2025 and aims to achieve verified quantum advantage by the end of 2026, with fully fault-tolerant quantum systems targeted by 2029.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google commits 40 billion dollars to expand Texas AI infrastructure

Google will pour 40 billion dollars into Texas by 2027, expanding its digital infrastructure. The funding focuses on new cloud and AI facilities alongside existing campuses in Midlothian and Dallas.

Three new data centres are planned in Texas: one in Armstrong County and two in Haskell County. One Haskell site will sit beside a solar plant and battery storage facility. The investment is accompanied by agreements for more than 6,200 megawatts of additional power generation.

Google will create a 30 million dollar Energy Impact Fund supporting Texan energy efficiency and affordability projects. The company backs training for existing electricians and over 1,700 apprentices through electrical training programmes.

The spending strengthens Texas as a major hub for data centres and AI development. Google says the expanded infrastructure and workforce will help maintain US leadership in advanced computing technologies. The company highlights its 15-year presence in Texas and pledges ongoing community support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!