AI search tools put to the test in UK study

AI tools are shaping online searches, but testing reveals notable risks in relying on them. ChatGPT, Google Gemini, Microsoft Copilot, Meta AI, and Perplexity were tested on 40 questions in finance, law, health, and consumer rights.

Results show errors, incomplete advice, and ethical oversights remain widespread despite AI’s popularity.

More than half of UK adults now use AI for online searches, with frequent users showing higher trust in the responses. Around one in ten regularly seeks legal advice from AI, while others use it for financial or medical guidance.

Experts warn that overconfidence in AI recommendations could lead to costly mistakes, particularly when rules differ across regions in the UK.

Perplexity outperformed the other tools in accuracy and reliability, while ChatGPT ranked near the bottom. Google’s AI Overviews (AIO) often delivered better results for legal and health queries, while its Gemini chatbot scored higher on finance and consumer questions.

Users are encouraged to verify sources, as many AI outputs cite vague or outdated references and occasionally promote questionable services.

Despite flaws, AI remains a valuable tool for basic research, summarising information quickly and highlighting key points. Experts advise using multiple AI tools and consulting professionals for complex financial, legal, or medical matters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand strains electrical grids

Microsoft CEO Satya Nadella recently delivered a key insight, stating that the biggest hurdle to deploying new AI solutions is now electrical power, not chip supply. The massive energy requirements for running large language models (LLMs) have created a critical bottleneck for major cloud providers.

Nadella specified that Microsoft currently has a ‘bunch of chips sitting in inventory’ that cannot be plugged in and utilised. The problem is a lack of ‘warm shells’, meaning data centre buildings that are fully equipped with the necessary power and cooling capacity.

The escalating power requirements of AI infrastructure are placing extreme pressure on utility grids and capacity. Projections from the Lawrence Berkeley National Laboratory indicate that US data centres could consume up to 12 percent of the nation’s total electricity by 2028.

The disclosure should serve as a warning to investors, urging them to evaluate the infrastructure challenges alongside AI’s technological promise. This energy limitation could create a temporary drag on the sector, potentially slowing returns on the projected $5 trillion in AI investment.

AI threatens global knowledge diversity

AI systems are increasingly becoming the primary source of global information, yet they rely heavily on datasets dominated by Western languages and institutions.

Such reliance creates significant blind spots that threaten to erase centuries of indigenous wisdom and local traditions not currently found in digital archives.

Dominant language models often overlook oral histories and regional practices, including specific ecological knowledge essential for sustainable living in tropical climates.

Experts warn of a looming ‘knowledge collapse’ where alternative viewpoints fade away simply because they are statistically less prevalent in training data.

Future generations may find themselves disconnected from vital human insights as algorithms reinforce a homogenised worldview through recursive feedback loops.

Preserving diverse epistemologies remains crucial for addressing global challenges, such as the climate crisis, rather than relying solely on Silicon Valley’s version of intelligence.

Old laws now target modern tracking technology

Class-action privacy litigation continues to grow in frequency, repurposing older laws to address modern data tracking technologies. Recent high-profile lawsuits have applied the California Invasion of Privacy Act and the Video Privacy Protection Act.

A unanimous jury verdict, now under appeal, recently found Meta Platforms violated CIPA Section 632 by eavesdropping on users’ confidential communications without consent. The jury found that Meta intentionally used its SDK within a sexual health app, Flo, to intercept sensitive real-time user inputs.

That judgement suggests an ‘electronic device’ under the statute need not be a dedicated eavesdropping device, with a user’s own phone qualifying as the requisite device. The legal success in these cases highlights a significant, rising risk for all companies utilising tracking pixels and software development kits (SDKs).

Separately, the VPPA has found new power against tracking pixels in the case of Jancik v. WebMD concerning video-viewing data. The court held that a consumer need not pay for a video service but can subscribe by simply exchanging their email address for a newsletter.

Companies must ensure their privacy policies clearly disclose all such tracking conduct to obtain explicit, valid consent. The courts are taking real-time data interception seriously, noting intentionality may be implied when a firm fails to stem the flow of sensitive personally identifiable information.

ALX and Anthropic partner with Rwanda on AI education

A landmark partnership between ALX, Anthropic, and the Government of Rwanda has launched a major AI learning initiative across Africa.

The programme introduces ‘Chidi’, an AI-powered learning companion built on Anthropic’s Claude model. Instead of providing direct answers, the system is designed to guide learners through critical thinking and problem-solving, positioning African talent at the centre of global tech innovation.

The initiative, described as one of the largest AI-enhanced education deployments on the continent, will see Chidi integrated into Rwanda’s public education system. A pilot phase will involve up to 2,000 educators and select civil servants.

According to the partners, the collaboration aims to ensure Africa’s youth become creators of AI technology instead of remaining merely consumers of it.

The three-way collaboration unites ALX’s training infrastructure, Anthropic’s AI technology, and Rwanda’s progressive digital policy. The working group, the partners noted, will document insights to inform Rwanda’s national AI policy.

The initiative sets a new standard for inclusive, AI-powered learning, with Rwanda serving as a launch hub for future deployments across the continent.

Cloudflare buys AI platform Replicate

Cloudflare has agreed to purchase Replicate, a platform simplifying the deployment and running of AI models. The technology aims to cut down on GPU hardware and infrastructure needs typically required for complex AI.

The acquisition will integrate Replicate’s extensive library of over 50,000 AI models into the Cloudflare platform. Developers can then access and deploy any AI model globally using just a single line of code for rapid implementation.

Matthew Prince, Cloudflare’s chief executive, stated the acquisition will make his company the ‘most seamless, all-in-one shop for AI development’. The move abstracts away infrastructure complexities so developers can focus only on delivering amazing products.

Replicate had previously raised $40m in venture funding from prominent investors in the US. Integrating Replicate’s community and models with Cloudflare’s global network will create a single platform for building the next generation of AI applications.

WhatsApp to support cross-app messaging

Meta is launching a ‘third-party chats’ feature on WhatsApp in Europe, allowing users to send and receive messages from other interoperable messaging apps.

Initially, only two apps, BirdyChat and Haiket, will support this integration, but users will be able to send text, voice, video, images and files. The rollout will begin in the coming months for iOS and Android users in the EU.

Meta emphasises that interoperability is opt-in, and messages exchanged via third-party apps will retain end-to-end encryption, provided the other apps match WhatsApp’s security requirements. Users can choose whether to display these cross-app conversations in a separate ‘third-party chats’ folder or mix them into their main inbox.

By opening up its messaging to external apps, WhatsApp is responding to the EU’s Digital Markets Act (DMA), which requires major tech platforms to allow interoperability. This move could reshape how messaging works in Europe, making it easier to communicate across different apps, though it also raises questions about privacy, spam risk and how encryption is enforced.

Eurofiber France confirms major data breach

The French telecommunications company Eurofiber has acknowledged a breach of its ATE customer platform and digital ticket system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

However, security researchers argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.

WhatsApp may show Facebook and Instagram usernames for unknown numbers

WhatsApp is reportedly testing a feature that will display Meta-verified usernames (from Facebook or Instagram) when users search for phone numbers they haven’t saved. According to WABetaInfo, this is currently in development for iOS.

When a searched number matches an active WhatsApp account, the app displays the associated username, along with limited profile details, depending on the user’s privacy settings. Importantly, if someone searches by username, their phone number remains hidden to protect privacy.

WhatsApp is also reportedly allowing users to reserve the same username they use on Facebook or Instagram. Verification of ownership happens through Meta’s Accounts Centre, ensuring a unified identity across Meta platforms.

The update is part of a broader push to enhance privacy: WhatsApp has previously announced that it will allow users to replace their phone numbers with usernames, enabling chats without revealing personal numbers.

From a digital-policy perspective, the change raises important issues about identity, discoverability and data integration across Meta’s apps. It may make it easier to identify and connect with unfamiliar contacts, but it also concentrates more of our personal data under Meta’s own digital identity infrastructure.
