New quantum threat could weaken cryptocurrency encryption systems

Google has warned that advances in quantum computing could weaken widely used cryptographic systems protecting cryptocurrencies and digital infrastructure. A newly published whitepaper suggests future quantum machines may need fewer resources than previously estimated to break elliptic curve cryptography.

The research focuses on the elliptic curve discrete logarithm problem, which underpins much of today’s blockchain security. Findings suggest quantum algorithms like Shor’s could run with fewer qubits and gates, increasing concerns about cryptographic resilience.

To address the risk, the paper recommends a transition to post-quantum cryptography, which is designed to resist quantum attacks. It also outlines short-term blockchain measures, including avoiding reuse of vulnerable wallet addresses and preparing digital asset migration strategies.

Google also introduced a responsible disclosure approach using zero-knowledge proofs to communicate vulnerabilities without exposing exploitable details.

The company says this balances transparency and security, supporting coordinated efforts across crypto and research communities to prepare for quantum threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare adds LLM layer to client-side security detection pipeline

Cloudflare has announced two changes to its client-side security offering, making Client-Side Security Advanced available to self-serve customers and offering domain-based threat intelligence at no extra cost to all users on the free Client-Side Security bundle. The update is focused on browser-based attacks that can steal data via malicious scripts without visibly disrupting a website’s normal operation.

Cloudflare says its client-side security system assesses 3.5 billion scripts per day and monitors an average of 2,200 scripts per enterprise zone. According to the company, the product relies on browser reporting, including Content Security Policy signals, rather than scanners or application instrumentation, and requires only that traffic be proxied through Cloudflare.

A central part of the announcement is a new detection pipeline combining a Graph Neural Network (GNN) with a Large Language Model (LLM). Cloudflare says the GNN analyses the Abstract Syntax Tree of JavaScript code to identify malicious intent even when scripts are minified or obfuscated. Scripts flagged as suspicious are then passed to an open-source LLM running on Workers AI for a second-stage semantic assessment intended to reduce false positives.
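The two-stage design described above can be sketched as a simple triage function: a cheap, high-recall structural score gates access to a more expensive semantic check. This is an illustrative sketch only; `score_ast` and `llm_confirms_malicious` are hypothetical stand-ins for the GNN and Workers AI stages, not Cloudflare APIs.

```python
def triage(script_source, score_ast, llm_confirms_malicious, threshold=0.5):
    """Two-stage triage: structural score first, semantic confirmation second.

    score_ast: callable returning a suspicion score in [0, 1]
               (standing in for a GNN over the script's AST).
    llm_confirms_malicious: callable returning True/False
               (standing in for the second-stage LLM assessment).
    """
    score = score_ast(script_source)
    if score < threshold:
        return "benign"  # cheap path: the bulk of traffic stops here
    # Only scripts the first stage flags pay for the expensive LLM call,
    # which is used to filter out stage-one false positives.
    return "malicious" if llm_confirms_malicious(script_source) else "benign"
```

Lowering `threshold` raises stage-one recall; the second stage then absorbs the extra false alarms, which mirrors the trade-off Cloudflare describes.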

Cloudflare says the GNN is tuned for high recall to catch novel and zero-day threats, but that false alarms remain a challenge at internet scale. Internal evaluation results cited by the company show that the secondary LLM layer cut false positives in the JS Integrity threat category to roughly a third across total analysed traffic, lowering the rate from about 0.3% to about 0.1%. On unique scripts, Cloudflare says the false-positive rate fell from about 1.39% to about 0.007%.
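The reduction factors implied by those figures can be checked directly (all numbers are taken from the announcement):

```python
# Sanity-check of the reduction factors implied by the cited rates.
total_before, total_after = 0.3, 0.1        # % of total analysed traffic
unique_before, unique_after = 1.39, 0.007   # % of unique scripts

total_factor = total_before / total_after     # matches "nearly three times"
unique_factor = unique_before / unique_after  # far larger on unique scripts

print(round(total_factor, 1), round(unique_factor))  # prints: 3.0 199
```

The per-unique-script improvement is roughly two orders of magnitude larger than the traffic-weighted one, consistent with a small set of frequently served scripts dominating total volume.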

The company also describes a recent case involving a heavily obfuscated malicious script named core.js. According to Cloudflare, the payload targeted Xiaomi OpenWrt-based home routers, altered DNS settings, and attempted to change admin passwords. Cloudflare says the script was injected through compromised browser extensions rather than by directly compromising a website, and adds that its GNN detected the malicious structure while the LLM confirmed the intent.

Cloudflare argues that the two-stage design provides structural detection via the GNN and broader semantic filtering via the LLM, enabling the company to lower the GNN decision threshold without sharply increasing alert volume. Every script flagged by the GNN is also logged to Cloudflare R2 for later auditing, which the company says helps it review cases where the LLM overrode the initial verdict.

Domain-based threat intelligence is now being made available to all Client-Side Security customers, including those not using the Advanced tier. Cloudflare says the move is partly a response to attacks seen in 2025 against smaller online shops, especially on Magento, where client-side compromises continued for days or weeks after public disclosure. By extending domain-based signals more broadly, the company says site owners can more quickly identify malicious JavaScript or suspicious connections and investigate possible compromises.

Technology reshapes pensions engagement

New technology is reshaping how people engage with pensions, according to Financial Conduct Authority chief executive Nikhil Rathi. Speaking in London, he highlighted the growing role of AI and digital tools in helping savers better understand their retirement finances.

Pensions dashboards are expected to give millions a clearer view of their savings, potentially driving greater engagement and behavioural change. Increased visibility may encourage actions such as consolidating pension pots or adjusting contributions.

Officials warn that stronger engagement brings risks as well as opportunities, with many consumers still lacking clear retirement plans. Policymakers aim to balance protection with flexibility, promoting informed decisions while avoiding overly restrictive systems.

Advances in AI are also enabling more personalised financial guidance, making it easier for users to explore retirement scenarios. Experts say the future of pensions will depend on integrating savings, housing and wider financial planning into a more connected system.

Italy fines major bank over data protection failures

The Italian Data Protection Authority has imposed a €31.8 million fine on Intesa Sanpaolo following serious shortcomings in its handling of personal data.

The case stems from unauthorised access by an employee to thousands of customer accounts, raising concerns about internal oversight and data protection safeguards.

Investigations revealed that monitoring systems failed to detect repeated, unjustified access to sensitive financial information over an extended period. The breach also involved high-risk individuals, highlighting the absence of robust, targeted, risk-based controls.

Authorities in Italy identified violations of core data protection principles, including integrity, confidentiality and accountability. Additional concerns arose from delays in notifying both regulators and affected individuals, limiting the ability to respond effectively to the incident.

The case of Intesa Sanpaolo underscores increasing regulatory scrutiny of data governance practices in the financial sector. Strengthening internal controls and ensuring timely breach reporting remain essential for maintaining trust and compliance in data-driven banking environments.

Malta launches SMART Food project with AI and blockchain

Malta is advancing the SMART Food project to strengthen the agri-food sector. The initiative is a Malta-Italy partnership funded under the Interreg programme.

Minister Anton Refalo said the project aims to create a reliable and technologically advanced food system. A digital platform using AI and blockchain will provide real-time information on products from production to consumption.

The project seeks to meet consumer demand for clarity on food origin, safety, and sustainability. It will also support farmers and industry operators in adopting more efficient practices.

Minister Refalo added that the initiative strengthens trust across the food chain and empowers consumers. Malta’s scale allows it to adopt innovative solutions and take a leading role in modernising the sector.

The Malta Food Agency manages the project, including development, management, and training. Chief Executive Brian Vella said it safeguards product quality, improves traceability, and reinforces confidence in local produce.

FTC accuses OkCupid of sharing user data contrary to privacy promises

The US Federal Trade Commission has taken action against OkCupid and Match Group Americas over allegations that the dating app shared users’ personal information, including photos and location data, with an unrelated third party, despite privacy promises that such sharing would not occur without notice or an opportunity to opt out.

According to the FTC’s complaint, OkCupid gave the third party access to personal data from millions of users even though the recipient was not a service provider, business partner, or affiliate within the company’s corporate family. The agency says consumers were not informed and were not given a chance to opt out.

The complaint says the third party sought large OkCupid datasets because OkCupid’s founders were financial investors in that company, even though it had no business relationship with the app. The FTC alleges that OkCupid provided access to nearly 3 million user photos, along with location and other information, without formal or contractual limits on how the data could be used.

Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said: ‘The FTC enforces the privacy promises that companies make. We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.’

The FTC also alleges that, since September 2014, Match and OkCupid have taken extensive steps to conceal and deny that the apps shared users’ personal information with the data recipient, including conduct the agency says obstructed its investigation. One example cited in the complaint is that, after a news report revealed the third party had obtained large OkCupid datasets, the company told the media and users that it was not involved with that third party.

Under the proposed settlement, OkCupid and Match would be permanently prohibited from misrepresenting how they collect, maintain, use, disclose, delete, or protect personal information, including photos, demographic data, and geolocation data. Restrictions would also cover how they describe the purposes of data collection and disclosure, as well as how they present privacy controls and consumer choices under state privacy laws.

The Commission vote authorising staff to file the complaint and stipulated final order was 2-0. The FTC filed both in the US District Court for the Northern District of Texas, Dallas Division. The agency notes that a complaint reflects its view that it has ‘reason to believe’ the law has been or is about to be violated, while stipulated final orders carry the force of law only if approved and signed by the district court judge.

Ofcom proposes tougher rules on scam mobile messages

New proposals from Ofcom aim to reduce scam activity on mobile messaging services across the UK. The measures are designed to strengthen protections for users and businesses affected by large-scale fraud campaigns.

Scammers often combine mobile messages with other channels such as calls, emails, social media and online adverts to trick victims into revealing personal information or making payments.

While telecom operators have introduced safeguards in recent years, regulators say current efforts do not go far enough.

The proposed framework would require mobile operators and messaging aggregators to prevent scammers from accessing messaging systems and to detect and disrupt malicious activity where it occurs.

The goal is to close existing gaps in industry defences and reduce the volume of scam messages reaching users. Ofcom plans to finalise its decision in summer 2026, following completion of its consultation process.

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Ofcom tightens online safety enforcement across major platforms

Enforcement of the Online Safety Act is set to intensify in 2026, with regulators pushing stronger age verification across social media, gaming, messaging, and adult platforms. Significant progress has been reported in the adult sector, with most major pornography services now using age assurance or restricting UK access.

Ofcom has issued new expectations for major children’s platforms, including stricter age verification, stronger protections against grooming, safer feeds, and tighter product testing. The regulator has warned that further enforcement action may follow if compliance is not met.

New obligations are also being introduced, including a requirement from April 2026 for services to report child sexual exploitation and abuse content to the National Crime Agency.

Providers are being instructed to keep risk assessments up to date and adapt to evolving regulatory guidance, including upcoming consultations and expanded reporting duties.

UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists, spanning sectors including car sales, food delivery and funeral services.

The CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.
