Northamptonshire Police launches live facial recognition trial

Northamptonshire Police will roll out live facial recognition cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.

The initiative uses a van on loan from Bedfordshire Police, and the watch-lists include high-risk sex offenders and people wanted for arrest. Facial and biometric data for non-alerts are deleted immediately; alerts are held for up to 24 hours.

Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated equipment in future. A coordinator post is being created to manage the LFR programme in-house.

British campaigners express concern the biometric tool may erode privacy or resemble mass surveillance. Police assert that appropriate signage and open policy documents will be in place to maintain public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poll manipulation by AI threatens democratic accuracy, according to a new study

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
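The arithmetic behind that claim is straightforward. As a toy illustration (the figures below are invented for the example, not taken from the study), injecting a few dozen one-sided responses into a close 1,500-person poll is enough to reverse the leader:

```python
# Toy illustration (not the study's methodology): how a few dozen
# synthetic responses can flip a close poll. All numbers are invented.

def poll_share(votes_a, votes_b):
    """Fraction of responses favouring candidate A."""
    return votes_a / (votes_a + votes_b)

# A close poll of 1,500 respondents: 49% favour A, 51% favour B.
a, b = 735, 765
assert poll_share(a, b) < 0.5  # B leads in the genuine sample

# An attacker injects 60 fake responses, all favouring A --
# only 4% of the final sample, yet enough to reverse the lead.
a += 60
assert poll_share(a, b) > 0.5
print(f"A now at {poll_share(a, b):.1%} of {a + b} responses")
```

The margin here (49% vs 51%) is well within the range seen in competitive races, which is why such small injections matter.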

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention tests, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles rather than exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Google over potential Digital Markets Act breach

The European Commission has opened an investigation into whether Google may be breaching the Digital Markets Act by unfairly demoting news publishers in search results.

The inquiry centres on Google’s ‘site reputation abuse policy’, which appears to lower rankings for publishers that host content from commercial partners, even when those partnerships support legitimate ways of monetising online journalism.

The Commission is examining whether Alphabet’s approach restricts publishers from conducting business, innovating, and cooperating with third-party content providers. Officials highlighted concerns that such demotions may undermine revenue at a difficult moment for the media sector.

These proceedings do not imply a final decision; instead, they allow the EU to gather evidence and assess Google’s practices in detail.

If the Commission finds evidence of non-compliance, it will present preliminary findings and request corrective measures. The investigation is expected to conclude within 12 months.

Under the DMA, infringements can lead to fines of up to ten percent of a company’s worldwide turnover, rising to twenty percent for repeated violations, alongside possible structural remedies.

Senior Commissioners stressed that gatekeepers must offer fair and non-discriminatory access to their platforms. They argued that protecting publishers’ ability to reach audiences supports media pluralism, innovation, and democratic resilience.

Google Search, designated as a core platform service under the DMA, has been required to comply fully with the regulation since March 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. Microsoft’s marketplace is framed as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is pondering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

National internet shutdown grips Tanzania during contested vote

Tanzania is facing a nationwide internet shutdown that began as citizens headed to the polls in a tense general election. Connectivity across the country has been severely disrupted, with platforms like X (formerly Twitter), WhatsApp, and Instagram rendered inaccessible.

The blackout, confirmed by monitoring group NetBlocks, has left journalists, election observers, and citizens in Tanzania struggling to share updates as reports of protests and unrest spread. The government has reportedly deployed the army, deepening concerns over efforts to control information during this volatile period.

The move mirrors a growing global pattern where authorities restrict internet access during elections and political crises to curb dissent and manage narratives. Amnesty International has condemned the shutdown, warning that it risks escalating tensions and violating citizens’ right to information.

‘Authorities must ensure full internet access and allow free reporting before, during, and after the elections,’ said Tigere Chagutah, Amnesty’s Regional Director for East and Southern Africa.

Tanzania’s blackout follows similar crackdowns elsewhere, such as Afghanistan’s total internet shutdown, which left citizens completely cut off from the world.

These incidents underscore the fragility of digital freedoms in times of political turmoil. When governments ‘pull the plug,’ societies lose not only communication but also trust, transparency, and the fundamental ability to hold power to account.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN report shows human cost of Afghan telecommunications shutdowns

A new UN briefing highlights the severe human rights effects of recent telecommunications shutdowns in Afghanistan. The 48-hour nationwide disruption hindered access to healthcare, emergency services, banking, education, and daily communications, worsening the hardships already faced by the population.

Women and girls were disproportionately affected, with restricted contact with guardians preventing travel for essential activities and limiting access to online education. Health workers reported preventable deaths due to the inability to call for emergency assistance, while humanitarian aid was delayed in regions still recovering from natural disasters and involuntary returns from neighbouring countries.

The UN stresses that such shutdowns violate rights to freedom of expression and access to information, and urges authorities to ensure any communication restrictions comply with international human rights standards. Rapid restoration of services and legally justified measures are essential to protect the Afghan population.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Big Tech ramps up Brussels lobbying as EU considers easing digital rules

Tech firms now spend a record €151 million a year on lobbying at EU institutions, up from €113 million in 2023, according to transparency-register analysis by Corporate Europe Observatory and LobbyControl.

Spending is concentrated among US giants. The ten biggest tech companies, including Meta, Microsoft, Apple, Amazon, Qualcomm and Google, together outspend the top ten in pharma, finance and automotive. Meta leads with a budget above €10 million.

An estimated 890 full-time lobbyists now work to influence tech policy in Brussels, up from 699 in 2023, with 437 holding European Parliament access badges. In the first half of 2025, companies declared 146 meetings with the Commission and 232 with MEPs, with artificial intelligence regulation and the industry code of practice frequently on the agenda.

As industry pushes back on the Digital Markets Act and Digital Services Act and the Commission explores the ‘simplification’ of EU rulebooks, lobbying transparency campaigners fear a rollback of the progress made in regulating the digital sector. Companies, by contrast, argue that lobbying helps lawmakers grasp complex markets and assess impacts on innovation and competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!