UNESCO strengthens Caribbean disaster reporting

UNESCO has launched a regional programme to improve disaster reporting across the Caribbean in the wake of Hurricane Melissa and amid rising misinformation.

The initiative equips journalists and emergency communicators with advanced tools such as AI, drones and geographic information systems to support accurate and ethical communication.

The 30-hour online course, funded through UNESCO’s Media Development Programme, brings together 23 participants from 10 Caribbean countries and territories.

Delivered in partnership with GeoTechVision/Jamaica Flying Labs, the training combines practical exercises with disaster simulations to help participants map hazards, collect aerial evidence and verify information using AI-supported methods.

Participants explore geospatial mapping, drone use and ethics while completing a capstone project set in realistic scenarios. The programme aims to address gaps revealed by recent disasters and strengthen the region’s ability to deliver trusted information.

UNESCO’s wider Media in Crisis Preparedness and Response programme supports resilient media institutions, ensuring that communities receive timely and reliable information before, during and after crises.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Compared with earlier surveys, researchers appear better informed and more willing to express firm views: the share who see a positive effect has risen notably, while a large group voices strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies while aiming to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Real-time journalism becomes central to Meta AI strategy

Meta has signed commercial agreements with news publishers to feed real-time reporting into Meta AI, enabling its chatbot to answer news-related queries with up-to-date information from multiple editorial sources.

The company said responses will include links to full articles, directing users to publishers’ websites and helping partners reach new audiences beyond traditional platform distribution.

Initial partners span US and international outlets, covering global affairs, politics, entertainment, and sports, with Meta signalling that additional publishing deals are in the works.

The shift marks a recalibration. Meta previously reduced its emphasis on news across Facebook and ended most publisher payments, but now sees licensed reporting as essential to improving AI accuracy and relevance.

Facing intensifying competition in the AI market, Meta is positioning real-time journalism as a differentiator for its chatbot, which is available across its apps and to users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Northamptonshire Police launches live facial recognition trial

Northamptonshire Police will roll out live facial recognition (LFR) cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.

The initiative uses a van on loan from Bedfordshire Police, and the watchlists include high-risk sex offenders and people wanted for arrest. Facial and biometric data for non-alerts are deleted immediately; alert data are held for up to 24 hours.
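For illustration, that retention rule maps onto a simple data-handling pattern: never persist non-alert captures, and purge alert records after 24 hours. The Python sketch below is a hypothetical rendering of the stated policy, not Northamptonshire Police’s actual system; every name in it is invented.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the stated retention rule: non-alert captures
# are never stored (deleted immediately), and alert records are purged
# after at most 24 hours.

ALERT_RETENTION = timedelta(hours=24)

def process_capture(face_data: bytes, matches_watchlist: bool, store: list) -> None:
    """Store a capture only if it triggered a watchlist alert."""
    if matches_watchlist:
        store.append({"data": face_data, "captured_at": datetime.now(timezone.utc)})
    # Non-alert captures fall through unsaved, i.e. deleted immediately.

def purge_expired(store: list) -> None:
    """Drop alert records older than the 24-hour retention window."""
    cutoff = datetime.now(timezone.utc) - ALERT_RETENTION
    store[:] = [rec for rec in store if rec["captured_at"] >= cutoff]
```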

Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated kit in future. A coordinator post is being created to manage the LFR programme in-house.

British campaigners have expressed concern that the biometric tool may erode privacy or amount to mass surveillance. Police say appropriate signage and open policy documents will be in place to maintain public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI poll manipulation threatens democratic accuracy, new study finds

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention checks, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles rather than exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Google over potential Digital Markets Act breach

The European Commission has opened an investigation into whether Google may be breaching the Digital Markets Act by unfairly demoting news publishers in search results.

The inquiry centres on Google’s ‘site reputation abuse’ policy, which appears to lower rankings for publishers that host content from commercial partners, even when those partnerships support legitimate ways of monetising online journalism.

The Commission is examining whether Alphabet’s approach restricts publishers from conducting business, innovating, and cooperating with third-party content providers. Officials highlighted concerns that such demotions may undermine revenue at a difficult moment for the media sector.

These proceedings do not imply a final decision; instead, they allow the EU to gather evidence and assess Google’s practices in detail.

If the Commission finds evidence of non-compliance, it will present preliminary findings and request corrective measures. The investigation is expected to conclude within 12 months.

Under the DMA, infringements can lead to fines of up to 10% of a company’s worldwide turnover, rising to 20% for repeated violations, alongside possible structural remedies.

Senior Commissioners stressed that gatekeepers must offer fair and non-discriminatory access to their platforms. They argued that protecting publishers’ ability to reach audiences supports media pluralism, innovation, and democratic resilience.

Google Search, designated as a core platform service under the DMA, has been required to comply fully with the regulation since March 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals, from 54% of traffic two years ago to 24% last quarter, which they attribute to AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. Microsoft’s marketplace is framed as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.
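For readers curious how such crawler gating works in practice, it typically rests on identifying a bot’s User-Agent string and refusing unlicensed requests. The Python sketch below is a minimal, hypothetical illustration, not a description of People Inc.’s or Cloudflare’s actual setup; GPTBot, CCBot and ClaudeBot are real published AI-crawler user agents, but the licence-token check is invented.

```python
# Hypothetical user-agent gating for AI crawlers. The user-agent
# substrings are real published crawler identifiers; the licence-token
# check is purely illustrative.

AI_CRAWLER_SUBSTRINGS = ("GPTBot", "CCBot", "ClaudeBot")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent matches a known AI crawler."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in AI_CRAWLER_SUBSTRINGS)

def handle_request(user_agent: str, licence_token: str | None = None) -> int:
    """Return an HTTP status: 402 for unlicensed AI crawlers, 200 otherwise."""
    if is_ai_crawler(user_agent) and licence_token is None:
        return 402  # Payment Required: content only available under licence
    return 200

# An unlicensed GPTBot request is refused; an ordinary browser is served.
assert handle_request("Mozilla/5.0 (compatible; GPTBot/1.0)") == 402
assert handle_request("Mozilla/5.0 (Windows NT 10.0)") == 200
```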

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is weighing whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!