AI search services face competition probe in Japan

Japan’s competition watchdog, the Japan Fair Trade Commission, will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.

The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.

Japanese news organisations have taken legal action alleging that AI services use their articles without authorisation. Regulators are assessing whether such practices constitute an abuse of market dominance.

The inquiry builds on a 2023 review of news distribution contracts that warned against the use of unfair terms for publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes journalism faster than public perception

AI is transforming how news is produced and consumed, moving faster than audiences and policies can adapt. Journalists increasingly use AI for research, transcription and content optimisation, creating new trust challenges.

Ethical concerns arise when AI misrepresents events or uses content without consent. Media organisations have introduced guidelines, but experts warn that rules alone cannot cover every scenario.

Audience scepticism remains, even as journalists adopt AI tools in daily practice. Transparency, visible human oversight and ethical adoption are key to maintaining credibility and legitimacy.

Europe faces pressure to strengthen its trust infrastructure and regulate the use of AI in newsrooms. Experts argue that democratic stability depends on informed audiences and resilient journalism to counter disinformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI fuels online abuse of women in public life

Generative AI is increasingly being weaponised to harass women in public roles, according to a new report commissioned by UN Women. Journalists, activists, and human rights defenders face AI-assisted abuse that endangers personal safety and democratic freedoms.

The study surveyed 641 women from 119 countries and found that nearly one in four of those experiencing online violence reported AI-generated or amplified abuse.

Writers, communicators, and influencers reported the highest exposure, with human rights defenders and journalists also at significant risk. Rapidly developing AI tools, including deepfakes, facilitate the creation of harmful content that spreads quickly on social media.

Online attacks often escalate into offline harm, with 41% of women linking online abuse to physical harassment, stalking, or intimidation. Female journalists are particularly affected, with offline attacks more than doubling over five years.

Experts warn that such violence threatens freedom of expression and democratic processes, particularly in authoritarian contexts.

Researchers call for urgent legal frameworks, platform accountability, and technological safeguards to prevent AI-assisted attacks on women. They advocate for human rights-focused AI design and stronger support systems to protect women in public life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes media in North Macedonia with new regulatory guidance

A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.

Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.

The research, supported by the EU and Council of Europe’s PRO-FREX initiative and in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.

It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, and guidance on the responsible use of AI in journalism.

AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact and to protect media diversity, journalistic standards, and public trust online.

The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands AI training for newsrooms worldwide

US tech company OpenAI has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO strengthens Caribbean disaster reporting

UNESCO has launched a regional programme to improve disaster reporting across the Caribbean in the wake of Hurricane Melissa and amid rising misinformation.

The initiative equips journalists and emergency communicators with advanced tools such as AI, drones and geographic information systems to support accurate and ethical communication.

The 30-hour online course, funded through UNESCO’s Media Development Programme, brings together 23 participants from 10 Caribbean countries and territories.

Delivered in partnership with GeoTechVision/Jamaica Flying Labs, the training combines practical exercises with disaster simulations to help participants map hazards, collect aerial evidence and verify information using AI-supported methods.

Participants explore geospatial mapping, drone use and ethics while completing a capstone project in realistic scenarios. The programme aims to address gaps revealed by recent disasters and strengthen the region’s ability to deliver trusted information.

UNESCO’s wider Media in Crisis Preparedness and Response programme supports resilient media institutions, ensuring that communities receive timely and reliable information before, during and after crises.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Researchers appear more informed and more willing to express firm views, with a notable rise in those who see a positive effect and a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, yet they aim to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Real-time journalism becomes central to Meta AI strategy

Meta has signed commercial agreements with news publishers to feed real-time reporting into Meta AI, enabling its chatbot to answer news-related queries with up-to-date information from multiple editorial sources.

The company said responses will include links to full articles, directing users to publishers’ websites and helping partners reach new audiences beyond traditional platform distribution.
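Meta has not published technical details, but the behaviour described, grounding answers in licensed reporting and linking out to full articles, resembles a retrieval-augmented setup. The sketch below is a minimal illustration under that assumption; the index, ranking and answer format are all hypothetical.

```python
# Minimal retrieval-grounded sketch. Meta has not disclosed its pipeline;
# the index, scoring and answer format here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    url: str
    snippet: str

# Stand-in for an index of licensed publisher content (hypothetical data).
LICENSED_INDEX = [
    Article("Example election wrap", "https://example.com/election",
            "Results and reactions from yesterday's vote..."),
    Article("Example sports recap", "https://example.com/sports",
            "Highlights from the weekend's fixtures..."),
]

def retrieve(query: str, k: int = 3) -> list[Article]:
    """Naive keyword match; a production system would use a real search index."""
    terms = query.lower().split()
    scored = [(sum(t in (a.title + a.snippet).lower() for t in terms), a)
              for a in LICENSED_INDEX]
    return [a for score, a in sorted(scored, key=lambda x: -x[0]) if score > 0][:k]

def answer(query: str) -> str:
    articles = retrieve(query)
    if not articles:
        return "No licensed coverage found."
    # In a real system an LLM would draft a summary grounded in the snippets;
    # here we only show the grounding plus the outbound links Meta describes.
    body = " ".join(a.snippet for a in articles)
    links = "\n".join(f"- {a.title}: {a.url}" for a in articles)
    return f"{body}\n\nRead more:\n{links}"

print(answer("election results"))
```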

Initial partners span US and international outlets, covering global affairs, politics, entertainment, and sports, with Meta signalling that additional publishing deals are in the works.

The shift marks a recalibration. Meta previously reduced its emphasis on news across Facebook and ended most publisher payments, but now sees licensed reporting as essential to improving AI accuracy and relevance.

Facing intensifying competition in the AI market, Meta is positioning real-time journalism as a differentiator for its chatbot, which is available across its apps and to users worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Northamptonshire Police launch live facial recognition trial

Northamptonshire Police will roll out live facial recognition (LFR) cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.

The initiative uses a van on loan from Bedfordshire Police, and the watch lists include high-risk sex offenders and people wanted for arrest. Facial and biometric data for non-alerts are deleted immediately; alert data are held for up to 24 hours.

Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated kit in future. A coordinator post is being created to manage the LFR programme in-house.

British campaigners are concerned that the biometric tool may erode privacy or amount to mass surveillance. Police assert that appropriate signage and open policy documents will be in place to maintain public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven poll manipulation threatens survey accuracy, new study finds

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
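To illustrate how little contamination is needed, here is a back-of-the-envelope sketch in Python; the sample size and vote shares are hypothetical, not figures from the study.

```python
# Illustrative arithmetic only; the sample size and margins below are
# hypothetical, not taken from the Dartmouth study.

n_real = 1200          # genuine respondents
a_real = 594           # support candidate A -> 49.5%
b_real = 606           # support candidate B -> 50.5% (B leads)

n_fake = 40            # a few dozen synthetic responses, all favouring A

a_total = a_real + n_fake
n_total = n_real + n_fake

print(f"Before injection: A {a_real / n_real:.1%}, B {b_real / n_real:.1%}")
print(f"After injection:  A {a_total / n_total:.1%}, B {b_real / n_total:.1%}")
# Before injection: A 49.5%, B 50.5%
# After injection:  A 51.1%, B 48.9%  -> the predicted winner flips
```

In this illustration, roughly 3% contamination reverses a one-point lead, consistent with the finding that a few dozen synthetic responses can flip a headline result.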

The study reveals how easily malicious actors could influence democratic processes. AI models can operate in multiple languages yet deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention tests, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles without revealing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!