AI-generated Jesuses spark concern over faith and bias

AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.

Several platforms, including Character.AI, Talkie.AI and Text With Jesus, now host simulations claiming to answer questions in the voice of Jesus Christ.

Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.

Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.

Researchers say AI chatbots are currently used as a supplement rather than a replacement for religious teaching.

However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.

Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT becomes more customisable for tone and style

OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.

ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows previous adjustments, where OpenAI first dialled back perceived agreeableness, then later increased warmth after users said the system felt overly cold.

Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.

Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.

The new tone controls continue broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan investigates AI search services over news use

The Japan Fair Trade Commission (JFTC) announced it will investigate AI-based online search services over concerns that using news articles without permission could violate antitrust laws.

Authorities said such practices may amount to an abuse of a dominant bargaining position under Japan’s antimonopoly regulations.

The inquiry is expected to examine services from global tech firms, including Google, Microsoft and OpenAI, developer of ChatGPT, as well as US startup Perplexity AI and Japanese company LY Corp. AI search tools summarise online content, including news articles, raising concerns about their effect on media revenue.

The Japan Newspaper Publishers and Editors Association warned AI summaries may reduce website traffic and media revenue. JFTC Secretary General Hiroo Iwanari said generative AI is evolving quickly, requiring careful review to keep up with technological change.

The investigation reflects growing global scrutiny of AI services and their interaction with content providers, with regulators increasingly assessing the balance between innovation and fair competition in digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea fake news law sparks fears for press freedom

A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.

The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.

Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.

Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.

Experts also highlight that South Korea lacks strong safeguards against malicious litigation, in contrast to the US, where plaintiffs must prove fault on the part of journalists.

The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes rely on relaying statements without sufficient verification, suggesting that structural reform may be needed instead of rapid, punitive legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta restricts Congress AI videos in India

Meta has restricted access in India to two AI-generated videos posted by the Congress party. The clips depicted Prime Minister Narendra Modi alongside Gautam Adani, Chairman of the Adani Group.

The company stated that the content did not violate its community standards, but restricted it after takedown notices were issued by Delhi Police under India’s information technology laws.

Meta warned that ignoring the orders could jeopardise safe harbour protections. Loss of those protections would expose platforms to direct legal liability.

The case highlights growing scrutiny of political AI content in India. Recent rule changes have tightened procedures for ordering online takedowns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes journalism faster than public perception

AI is transforming how news is produced and consumed, moving faster than audiences and policies can adapt. Journalists increasingly use AI for research, transcription and content optimisation, creating new trust challenges.

Ethical concerns are rising when AI misrepresents events or uses content without consent. Media organisations have introduced guidelines, but experts warn that rules alone cannot cover every scenario.

Audience scepticism remains, even as journalists adopt AI tools in daily practice. Transparency, visible human oversight and ethical adoption are key to maintaining credibility and legitimacy.

Europe faces pressure to strengthen its trust infrastructure and regulate the use of AI in newsrooms. Experts argue that democratic stability depends on informed audiences and resilient journalism to counter disinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fuels online abuse of women in public life

Generative AI is increasingly being weaponised to harass women in public roles, according to a new report commissioned by UN Women. Journalists, activists, and human rights defenders face AI-assisted abuse that endangers personal safety and democratic freedoms.

The study surveyed 641 women from 119 countries and found that nearly one in four of those experiencing online violence reported AI-generated or amplified abuse.

Writers, communicators, and influencers reported the highest exposure, with human rights defenders and journalists also at significant risk. Rapidly developing AI tools, including deepfake generators, facilitate the creation of harmful content that spreads quickly on social media.

Online attacks often escalate into offline harm, with 41% of women linking online abuse to physical harassment, stalking, or intimidation. Female journalists are particularly affected, with offline attacks more than doubling over five years.

Experts warn that such violence threatens freedom of expression and democratic processes, particularly in authoritarian contexts.

Researchers call for urgent legal frameworks, platform accountability, and technological safeguards to prevent AI-assisted attacks on women. They advocate for human rights-focused AI design and stronger support systems to protect women in public life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated video falsely claims US military to ‘take over’ Nigerian army

A video circulating online, purporting to show a US military officer announcing that the United States would take control of the Nigerian Army, is false.

Independent analysis has revealed that the clip was likely generated or heavily manipulated using AI, and no official announcement or credible source supports this claim.

Fact-checkers used AI-detection tools and found high levels of manipulation, and investigations uncovered inconsistencies in uniform insignia and microphones linked to non-existent media outlets. No verified reports indicate that US military forces are intervening in Nigerian defence operations.

The false claim has spread on platforms including X (formerly Twitter), generating alarm and confusion about foreign military involvement in Nigeria.

Experts warn that deepfakes and AI-generated misinformation are becoming harder to spot without specialised tools and verification.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes media in North Macedonia with new regulatory guidance

A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.

Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.

The research, supported by the EU and Council of Europe’s PRO-FREX initiative and in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.

It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on AI and Human Rights, and guidance on responsible AI in journalism.

AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact, protecting media diversity, journalistic standards, and public trust online.

The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society in ways that contrast with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!