AI chatbots modelled on Jesus are becoming increasingly popular over Christmas, offering companionship or faith guidance to people who may feel emotionally vulnerable during the holidays.
Experts warn that such tools could gradually reshape religious belief and practice. Training data is controlled by a handful of technology firms, which means AI systems may produce homogenised and biased interpretations instead of reflecting the diversity of real-world faith communities.
Users who are young or unfamiliar with AI may also struggle to judge the accuracy or intent behind the answers they receive.
Researchers say AI chatbots are currently used as a supplement rather than a replacement for religious teaching.
However, concern remains that people may begin to rely on AI for spiritual reassurance during sensitive moments. Scholars recommend limiting use over the holidays and prioritising conversations with family, friends or trusted religious leaders instead of seeking emotional comfort from a chatbot.
Experts also urge users to reflect carefully on who designs these systems and why. Fact-checking answers and grounding faith in recognised sources may help reduce the risk of distortion as AI plays a growing role in people’s daily lives.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.
ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows previous adjustments, where OpenAI first dialled back perceived agreeableness, then later increased warmth after users said the system felt overly cold.
Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.
Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.
The new tone controls feed into broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has welcomed Apple’s latest interoperability updates in iOS 26.3, crediting the Digital Markets Act for compelling the company to open its ecosystem.
The new features are currently in beta and allow third-party accessories to integrate more smoothly with iPhones and iPads, instead of favouring Apple’s own devices.
Proximity pairing will let headphones and other accessories connect through a simplified one-tap process, similar to AirPods. Notification forwarding to non-Apple wearables will also become available, although alerts can only be routed to one device at a time.
Apple is providing developers with the tools needed to support the features, which apply only within the EU.
The DMA classifies Apple as a gatekeeper and requires fairer access for rivals, with heavy financial penalties for non-compliance.
Apple has repeatedly warned that the rules risk undermining security and privacy, yet the company has already introduced DMA-driven changes such as allowing alternative app stores and opening NFC access.
Analysts expect the moves to reduce ecosystem lock-in and increase competition across the EU market. iOS 26.3 is expected to roll out fully across Europe from 2026 following the beta cycle, while further regulatory scrutiny may push Apple to extend interoperability even further.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Japan Fair Trade Commission (JFTC) announced it will investigate AI-based online search services over concerns that using news articles without permission could violate antitrust laws.
Authorities said such practices may amount to an abuse of a dominant bargaining position under Japan’s antimonopoly regulations.
The inquiry is expected to examine services from global tech firms, including Google, Microsoft, and OpenAI’s ChatGPT, as well as US startup Perplexity AI and Japanese company LY Corp. AI search tools summarise online content, including news articles, raising concerns about their effect on media revenue.
The Japan Newspaper Publishers and Editors Association warned AI summaries may reduce website traffic and media revenue. JFTC Secretary General Hiroo Iwanari said generative AI is evolving quickly, requiring careful review to keep up with technological change.
The investigation reflects growing global scrutiny of AI services and their interaction with content providers, with regulators increasingly assessing the balance between innovation and fair competition in digital markets.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information; however, the company initially did not specify the number of individuals affected.
In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.
A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.
Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.
The US company stated that it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is extending the clinical value of chest X-rays beyond lung and heart assessment. Researchers are investigating whether routine radiographs can support broader disease screening without the need for additional scans. Early findings suggest existing images may contain underused diagnostic signals.
A study in Radiology: Cardiothoracic Imaging examined whether AI could detect hepatic steatosis from standard frontal chest X-rays. Researchers analysed more than 6,500 images from over 4,400 patients across two institutions. Deep learning models were trained and externally validated.
The AI system achieved area-under-curve scores above 0.8 in both internal and external tests. Saliency maps showed predictions focused near the diaphragm, where part of the liver appears on chest X-rays. Results suggest that reliable signal extraction can be achieved from routine imaging.
Researchers argue the approach could enable opportunistic screening during standard care. Patients flagged by AI could be referred for a dedicated liver assessment when appropriate. The method adds clinical value without increasing imaging costs or radiation exposure.
Experts caution that the model is not a standalone diagnostic tool and requires further prospective validation. Integration with clinical and laboratory data remains necessary to reduce false positives. If validated, AI-enhanced X-rays could support scalable risk stratification.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new research paper published in Aging-US uses AI to map a century of global ageing research. The study analyses how scientific priorities have shifted over time. Underexplored areas are also identified.
Researchers analysed more than 460,000 scientific abstracts published between 1925 and 2023. Natural language processing and machine learning were used to cluster themes and track trends. The aim was to provide an unbiased view of the field’s evolution.
Findings show a shift from basic biological studies toward clinical research, particularly age-related diseases such as Alzheimer’s and dementia. Basic science continues to focus on cellular mechanisms. Limited overlap persists between laboratory and clinical research.
Several fast-growing topics, including autophagy, RNA biology, and nutrient sensing, remain weakly connected to clinical applications. Strong links endure in areas such as cancer and ageing, while other associations, such as those between epigenetics and autophagy, are rarely explored.
The analysis highlights gaps that may shape future ageing research priorities. AI-based mapping provides insights into how funding and policy shape focus areas. Greater integration could support more effective translation into clinical outcomes.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.
Officials said criminals have been using stolen personal details to set up phone numbers that are later used for scams such as voice phishing rather than for legitimate services.
Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will validate users by matching their faces against biometric data stored in the PASS digital identity app.
The requirement expands the country’s existing identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.
The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.
SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.
Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Prime Minister Kim Min-seok has called for punitive fines of up to 10 percent of company sales for repeated and serious data breaches, as public anger grows over large-scale leaks.
The government is seeking swift legislation to impose stronger sanctions on firms that fail to safeguard personal data, reflecting President Lee Jae Myung’s stance that violations require firm penalties instead of lenient warnings.
Kim said corporate responses to recent breaches had fallen far short of public expectations and stressed that companies must take full responsibility for protecting customer information.
Under the proposed framework, affected individuals would receive clearer notifications that include guidance on their rights to seek damages.
The government of South Korea also plans to strengthen investigative powers through coercive fines for noncompliance, while pursuing rapid reforms aimed at preventing further harm.
The tougher line follows a series of major incidents, including a leak at Shinhan Card that affected around 190,000 merchant records and a large-scale breach at Coupang that exposed the data of 33.7 million users.
Officials have described the Coupang breach as a serious social crisis that has eroded public trust.
Authorities have launched an interagency task force to identify responsibility and ensure tighter data protection across South Korea’s digital economy instead of relying on voluntary company action.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the Icahn School of Medicine at Mount Sinai have developed an AI tool capable of predicting which critically ill ventilated patients may be underfed, potentially enabling earlier nutritional intervention in intensive care units.
The model, NutriSighT, analyses routine ICU data, including vital signs, lab results, medications, and feeding information. Predictions are updated every four hours, allowing clinicians to identify patients at risk of underfeeding during days three to seven of ventilation.
The study found that 41–53% of patients were underfed by day three, while 25–35% remained underfed by day seven.
The model is dynamic and interpretable, highlighting key factors such as blood pressure, sodium levels, and sedation that influence underfeeding risk. Researchers emphasise that NutriSighT supports personalised nutrition and guides clinical decisions without replacing medical judgement.
Future research will focus on prospective multi-site trials, integration with electronic health records, and expansion to broader, individualised nutrition targets. Investigators hope these advances will enhance patient outcomes and enable more tailored ICU care.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!