ChatGPT becomes more customisable for tone and style

OpenAI has introduced new Personalisation settings in ChatGPT that allow users to fine-tune warmth, enthusiasm and emoji use. The changes are designed to make conversations feel more natural, instead of relying on a single default tone.

ChatGPT users can set each element to More, Less or Default, alongside existing tone styles such as Professional, Candid and Quirky. The update follows earlier adjustments in which OpenAI first dialled back perceived agreeableness and later increased warmth after users said the system felt overly cold.

Experts have raised concerns that highly agreeable AI could encourage emotional dependence, even as users welcome a more flexible conversational style.

Some commentators describe the feature as empowering, while others question whether customising a chatbot’s personality risks blurring emotional boundaries.

The new tone controls continue broader industry debates about how human-like AI should become. OpenAI hopes that added transparency and user choice will balance personal preference with responsible design, instead of encouraging reliance on a single conversational style.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan investigates AI search services over news use

The Japan Fair Trade Commission (JFTC) announced it will investigate AI-based online search services over concerns that using news articles without permission could violate antitrust laws.

Authorities said such practices may amount to an abuse of a dominant bargaining position under Japan’s antimonopoly regulations.

The inquiry is expected to examine services from global tech firms including Google and Microsoft, OpenAI’s ChatGPT, US startup Perplexity AI and Japanese company LY Corp. AI search tools summarise online content, including news articles, raising concerns about their effect on media revenue.

The Japan Newspaper Publishers and Editors Association warned AI summaries may reduce website traffic and media revenue. JFTC Secretary General Hiroo Iwanari said generative AI is evolving quickly, requiring careful review to keep up with technological change.

The investigation reflects growing global scrutiny of AI services and their interaction with content providers, with regulators increasingly assessing the balance between innovation and fair competition in digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT may move beyond GPTs as OpenAI develops new Skills feature

OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.

Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.

The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.

Support for running executable code is also expected, providing the model with stronger reliability for logic-driven work, rather than relying entirely on generated text.
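To make the modular idea concrete, here is a minimal Python sketch of how such a system could be organised: a registry keeps each skill’s short description always available, pulls a skill’s full instructions into the prompt only when a task matches it, and can call an executable handler for deterministic work. All names here (Skill, SkillRegistry, triggers, handler) are invented for illustration; nothing about the rumoured Hazelnut project’s actual implementation is public.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical illustration only: Skill, SkillRegistry, triggers and handler are
# invented names for this sketch and say nothing about OpenAI's actual design.

@dataclass
class Skill:
    name: str
    description: str                      # short summary, always cheap to show
    instructions: str                     # detailed guidance, loaded only on demand
    triggers: list = field(default_factory=list)
    handler: Optional[Callable[[str], str]] = None   # optional executable step

class SkillRegistry:
    def __init__(self) -> None:
        self._skills = []

    def register(self, skill: Skill) -> None:
        self._skills.append(skill)

    def select(self, task: str) -> list:
        """Pick only the skills whose trigger words appear in the task."""
        task_lower = task.lower()
        return [s for s in self._skills if any(t in task_lower for t in s.triggers)]

    def build_context(self, task: str) -> str:
        """Assemble instructions lazily, so unused skills consume no context."""
        return "\n\n".join(f"## {s.name}\n{s.instructions}" for s in self.select(task))

# Example: an invoice skill combining written guidance with a deterministic step.
registry = SkillRegistry()
registry.register(Skill(
    name="invoice-formatter",
    description="Formats line items into a plain-text invoice.",
    instructions="Show totals with two decimal places and an explicit currency code.",
    triggers=["invoice"],
    handler=lambda items: f"TOTAL: {sum(float(x) for x in items.split(',')):.2f} EUR",
))

task = "Create an invoice for these items"
print(registry.build_context(task))                   # only matching instructions load
print(registry.select(task)[0].handler("19.99,5.01")) # executable step: TOTAL: 25.00 EUR
```

The point of the sketch is the reported design goal: descriptions stay lightweight and permanent, while full instructions and executable logic are attached to a task only when needed, rather than occupying the context window by default.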

Industry observers note similarities to Anthropic’s Claude, which already uses a similar skills-based structure. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion from existing GPTs.

Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy orders Meta to lift WhatsApp AI restrictions

Italy’s competition authority has ordered Meta to halt restrictions limiting rival AI chatbots on WhatsApp. Regulators say the measures may distort competition as Meta integrates its own AI services.

The Italian watchdog argues that Meta’s conduct risks restricting market access and slowing technical development. Officials warned that keeping the restrictions in place could cause lasting harm to competition and consumer choice.

Meta rejected the ruling and confirmed plans to appeal, calling the decision unfounded. The company stated that WhatsApp Business was never intended to serve as a distribution platform for AI services.

The case forms part of a broader European push to scrutinise dominant tech firms. Regulators are increasingly focused on the integration of AI across platforms with entrenched market power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nomani investment scam spreads across social media

The fraudulent investment platform Nomani has surged, spreading from Facebook to YouTube. ESET blocked tens of thousands of malicious links this year, mainly in the Czech Republic, Japan, Slovakia, Spain, and Poland.

The scam utilises AI-generated videos, branded posts, and social media advertisements to lure victims into fake investments that promise high returns. Criminals then request extra fees or sensitive personal data, and often attempt a secondary scam posing as Europol or INTERPOL.

Recent improvements make Nomani’s AI videos more realistic, using trending news or public figures to appear credible. Campaigns run briefly and misuse social media forms and surveys to harvest information while avoiding detection.

Despite the scam’s overall growth, detections fell 37% in the second half of 2025, suggesting that scammers are adapting to more stringent law enforcement measures. Meta’s ad platforms have earned billions from scam advertising, underscoring the global reach of Nomani-style fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deutsche Bank warns on scale of AI spending

Deutsche Bank has warned that surging AI investment is helping to prop up US economic growth. Analysts say that broader spending would have stalled without the heavy outlays on technology.

The bank estimates hyperscalers could spend $4 trillion on AI data centres by 2030. Analysts cautioned returns remain uncertain despite the scale of investment.

Official data showed US GDP grew at a 4.3% annualised rate in the third quarter. Economists linked much of the momentum to AI-driven capital expenditure.

Market experts remain divided on risks, although many reject fears of a bubble. Corporate cash flows, rather than excessive borrowing, are funding the majority of AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creators embrace AI music on YouTube

Increasingly, YouTube creators are using AI-generated music to enhance video quality while saving time and cutting costs. Selecting tracks that align with the content’s tone and audience expectations is crucial for engagement.

Subtle, balanced music supports narration without distraction and guides viewers through sections. Thoughtful use of intros, transitions and outros builds channel identity and reinforces branding.

Customisation tools allow creators to adjust tempo, mood and intensity for better pacing and cohesion with visuals. Testing multiple versions ensures the music feels natural and aligns with storytelling.

Understanding licensing terms protects monetisation and avoids copyright issues. Combining AI music with creative judgement keeps content authentic and original while maximising production impact.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta restricts Congress AI videos in India

Meta has restricted access in India to two AI-generated videos posted by the Congress party. The clips depicted Prime Minister Narendra Modi alongside Gautam Adani, Chairman of the Adani Group.

The company stated that the content did not violate its community standards, but it acted on takedown notices issued by Delhi Police under India’s information technology laws.

Meta warned that ignoring the orders could jeopardise safe harbour protections. Loss of those protections would expose platforms to direct legal liability.

The case highlights growing scrutiny of political AI content in India. Recent rule changes have tightened procedures for ordering online takedowns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI search services face competition probe in Japan

Japan’s competition authority will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.

The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.

Legal action by Japanese news organisations alleges unauthorised use of articles by AI services. Regulators are assessing whether such practices constitute abuse of market dominance.

The inquiry builds on a 2023 review of news distribution contracts that warned against imposing unfair terms on publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Chest X-rays gain new screening potential through AI

AI is extending the clinical value of chest X-rays beyond lung and heart assessment. Researchers are investigating whether routine radiographs can support broader disease screening without the need for additional scans. Early findings suggest existing images may contain underused diagnostic signals.

A study in Radiology: Cardiothoracic Imaging examined whether AI could detect hepatic steatosis from standard frontal chest X-rays. Researchers analysed more than 6,500 images from over 4,400 patients across two institutions. Deep learning models were trained and externally validated.

The AI system achieved area under the curve (AUC) scores above 0.8 in both internal and external tests. Saliency maps showed that predictions focused near the diaphragm, where part of the liver appears on chest X-rays. The results suggest that reliable signals can be extracted from routine imaging.
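As a rough illustration of the technique (not the authors’ actual pipeline, whose architecture, preprocessing and hyperparameters are not detailed here), the Python sketch below fine-tunes a standard convolutional backbone as a binary steatosis classifier and computes an AUC; the images and labels are random placeholders standing in for real radiographs and reference-standard labels.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from sklearn.metrics import roc_auc_score

# Placeholder data: real work would load preprocessed frontal chest radiographs
# and steatosis labels derived from a reference standard such as CT or ultrasound.
images = torch.randn(64, 3, 224, 224)                  # stand-in for X-ray tensors
labels = torch.cat([torch.zeros(32), torch.ones(32)])  # 1 = hepatic steatosis present
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

# Transfer learning: reuse a standard backbone and replace the head with one logit.
model = resnet18(weights=None)        # weights="IMAGENET1K_V1" when pretraining is wanted
model.fc = nn.Linear(model.fc.in_features, 1)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

model.train()
for epoch in range(2):                # toy loop; the study trained on thousands of images
    for x, y in loader:
        optimiser.zero_grad()
        loss = criterion(model(x).squeeze(1), y)
        loss.backward()
        optimiser.step()

# External validation would repeat the evaluation on a second institution's images;
# the reported AUC above 0.8 corresponds to roc_auc_score on such unseen cases.
model.eval()
with torch.no_grad():
    scores = torch.sigmoid(model(images).squeeze(1)).numpy()
print("AUC on placeholder data:", roc_auc_score(labels.numpy(), scores))
```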

Researchers argue the approach could enable opportunistic screening during standard care. Patients flagged by AI could be referred for a dedicated liver assessment when appropriate. The method adds clinical value without increasing imaging costs or radiation exposure.

Experts caution that the model is not a standalone diagnostic tool and requires further prospective validation. Integration with clinical and laboratory data remains necessary to reduce false positives. If validated, AI-enhanced X-rays could support scalable risk stratification.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!