ChatGPT may move beyond GPTs as OpenAI develops new Skills feature

OpenAI is said to be testing a new feature for ChatGPT that would mark a shift from Custom GPTs toward a more modular system of Skills.

Reports suggest the project, internally codenamed Hazelnut, will allow users and developers to teach the AI model standalone abilities, workflows and domain knowledge instead of relying only on role-based configurations.

The Skills framework is designed to allow multiple abilities to be combined automatically when a task requires them. The system aims to increase portability across the web version, desktop client and API, while loading instructions only when needed instead of consuming the entire context window.

Support for running executable code is also expected, making the model more reliable for logic-driven work than purely generated text.

Industry observers note similarities to Anthropic’s Claude, which already offers a comparable Skills system. Further features are expected to include slash-command interactions, a dedicated Skill editor and one-click conversion of existing GPTs.

Market expectations point to an early 2026 launch, signalling a move toward ChatGPT operating as an intelligent platform rather than a traditional chatbot.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy orders Meta to lift WhatsApp AI restrictions

Italy’s competition authority has ordered Meta to halt restrictions limiting rival AI chatbots on WhatsApp. Regulators say the measures may distort competition as Meta integrates its own AI services.

The Italian watchdog argues Meta’s conduct risks restricting market access and slowing technical development. Officials warned that continued enforcement could cause lasting harm to competition and consumer choice.

Meta rejected the ruling and confirmed plans to appeal, calling the decision unfounded. The company stated that WhatsApp Business was never intended to serve as a distribution platform for AI services.

The case forms part of a broader European push to scrutinise dominant tech firms. Regulators are increasingly focused on the integration of AI across platforms with entrenched market power.

South Korea fake news law sparks fears for press freedom

A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.

The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.

Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.

Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.

Experts also highlight the lack of strong safeguards in South Korea against malicious litigation compared with the US, where plaintiffs must prove fault by journalists.

The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes relay statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.

Nomani investment scam spreads across social media

Fraudulent investment platform Nomani has surged, spreading from Facebook to YouTube. ESET blocked tens of thousands of malicious links this year, mainly in the Czech Republic, Japan, Slovakia, Spain, and Poland.

The scam utilises AI-generated videos, branded posts, and social media advertisements to lure victims into fake investments that promise high returns. Criminals then request extra fees or sensitive personal data, and often attempt a secondary scam posing as Europol or INTERPOL.

Recent improvements make Nomani’s AI videos more realistic, using trending news or public figures to appear credible. Campaigns run briefly and misuse social media forms and surveys to harvest information while avoiding detection.

Despite the campaign’s overall growth, detections fell 37% in the second half of 2025, suggesting that scammers are adapting to more stringent law enforcement measures. Meta’s ad platforms have reportedly earned billions from scam advertising, underscoring the global reach of fraud schemes such as Nomani.

Aflac confirms large-scale data breach following cyber incident

US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information, though the company initially did not say how many individuals were affected.

In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.

A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.

Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.

The US company stated that it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.

Deutsche Bank warns on scale of AI spending

Deutsche Bank has warned that surging AI investment is helping to prop up US economic growth. Analysts say that broader spending would have stalled without the heavy outlays on technology.

The bank estimates hyperscalers could spend $4 trillion on AI data centres by 2030. Analysts cautioned returns remain uncertain despite the scale of investment.

Official data showed US GDP grew at a 4.3% annualised rate in the third quarter. Economists linked much of the momentum to AI-driven capital expenditure.

Market experts remain divided on risks, although many reject fears of a bubble. Corporate cash flows, rather than excessive borrowing, are funding the majority of AI infrastructure.

Creators embrace AI music on YouTube

YouTube creators are increasingly using AI-generated music to enhance their videos while saving time and money. Selecting tracks that match the content’s tone and audience expectations is crucial for engagement.

Subtle, balanced music supports narration without distraction and guides viewers through sections. Thoughtful use of intros, transitions and outros builds channel identity and reinforces branding.

Customisation tools allow creators to adjust tempo, mood and intensity for better pacing and cohesion with visuals. Testing multiple versions ensures the music feels natural and aligns with storytelling.

Understanding licensing terms protects monetisation and avoids copyright issues. Combining AI music with creative judgement keeps content authentic and original while maximising production impact.

Meta restricts Congress AI videos in India

Meta has restricted access in India to two AI-generated videos posted by the Congress party. The clips depicted Prime Minister Narendra Modi alongside Gautam Adani, Chairman of the Adani Group.

Meta said the content did not violate its community standards, but it acted on takedown notices issued by Delhi Police under India’s information technology laws.

Meta warned that ignoring the orders could jeopardise safe harbour protections. Loss of those protections would expose platforms to direct legal liability.

The case highlights growing scrutiny of political AI content in India. Recent rule changes have tightened procedures for ordering online takedowns.

Chest X-rays gain new screening potential through AI

AI is extending the clinical value of chest X-rays beyond lung and heart assessment. Researchers are investigating whether routine radiographs can support broader disease screening without the need for additional scans. Early findings suggest existing images may contain underused diagnostic signals.

A study in Radiology: Cardiothoracic Imaging examined whether AI could detect hepatic steatosis from standard frontal chest X-rays. Researchers analysed more than 6,500 images from over 4,400 patients across two institutions. Deep learning models were trained and externally validated.

The AI system achieved area under the curve (AUC) scores above 0.8 in both internal and external tests. Saliency maps showed predictions focused near the diaphragm, where part of the liver appears on chest X-rays. The results suggest that reliable signals can be extracted from routine imaging.

Researchers argue the approach could enable opportunistic screening during standard care. Patients flagged by AI could be referred for a dedicated liver assessment when appropriate. The method could add clinical value without increasing imaging costs or radiation exposure.

Experts caution that the model is not a standalone diagnostic tool and requires further prospective validation. Integration with clinical and laboratory data remains necessary to reduce false positives. If validated, AI-enhanced X-rays could support scalable risk stratification.

AI chatbots reshape learning habits and critical thinking debates

Use of AI chatbots for everyday tasks, from structuring essays to analysing data, has become widespread. Researchers are increasingly examining whether reliance on such tools affects critical thinking and learning. Recent studies suggest a more complex picture than simple decline.

A study by MIT researchers found reduced cognitive activity among participants who used ChatGPT to write essays. Participants also showed weaker recall than those who completed tasks without AI assistance, raising questions about how learning develops when writing is outsourced.

Similar concerns emerged from studies by Carnegie Mellon University and Microsoft. Surveys of white-collar workers linked higher confidence in AI tools with lower levels of critical engagement, prompting warnings about possible overreliance.

Studies involving students present a more nuanced outcome. Research published by Oxford University Press found that many pupils felt AI supported skills such as revision and creativity. At the same time, some reported that tasks became too easy, limiting deeper learning.

Experts emphasise that outcomes depend on how AI tools are used. Educators argue for clearer guidance, transparency, and further research into long-term effects. Used as a tutor rather than a shortcut, AI may support learning rather than weaken it.
