OpenAI ads in ChatGPT signal a shift in conversational advertising

OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The House of Lords backs social media ban for under-16s

The House of Lords, the upper chamber of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.

Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.

Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and limit positive benefits, urging more vigorous enforcement of existing safety rules.

The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.

Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.

Experts note that verified customer information linked to a recognised brand can enable compelling phishing and fraud campaigns powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, increasing the likelihood of delayed victimisation.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

New AI method boosts reasoning without extra training

Researchers at the University of California, Riverside, have introduced a technique that improves AI reasoning without requiring additional training data. Called Test-Time Matching, the approach enhances AI performance by enabling dynamic model adaptation.

The method addresses a persistent weakness in multimodal AI systems, which often struggle to interpret unfamiliar combinations of images and text. Traditional evaluation metrics rely on isolated comparisons that can obscure deeper reasoning capabilities.

By replacing these with a group-based matching approach, the researchers uncovered hidden model potential and achieved markedly stronger results.
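The contrast between isolated comparisons and group-based matching can be sketched in a few lines. This is an illustrative toy example, not the paper's implementation: the function names and similarity scores are hypothetical, and a real system would compute scores with a multimodal model such as SigLIP. For a Winoground-style group of two images and two captions, an isolated metric asks each image to prefer its own caption independently, while group matching asks only that the correct joint assignment outscore every alternative assignment.

```python
from itertools import permutations

# Hypothetical similarity scores for one image-text group: rows are images,
# columns are captions, and s[i][j] is a model's score for (image_i, caption_j).
# The correct pairing is image_i <-> caption_i. All numbers are made up.
s = [[0.62, 0.65],
     [0.50, 0.60]]

def isolated_score(s):
    """Traditional metric: each image must prefer its own caption on its own."""
    n = len(s)
    return all(s[i][i] > max(s[i][j] for j in range(n) if j != i)
               for i in range(n))

def group_match_score(s):
    """Group matching: the correct joint assignment of captions to images
    must outscore every alternative one-to-one assignment."""
    n = len(s)
    correct = sum(s[i][i] for i in range(n))
    return all(sum(s[i][p[i]] for i in range(n)) < correct
               for p in permutations(range(n)) if p != tuple(range(n)))

print(isolated_score(s))     # False: image 0 narrowly prefers the wrong caption
print(group_match_score(s))  # True: the correct joint pairing still wins overall
```

In this toy group the isolated metric marks the model as failing, yet the group-level assignment is still correct, which is the sense in which a group-based view can surface reasoning ability that per-pair comparisons obscure.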

Test-Time Matching lets AI systems refine predictions through repeated self-correction. Tests on SigLIP-B16 showed substantial gains, with performance surpassing larger models, including GPT-4.1, on key reasoning benchmarks.

The findings suggest that smarter evaluation and adaptation strategies may unlock powerful reasoning abilities even in smaller models. Researchers say the approach could speed AI deployment across robotics, healthcare, and autonomous systems.

AI tools reshape legal research and court efficiency in India

AI is rapidly reshaping India’s legal sector, as law firms and research platforms deploy conversational tools to address mounting caseloads and administrative strain.

SCC Online has launched an AI-powered legal research assistant that enables lawyers to ask complex questions in plain language, replacing rigid keyword-based searches and significantly reducing research time.

The need for speed and accuracy is pressing. India’s courts face a backlog exceeding 46 million cases, driven by procedural delays, documentation gaps, and limited judicial capacity.

Legal professionals routinely lose hours navigating precedents, limiting time for strategy, analysis, and client engagement.

Law firms are responding by embedding AI into everyday workflows. At Trilegal, AI supports drafting, document management, analytics, and collaboration, enabling lawyers to prioritise judgment and case strategy.

Secure AI platforms process high-volume legal material in minutes, improving productivity while preserving confidentiality and accuracy.

Beyond private practice, AI adoption is reshaping court operations and public access to justice. Real-time transcription, multilingual translation, and automated document analysis are shortening timelines and improving comprehension.

Incremental efficiency gains are beginning to translate into faster proceedings and broader legal accessibility.

Microsoft restores Exchange and Teams after Microsoft 365 disruption

Microsoft investigated a service disruption affecting Exchange Online, Teams and other Microsoft 365 services after users reported access and performance problems.

An incident that began late on Wednesday affected core communication tools used by enterprises for daily operations.

Engineers initially focused on diagnosing the fault, with Microsoft indicating that a potential third-party networking issue may have interfered with access to Outlook and Teams.

During the disruption, users experienced intermittent connectivity failures, latency and difficulties signing in across parts of the Microsoft 365 ecosystem.

Microsoft later confirmed that service access had been restored, although no detailed breakdown of the outage scope was provided.

The incident underlined the operational risks associated with cloud productivity platforms and the importance of transparency and resilience in enterprise digital infrastructure.

Anthropic releases new constitution shaping Claude’s AI behaviour

Anthropic has published a new constitution for its AI model Claude, outlining the values, priorities, and behavioural principles designed to guide its development. Released under a Creative Commons licence, the document aims to boost transparency while shaping Claude’s learning and reasoning.

The constitution plays a central role in training, guiding how Claude balances safety, ethics, compliance, and helpfulness. Rather than rigid rules, the framework explains core principles, enabling AI systems to generalise and apply nuanced judgment.

Anthropic says this approach supports more responsible decision-making while improving adaptability.

The updated framework also enables Claude to refine its own training through synthetic data generation and self-evaluation. Using the constitution in training helps future Claude models align behaviour with human values while maintaining safety and oversight.

Anthropic described the constitution as a living document that will evolve alongside AI capabilities. External feedback and ongoing evaluation will guide updates to strengthen alignment, transparency, and responsible AI development.

South Korea sets the global standard for frontier AI regulation

South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to introduce formal safety requirements for high-performance, or frontier, AI systems, reshaping the global regulatory landscape.

The law establishes a national AI governance framework, led by the Presidential Council on National Artificial Intelligence Strategy, and creates an AI Safety Institute to oversee safety and trust assessments.

Alongside regulatory measures, the government is rolling out broad support for research, data infrastructure, talent development, startups, and overseas expansion, signalling a growth-oriented policy stance.

To minimise early disruption, authorities will introduce a minimum one-year grace period centred on guidance, consultation, and education rather than enforcement.

Obligations cover three areas: high-impact AI in critical sectors, safety rules for frontier models, and transparency requirements for generative AI, including disclosure of realistic synthetic content.

Enforcement remains light-touch, prioritising corrective orders over penalties, with fines capped at 30 million won for persistent noncompliance. Officials said the framework aims to build public trust while supporting innovation, serving as a foundation for ongoing policy development.

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos where their face appears altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.

Snapchat settles social media addiction lawsuit as landmark trial proceeds

Snapchat’s parent company has settled a social media addiction lawsuit in California just days before the first major trial examining platform harms was set to begin.

The agreement removes Snapchat from one of the three bellwether cases consolidating thousands of claims, while Meta, TikTok and YouTube remain defendants.

These lawsuits mark a legal shift away from debates over user content and towards scrutiny of platform design choices, including recommendation systems and engagement mechanics.

A US judge has already ruled that such features may be responsible for harm, opening the door to liability that Section 230 protections may not cover.

Legal observers compare the proceedings to historic litigation against tobacco and opioid companies, warning of substantial damages and regulatory consequences.

A ruling against the remaining platforms could force changes in how social media products are designed, particularly in relation to minors and mental health risks.
