Cambodia Internet Governance Forum marks major step toward inclusive digital policy

The first national Internet Governance Forum in Cambodia has taken place, establishing a new platform for digital policy dialogue. The Cambodia Internet Governance Forum (CamIGF) brought together participants from civil society, the private sector and youth groups.

The forum follows an Internet Universality Indicators assessment led by UNESCO and national partners. The assessment recommended a permanent multistakeholder platform for digital governance, grounded in human rights, openness, accessibility and participation.

Opening remarks from national and international stakeholders framed the CamIGF as a move toward people-centred and rights-based digital transformation. Speakers stressed the need for cross-sector cooperation to ensure connectivity, innovation and regulation deliver public benefit.

Discussions focused on online safety in the age of AI, meaningful connectivity, youth participation and digital rights. The programme also included Cambodia’s Youth Internet Governance Forum, highlighting young people’s role in addressing data protection and digital skills gaps.

By institutionalising a national IGF, Cambodia joins a growing global network using multistakeholder dialogue to guide digital policy. UNESCO confirmed continued support for implementing assessment recommendations and strengthening inclusive digital governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU cyber rules target global tech dependence

The European Union has proposed new cybersecurity rules aimed at reducing reliance on high-risk technology suppliers, particularly from China. Policymakers argue that existing voluntary measures have failed to curb dependence on vendors such as Huawei and ZTE.

The proposal would introduce binding obligations for telecom operators across the bloc to phase out Chinese equipment. At the same time, officials have warned that reliance on US cloud and satellite services also poses security risks for Europe.

Despite increased funding and expanded certification plans, divisions remain among member states. Countries including Germany and France support stricter sovereignty rules, while others favour continued partnerships with US technology firms.

Analysts say the lack of consensus could weaken the impact of the reforms. Without clear enforcement and investment in European alternatives, Europe may struggle to reduce its dependence on both China and the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Act strengthens training rules despite 2025 Digital Omnibus reforms

The European AI Regulation reinforces training and awareness as core compliance requirements, even as the EU considers simplifications through the proposed Digital Omnibus. Regulation (EU) 2024/1689, the AI Act, sets a risk-based framework for AI systems.

AI literacy is promoted through a multi-level approach. The EU institutions focus on public awareness, national authorities support voluntary codes of conduct, and organisations are currently required under the AI Act to ensure adequate AI competence among staff and third parties involved in system use.

A proposed amendment to Article 4, submitted in November 2025 under the Digital Omnibus, would replace mandatory internal competence requirements with encouragement-based measures. The change seeks to reduce administrative burden without removing AI Act risk management duties.

Even if adopted, the amendment would not eliminate the practical need for AI training. Competence in AI systems remains essential for governance, transparency, monitoring, and incident handling, particularly for high-risk use cases regulated by the AI Act.

Companies are therefore expected to continue investing in tailored AI training across management, technical, legal, and operational roles. Embedding awareness and competence into risk management frameworks remains critical to compliance and risk mitigation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI ads in ChatGPT signal a shift in conversational advertising

AI firm OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

House of Lords backs social media ban for under-16s

The House of Lords, the upper chamber of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.

Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.

Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and cut them off from the benefits of social media, urging stronger enforcement of existing safety rules instead.

The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Binance selects Greece for EU crypto approval

Binance has applied for a pan-European MiCA licence in Greece, positioning the country as a key regulatory gateway into the EU. The MiCA framework harmonises oversight across member states, enabling licensed firms to operate EU-wide under a single approval.

Contrary to expectations that Malta or Latvia would host the filing, the exchange selected Athens, where it has already established a holding company. The Hellenic Capital Market Commission is reportedly fast-tracking the review with support from leading accounting firms.

Company representatives said the MiCA regime offers legal clarity, regulatory certainty, and a framework that supports responsible innovation. Approval could lead to Binance expanding its corporate presence in Greece, including the opening of new offices and local staffing.

Regulatory urgency is intensifying as the July deadline approaches, particularly for firms operating across multiple EU jurisdictions. A successful application would strengthen Binance’s European strategy, expanding market access and reinforcing regulatory compliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.
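
For readers checking their own exposure, Have I Been Pwned offers a public API alongside its website; the sketch below shows how such a lookup is typically made against the v3 breachedaccount endpoint. The API key and user-agent values are placeholders, and the endpoint requires a paid key, so treat this as an illustration rather than a ready-to-run tool.

```python
# Minimal sketch: checking an email address against Have I Been Pwned's
# breach index via the public v3 API. The API key below is a placeholder;
# HIBP requires a paid key and a descriptive user-agent for this endpoint.
import requests

HIBP_API_KEY = "YOUR_HIBP_API_KEY"  # placeholder, obtain from haveibeenpwned.com

def breaches_for(email: str) -> list[str]:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={
            "hibp-api-key": HIBP_API_KEY,
            "user-agent": "breach-exposure-check-example",
        },
        params={"truncateResponse": "true"},  # breach names only, no metadata
        timeout=10,
    )
    if resp.status_code == 404:  # address not found in any known breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]

print(breaches_for("someone@example.com"))
```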

Experts note that verified customer information linked to a recognised brand can enable convincing phishing and fraud campaigns powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, increasing the likelihood of delayed victimisation.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI method boosts reasoning without extra training

Researchers at the University of California, Riverside, have introduced a technique that improves AI reasoning without requiring additional training data. Called Test-Time Matching, the approach enhances performance by letting models adapt dynamically at inference time.

The method addresses a persistent weakness in multimodal AI systems, which often struggle to interpret unfamiliar combinations of images and text. Traditional evaluation metrics rely on isolated comparisons that can obscure deeper reasoning capabilities.

By replacing these with a group-based matching approach, the researchers uncovered hidden model potential and achieved markedly stronger results.

Test-Time Matching lets AI systems refine predictions through repeated self-correction. Tests on SigLIP-B16 showed substantial gains, with performance surpassing larger models, including GPT-4.1, on key reasoning benchmarks.
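
The paper’s full procedure is not reproduced here, but the core contrast with isolated comparisons can be sketched in a few lines: rather than letting each image independently pick its highest-scoring caption, a group matcher selects the one-to-one assignment that is jointly most consistent across the group. The sketch below is illustrative only, assumes a pairwise similarity matrix from a model such as SigLIP, and omits the iterative self-correction loop of the actual method.

```python
# Illustrative sketch of group-based matching for image-text pairs
# (not the authors' exact Test-Time Matching procedure).
import numpy as np
from scipy.optimize import linear_sum_assignment

def independent_scoring(sim: np.ndarray) -> np.ndarray:
    """Baseline: each image independently picks its highest-scoring caption."""
    return sim.argmax(axis=1)

def group_matching(sim: np.ndarray) -> np.ndarray:
    """Group-based: choose the one-to-one image-caption assignment that
    maximises total similarity across the whole group."""
    rows, cols = linear_sum_assignment(-sim)  # negate: the solver minimises cost
    return cols[np.argsort(rows)]

# Hypothetical 2x2 similarity matrix (a Winoground-style group) where
# independent argmax maps both images to caption 0, which cannot be right.
sim = np.array([[0.60, 0.55],
                [0.62, 0.40]])
print(independent_scoring(sim))  # [0 0] -> inconsistent double assignment
print(group_matching(sim))       # [1 0] -> a consistent one-to-one match
```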

The findings suggest that smarter evaluation and adaptation strategies may unlock powerful reasoning abilities even in smaller models. Researchers say the approach could speed AI deployment across robotics, healthcare, and autonomous systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Higher education urged to lead on AI skills and ethics

AI is reshaping how people work, learn and participate in society, prompting calls for universities to take a more active leadership role. A new book by Juan M. Lavista Ferres of Microsoft’s AI Economy Institute argues that higher education institutions must move faster to prepare students for an AI-driven world.

Balancing technical training with long-standing academic values remains a central challenge. Institutions are encouraged to teach practical AI skills while continuing to emphasise critical thinking, communication and ethical reasoning.

AI literacy is increasingly seen as essential for both employment and daily life. Early labour market data suggests that AI proficiency is already linked to higher wages, reinforcing calls for higher education institutions to embed AI education across disciplines rather than treating it as a specialist subject.

Developers, educators and policymakers are also urged to improve their understanding of each other’s roles. Technical knowledge must be matched with awareness of AI’s social impact, while non-technical stakeholders need clearer insight into how AI systems function.

Closer cooperation between universities, industry and governments is expected to shape the next phase of AI adoption. Higher education institutions are being asked to set recognised standards for AI credentials, expand access to training, and ensure inclusive pathways for diverse learners.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools reshape legal research and court efficiency in India

AI is rapidly reshaping India’s legal sector, as law firms and research platforms deploy conversational tools to address mounting caseloads and administrative strain.

SCC Online has launched an AI-powered legal research assistant that enables lawyers to ask complex questions in plain language, replacing rigid keyword-based searches and significantly reducing research time.
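
SCC Online has not published its implementation, but plain-language search of this kind generally works by embedding the question and candidate case passages into a shared vector space and ranking by semantic similarity rather than exact keyword overlap. The sketch below illustrates the idea with the open-source sentence-transformers library and a small, invented set of case snippets.

```python
# Minimal sketch of plain-language (semantic) retrieval over case-law snippets,
# as opposed to keyword search. Illustrative only; SCC Online's actual system
# and corpus are proprietary. Assumes the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

# Hypothetical case snippets standing in for a legal corpus.
snippets = [
    "The court held that anticipatory bail may be granted where arrest appears motivated by harassment.",
    "Damages for breach of contract are limited to losses reasonably foreseeable at the time of contracting.",
    "Electronic records are admissible as evidence subject to the certification requirements of the statute.",
]

query = "Can WhatsApp chats be used as evidence in court?"

# Embed the query and the snippets, then rank snippets by cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
snippet_embs = model.encode(snippets, convert_to_tensor=True)
scores = util.cos_sim(query_emb, snippet_embs)[0]

for score, text in sorted(zip(scores.tolist(), snippets), reverse=True):
    print(f"{score:.2f}  {text}")
```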

The need for speed and accuracy is pressing. India’s courts face a backlog exceeding 46 million cases, driven by procedural delays, documentation gaps, and limited judicial capacity.

Legal professionals routinely lose hours navigating precedents, limiting time for strategy, analysis, and client engagement.

Law firms are responding by embedding AI into everyday workflows. At Trilegal, AI supports drafting, document management, analytics, and collaboration, enabling lawyers to prioritise judgment and case strategy.

Secure AI platforms process high-volume legal material in minutes, improving productivity while preserving confidentiality and accuracy.

Beyond private practice, AI adoption is reshaping court operations and public access to justice. Real-time transcription, multilingual translation, and automated document analysis are shortening timelines and improving comprehension.

Incremental efficiency gains are beginning to translate into faster proceedings and broader legal accessibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!