AI chip exports face tighter US oversight under new proposal

Washington is considering rules that would require US government approval for overseas purchases of AI chips, tightening control over the global semiconductor supply chain. Draft proposals would make foreign buyers seek Department of Commerce authorisation before acquiring AI chips from US suppliers.

Scrutiny would also vary by order size, giving US authorities greater visibility into international demand for advanced processors. The proposed rules could significantly expand oversight of leading semiconductor firms such as NVIDIA and AMD, whose AI chips underpin many advanced AI systems.

The new approach to regulating AI chip exports marks a shift toward a more interventionist strategy. During the Biden administration, an AI diffusion regulation was finalised to control the global spread of AI technology, but the current administration scrapped it before it could take effect. The proposed rules therefore open a new chapter in US AI export policy.

A US Department of Commerce spokesperson said the agency remains committed to ‘promoting secure exports of the American tech stack,’ but rejected claims that the government is reviving the earlier diffusion framework, calling it ‘burdensome, overreaching, and disastrous.’

Meanwhile, critics warn that tighter controls could have unintended effects. Restrictions on AI chip exports may drive international buyers to non-US suppliers, potentially weakening US leadership in advanced semiconductor technology as global AI hardware competition intensifies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI exposure highlights jobs most at risk

A new study introduces observed exposure, a measure that combines theoretical AI capability with real-world use to estimate which jobs are most susceptible to automation. Tasks that LLMs can perform and that are actively being automated at work receive higher exposure scores.

Computer programmers, customer service representatives, and financial analysts rank among the most exposed occupations.

The analysis finds that AI is far from reaching its full potential, with many tasks still beyond current capabilities. Occupations with higher observed exposure tend to grow more slowly, and workers in these roles are more likely to be older, female, highly educated, and earn higher wages.

Despite concerns, no systematic rise in unemployment has been detected among highly exposed workers since late 2022.

Early evidence suggests that the hiring of younger workers aged 22-25 may be slowing in highly exposed occupations. While these effects are small, they may indicate initial labour market adjustments as AI tools become more integrated into workplace tasks.

Researchers emphasise that observed exposure provides a framework for tracking AI’s economic impact over time, helping policymakers and businesses identify potential vulnerabilities.

The study underscores the gap between AI’s theoretical capabilities and actual usage, highlighting the importance of monitoring adoption patterns. The framework uses task automation and job data to track AI’s impact on the workforce.
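The combination of capability and usage can be sketched in a few lines. The formula below is purely illustrative (the study's actual methodology is not reproduced here): it scores each task by how well AI *can* perform it and how much it *is* being used, then averages across an occupation's tasks.

```python
# Toy sketch of an "observed exposure" style score (hypothetical formula,
# not the study's actual methodology): combine a theoretical capability
# score with a real-world usage score for each occupational task.

def observed_exposure(tasks):
    """tasks: list of (capability, usage) pairs, each in [0, 1]."""
    if not tasks:
        return 0.0
    # A task scores high only when AI both *can* do it and *is* being
    # used to do it; the geometric mean captures that interaction.
    scores = [(cap * use) ** 0.5 for cap, use in tasks]
    return sum(scores) / len(scores)

# Example: a role where AI can do most tasks but is used for only some.
print(round(observed_exposure([(0.9, 0.8), (0.7, 0.2), (0.4, 0.0)]), 3))  # → 0.408
```

The key design point the study makes is that capability alone overstates risk: a task nobody automates in practice contributes little, which is why usage enters the score multiplicatively here.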

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that contractors working for a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini leads latest ORCA benchmark on AI maths accuracy

A new round of the ORCA (Omni Research on Calculation in AI) benchmark reveals significant progress in how leading AI chatbots handle real-world mathematical problems, while also highlighting persistent limitations in reliability and consistency.

The latest results show Google’s Gemini 3 Flash moving clearly ahead of competing systems, correctly answering nearly three-quarters of the 500 practical questions used in the benchmark.

Our readers may recall that the platform previously analysed the first edition of the ORCA benchmark, examining how AI chatbots performed on everyday quantitative tasks rather than purely academic problems. The earlier analysis already showed notable gaps between systems and raised questions about the reliability of AI models for calculations people might encounter in daily life.

The second benchmark compares four widely accessible models: ChatGPT-5.2, Gemini 3 Flash, Grok-4.1 and DeepSeek V3.2. Gemini recorded the largest improvement, decisively outpacing the others. ChatGPT and DeepSeek posted smaller but steady gains, while Grok’s results declined slightly in several subject areas.

Performance improvements were uneven across domains, with Gemini showing particularly strong gains in fields such as biology, chemistry, physics and health-related calculations.

Closer examination of the errors reveals why AI still struggles with mathematical accuracy. Calculation mistakes have increased as a share of total errors, while rounding and formatting problems have decreased.

Researchers explain that large language models do not actually compute numbers in the same way that calculators do. Instead, they predict likely sequences of words and numbers, which can lead to small shortcuts during multi-step reasoning that eventually produce incorrect results.

The benchmark also highlights another challenge: instability. The same question can produce different answers when asked multiple times, even when the model initially responded correctly. Such variation reflects the probabilistic nature of AI systems.

As a result, the benchmark concludes that AI chatbots can assist with calculations but cannot yet match the consistency of traditional calculators, which always return the same answer for the same input.
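The contrast the benchmark draws can be illustrated with a small simulation (this is a sketch of the idea, not a real model): a calculator maps the same input to the same output every time, while a sampled language model occasionally takes a "shortcut" and returns a slightly different answer to the identical question.

```python
import random

# Illustration of determinism vs. sampling (a simulation, not an actual
# LLM): the calculator always agrees with itself; the simulated model
# sometimes drifts by a small error, mimicking multi-step reasoning slips.

def calculator(a, b):
    return a * b  # same input, same output, every time

def sampled_model(a, b, rng):
    # Hypothetical noise model: correct most of the time, otherwise off
    # by one in either direction.
    exact = a * b
    return exact if rng.random() < 0.8 else exact + rng.choice([-1, 1])

rng = random.Random(0)
calc_answers = {calculator(37, 43) for _ in range(10)}
model_answers = {sampled_model(37, 43, rng) for _ in range(10)}
print(len(calc_answers))   # 1: the calculator never disagrees with itself
print(len(model_answers))  # may be >1: sampled answers vary across runs
```

Asking the deterministic function ten times yields a single distinct answer; the sampled version can yield several, which is exactly the instability the benchmark measures.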

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Debate grows over the future of privacy

Experts gathered in London, UK, to examine how the concept of privacy has evolved over centuries. Discussions highlighted that privacy was only widely recognised as a legal and social norm after the Second World War.

Speakers in London noted that earlier societies often viewed privacy with suspicion or did not recognise it at all. Historical examples discussed included practices from Roman society and the French monarchy.

Modern legal protections expanded rapidly in recent decades, with privacy laws now covering about 80 percent of the global population. Scholars said the concept remains relatively new despite its central role in modern democracies.

The debate also explored whether privacy will remain a stable social value as technology evolves. Analysts said emerging technologies such as AI are reshaping debates over personal data and surveillance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU competition scrutiny pushes Meta to reopen WhatsApp AI access

Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.

The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.

Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.

WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.

Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.

The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.

Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.

The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches panel on child safety online and social media age rules

The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.

The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.

Announced during the 2025 State of the Union Address by Commission President Ursula von der Leyen, the panel will evaluate evidence on both the opportunities and harms linked to children’s digital engagement.

Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.

Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.

The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.

The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.

Among the tools under development is an EU age-verification application currently tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.

The panel is expected to deliver policy recommendations to the Commission by summer 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI explains 5 AI value models transforming enterprise strategy

AI is beginning to reshape corporate strategy as organisations shift from isolated technology experiments to broader operational transformation.

According to OpenAI, businesses that treat AI as a collection of disconnected pilots risk missing the bigger structural change that the technology enables.

A new framework describes five value models through which AI can gradually reshape companies. The first stage focuses on workforce empowerment, where tools such as ChatGPT spread AI capabilities across teams and improve everyday productivity.

Once employees develop fluency, organisations can introduce AI-native distribution models that transform how customers discover products and interact with digital services.

More advanced stages involve specialised systems. Expert capability integrates AI into research, creative production, and domain-specific analysis, allowing professionals to explore a wider range of ideas and experiments.

Meanwhile, systems and dependency management introduce AI tools capable of safely updating interconnected digital environments, including codebases, documentation, and operational processes.

The final stage involves full process re-engineering through autonomous agents. In such environments, AI systems coordinate complex workflows across departments while maintaining governance, accountability, and auditability.

Organisations that successfully progress through these stages may eventually redesign their business models rather than merely improving efficiency within existing structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU watchdog urges limits on US data access

The European Union’s data protection watchdog has urged stronger safeguards as negotiations continue with the US over access to biometric databases. European Data Protection Supervisor Wojciech Wiewiórowski said limits must ensure Europeans’ data is used only for agreed purposes.

Talks between the EU and the US involve potential arrangements that would allow US authorities to query national biometric systems. Databases across the EU contain sensitive information, including fingerprints and facial recognition data.

Past transatlantic data-sharing agreements have faced legal challenges due to insufficient safeguards. European regulators are closely monitoring the Data Privacy Framework amid ongoing concerns about oversight.

Officials also warned that emerging AI technologies could create new surveillance risks linked to US data access. European authorities said they must negotiate as a unified bloc when dealing with the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

U Mobile named Malaysia’s fastest 5G network in 2025

U Mobile has been ranked Malaysia’s fastest 5G network for the third and fourth quarters of 2025, according to Ookla Speedtest Awards data drawn from millions of real-world user tests.

The result is attributed to the company’s ULTRA5G network, which deploys advanced antenna technologies, including 64T64R systems and extremely large antenna arrays, to boost coverage and handle heavier data traffic.

Chief Technology Officer Woon Ooi Yuen said the recognition validates the company’s infrastructure investments, emphasising that the award reflects actual user experience rather than controlled lab conditions.

U Mobile is targeting 5G coverage across 80% of populated areas in Malaysia by the second half of 2026, with its rollout said to be ahead of schedule.

Beyond coverage expansion, U Mobile has signed a memorandum of understanding with ZTE Malaysia to explore AI-native capabilities in its 5G core network.

The collaboration centres on integrating AI tools for traffic prediction, automated network management, and security monitoring, with digital twin technology potentially allowing engineers to simulate changes before deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!