Nvidia’s plans to export its H200 AI chips to China remain pending nearly two months after US President Donald Trump approved the sales. A national security review is still underway before licences can be issued to Chinese customers.
Chinese companies have delayed new H200 orders while awaiting clarity on licence approvals and potential conditions, according to people familiar with the discussions. The uncertainty has slowed anticipated demand and affected production planning across Nvidia’s supply chain.
In January, the US Commerce Department eased H200 export restrictions to China but required licence applications to be reviewed by the departments of State, Defence, and Energy.
Commerce has completed its analysis, but inter-agency discussions continue, with the US State Department seeking additional safeguards.
The export framework, which also applies to AMD, introduces conditions related to shipment allocation, testing, and end-use reporting. Until the review process concludes, Nvidia and prospective Chinese buyers remain unable to proceed with confirmed transactions.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission has missed a key deadline to issue guidance on how companies should classify high-risk AI systems under the EU AI Act, fuelling uncertainty around the landmark law’s implementation.
Guidance on Article 6, which defines high-risk AI systems and the stricter compliance rules they face, was due by early February. Officials have indicated that feedback is still being integrated, with a revised draft expected later this month and final adoption potentially slipping to spring.
The delay follows warnings that regulators and businesses are unprepared for the act’s most complex rules, due to apply from August. Brussels has suggested delaying high-risk obligations under its Digital Omnibus package, citing unfinished standards and the need for legal clarity.
Industry groups want enforcement delayed until guidance and standards are finalised, while some lawmakers warn repeated slippage could undermine confidence in the AI Act. Critics warn further changes could deepen uncertainty if proposed revisions fail or disrupt existing timelines.
The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.
OpenAI’s tool reported 120.4 million average monthly users in the EU as of October, far above the 45-million threshold that triggers the DSA’s more onerous obligations rather than lighter oversight.
Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.
The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and large search engines.
ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.
The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.
A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.
Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.
Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that limiting children’s early exposure to screens is a responsibility shared by parents and platforms.
Austria and Poland form part of a broader European trend as France moves to ban under-15s and the UK is debating similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.
Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.
A major international AI safety report warns that AI systems are advancing rapidly, with sharp gains in reasoning, coding and scientific tasks. Researchers say progress remains uneven, leaving systems powerful yet unreliable.
The report highlights rising concerns over deepfakes, cyber misuse and emotional reliance on AI companions in the UK and the US. Experts note growing difficulty in distinguishing AI-generated content from human work.
Safeguards against biological, chemical and cyber risks have improved, though oversight challenges persist in the UK and the US. Analysts warn advanced models are becoming better at evading evaluation and controls.
The impact of AI on jobs in the UK and the US remains uncertain, with mixed evidence across sectors. Researchers say labour disruption could accelerate if systems gain greater autonomy.
Oracle is expanding AI data centres across the United States while pairing infrastructure growth with workforce development through its philanthropic education programme, Oracle Academy.
The initiative provides schools and educators with curriculum, cloud tools, software, and hands-on training designed to prepare students for enterprise-scale technology roles increasingly linked to AI operations.
As demand for specialised skills rises, Oracle Academy is introducing Data Centre Technician courses to fast-track learners into permanent roles supporting AI infrastructure development and maintenance.
The programme already works with hundreds of institutions across multiple US states, including Texas, Michigan, Wisconsin, and New Mexico, spanning disciplines from computer science and engineering to construction management and supply chain studies.
Alongside new courses in machine learning, generative AI, and analytics, Oracle says the approach is intended to close skills gaps and ensure local communities benefit from the rapid expansion of AI infrastructure.
A proposal filed with the US Federal Communications Commission seeks approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for artificial intelligence computing, according to documents submitted by SpaceX.
The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.
In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.
Regulators are unlikely to approve the full scale immediately, with analysts viewing the figure as a negotiating position. The FCC recently authorised thousands of additional Starlink satellites while delaying approval for a larger proposed expansion.
Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.
The UK and Bulgaria are expanding cooperation on semiconductor technology to strengthen supply chains and support Europe’s growing need for advanced materials.
The partnership links British expertise with Bulgaria’s ambitions under the 2023 EU Chips Act, creating opportunities for investment, innovation and skills development.
The Science and Technology Network has acted as a bridge between both countries by bringing together government, industry and academia. A high-level roundtable in Sofia, a study visit to Scotland and a trade mission to Bulgaria encouraged firms and institutions to explore new partnerships.
These exchanges helped shape joint projects and paved the way for shared training programmes.
Several concrete outcomes have followed. A €350 million Green Silicon Carbide wafer factory is moving ahead, supported by significant UK export wins.
Universities in Glasgow and Sofia have signed a research memorandum, while TechWorks UK and Bulgaria’s BASEL have agreed on an industry partnership. The next phase is expected to focus on launching the new factory, deepening research cooperation and expanding skills initiatives.
Bulgaria’s fast-growing electronics and automotive sectors have strengthened its position as a key European manufacturing hub. The country produces most sensors used in European cars and hosts modern research centres and smart factories.
The combined effect of EU funding, national investment and international collaboration is helping Bulgaria secure a prominent role in Europe’s semiconductor supply chain.
A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.
Judges found that AI-generated content should be treated as a service rather than a product in such cases. In China, users must therefore prove developer fault and show concrete harm caused by the erroneous output.
The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.
The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.
UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.
UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.
Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.
Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.
Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.