Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.
Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that limiting children’s early exposure to screens is a responsibility shared by parents and the platforms themselves.
Austria and Poland form part of a broader European trend, as France moves to ban social media for under-15s and the UK debates similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.
Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A major international AI safety report warns that AI systems are advancing rapidly, with sharp gains in reasoning, coding and scientific tasks. Researchers say progress remains uneven, leaving systems powerful yet unreliable.
The report highlights rising concerns over deepfakes, cyber misuse and emotional reliance on AI companions in the UK and the US. Experts note growing difficulty in distinguishing AI-generated content from human work.
Safeguards against biological, chemical and cyber risks have improved, though oversight challenges persist in the UK and the US. Analysts warn advanced models are becoming better at evading evaluation and controls.
The impact of AI on jobs in the UK and the US remains uncertain, with mixed evidence across sectors. Researchers say labour disruption could accelerate if systems gain greater autonomy.
Oracle is expanding AI data centres across the United States while pairing infrastructure growth with workforce development through its philanthropic education programme, Oracle Academy.
The initiative provides schools and educators with curriculum, cloud tools, software, and hands-on training designed to prepare students for enterprise-scale technology roles increasingly linked to AI operations.
As demand for specialised skills rises, Oracle Academy is introducing Data Centre Technician courses to fast-track learners into permanent roles supporting AI infrastructure development and maintenance.
The programme already works with hundreds of institutions across multiple US states, including Texas, Michigan, Wisconsin, and New Mexico, spanning disciplines from computer science and engineering to construction management and supply chain studies.
Alongside new courses in machine learning, generative AI, and analytics, Oracle says the approach is intended to close skills gaps and ensure local communities benefit from the rapid expansion of AI infrastructure.
A proposal filed with the US Federal Communications Commission seeks approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for artificial intelligence computing, according to documents submitted by SpaceX.
The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.
In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.
Regulators are unlikely to approve the full scale immediately, with analysts viewing the figure as a negotiating position. The FCC recently authorised thousands of additional Starlink satellites while delaying approval for a larger proposed expansion.
Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.
The UK and Bulgaria are expanding cooperation on semiconductor technology to strengthen supply chains and support Europe’s growing need for advanced materials.
The partnership links British expertise with Bulgaria’s ambitions under the 2023 EU Chips Act, creating opportunities for investment, innovation and skills development.
The Science and Technology Network has acted as a bridge between both countries by bringing together government, industry and academia. A high-level roundtable in Sofia, a study visit to Scotland and a trade mission to Bulgaria encouraged firms and institutions to explore new partnerships.
These exchanges helped shape joint projects and paved the way for shared training programmes.
Several concrete outcomes have followed. A €350 million Green Silicon Carbide wafer factory is moving ahead, supported by significant UK export wins.
Universities in Glasgow and Sofia have signed a research memorandum, while TechWorks UK and Bulgaria’s BASEL have agreed on an industry partnership. The next phase is expected to focus on launching the new factory, deepening research cooperation and expanding skills initiatives.
Bulgaria’s fast-growing electronics and automotive sectors have strengthened its position as a key European manufacturing hub. The country produces most sensors used in European cars and hosts modern research centres and smart factories.
The combined effect of the EU funding, national investment and international collaboration is helping Bulgaria secure a prominent role in Europe’s semiconductor supply chain.
A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.
Judges found that AI-generated content should be treated as a service rather than a product in such cases. In China, users must therefore prove developer fault and show concrete harm caused by the erroneous output.
The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.
The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.
UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.
UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.
Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.
Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.
Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.
Google has unveiled AlphaGenome, a new AI research tool designed to analyse the human genome and uncover the genetic roots of disease. The announcement was made in Paris, where researchers described the model as a major step forward.
AlphaGenome focuses on non-coding DNA, which makes up most of the human genome and plays a key role in regulating genes. Google scientists in Paris said the system can analyse extremely long DNA sequences at high resolution.
The model was developed by Google DeepMind using public genomic datasets from humans and mice. Researchers in Paris said the tool predicts how genetic changes influence biological processes inside cells.
Independent experts in the UK welcomed the advance but urged caution. Scientists at the University of Cambridge and the Francis Crick Institute noted that environmental factors still limit what AI models can explain.
French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.
Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.
The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.
Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.
Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.
Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.
The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.
Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.