Departures from Elon Musk’s AI startup xAI have reached a symbolic milestone, with two more co-founders announcing exits within days of each other. Yuhuai Tony Wu and Jimmy Ba both confirmed their decisions publicly, marking a turning point for the company’s leadership.
Losses now total six out of the original 12 founding members, signalling significant turnover in less than three years. Several prominent researchers had already moved on to competitors, launched new ventures, or stepped away for personal reasons.
The timing coincides with major developments, including SpaceX’s acquisition of xAI and preparations for a potential public listing. Financial opportunities and intense demand for AI expertise are encouraging senior talent to pursue independent projects or new roles.
Challenges surrounding the Grok chatbot, including technical issues and controversy over its harmful content, have added internal pressure. Growing competition from OpenAI and Anthropic means retaining skilled researchers will be vital to sustaining investor confidence and future growth.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.
Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.
London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.
Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics-aware models could unlock large commercial markets across Europe.
LegalOn Technologies has introduced five agentic AI tools aimed at transforming in-house legal operations. The company says the agents complete specialised contract and workflow tasks in seconds within its secure platform.
Unlike conventional AI assistants that respond to prompts, the new system is designed to plan and execute multi-step workflows independently, tailoring outputs to each organisation’s templates and standards while keeping lawyers informed of every action.
The suite includes tools for generating playbooks, processing legal intake requests and translating contracts across dozens of languages. Additional agents triage high-volume agreements and produce review-ready drafts from clause libraries and deal inputs.
Founded by two corporate lawyers in Japan, LegalOn now operates across Asia, Europe and North America. Backed by $200m in funding, it serves more than 8,000 clients globally, including Fortune 500 companies.
Accelerating AI adoption is exposing clear weaknesses in corporate AI governance. Research shows that while most organisations claim to have oversight processes, only a small minority describe them as mature.
Rapid rollouts across marketing, operations and manufacturing have outpaced safeguards designed to manage bias, transparency and accountability, leaving many firms reacting rather than planning ahead.
Privacy rules, data sovereignty questions and vendor data-sharing risks are further complicating deployment decisions. Fragmented data governance and unclear ownership across departments often stall progress.
Experts argue that effective AI governance must operate as an ongoing, cross-functional model embedded into product lifecycles. Defined accountability, routine audits and clear escalation paths are increasingly viewed as essential for building trust and reducing long-term risk.
Students speaking at a major education technology conference said AI has revealed weaknesses in traditional learning. Heavy focus on memorisation is becoming less relevant in a world where digital tools provide instant answers.
AI helps learners summarise information and understand complex subjects more easily. Improved access to such tools has made studying more efficient and, in some cases, more engaging.
Teachers have responded by restricting technology use and returning to handwritten assignments. These measures aim to protect academic integrity but have created mixed reactions among students.
Participants supported guided AI use instead of banning it completely. Communication, collaboration and presentation skills were seen as more valuable and less vulnerable to AI shortcuts.
Fukushima is repositioning itself as a technology and innovation hub, more than a decade after the 2011 earthquake, tsunami and nuclear disaster in Japan. The Fukushima Innovation Coast Framework aims to revitalise the coastal Hamadori region of Fukushima Prefecture.
At the centre of the push is the Fukushima Institute for Research, Education and Innovation, which plans a major research complex in Namie. The site will focus on robotics, energy, agriculture and radiation science, drawing researchers from across Japan and overseas.
The prefecture already hosts the Fukushima Robot Test Field and the Fukushima Hydrogen Energy Research Field, whose projects include hydrogen production from solar power and large-scale robotics and drone testing.
Officials say the strategy combines clean energy, sustainable materials and advanced research to create jobs and attract families back to Japan’s northeast, positioning the prefecture as a global case study in post-disaster recovery through technology.
Generative AI tools saw significant uptake among young Europeans in 2025, with usage rates far outpacing the broader population. Data shows that 63.8% of individuals aged 16–24 across the EU engaged with generative AI, nearly double the 32.7% recorded among citizens aged 16–74.
Adoption patterns indicate that younger users are embedding AI into everyday routines at a faster pace. Private use led the trend, with 44.2% of young people applying generative AI in personal contexts, compared with 25.1% of the general population.
Educational deployment also stood out, reaching 39.3% among youth, while only 9.4% of the wider population reported similar academic use.
Professional application showed the narrowest gap between age groups. Around 15.8% of young users reported workplace use of generative AI tools, closely aligned with 15.1% among the overall population, reflecting that many young people are still transitioning into the labour market.
Country-level data highlights notable regional differences. Greece (83.5%), Estonia (82.8%), and Czechia (78.5%) recorded the highest youth adoption rates, while Romania (44.1%), Italy (47.2%), and Poland (49.3%) ranked lowest.
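The reported gaps are easy to sanity-check. A minimal Python sketch, using only the percentages quoted above (the dictionary layout and names are illustrative, not from any official dataset):

```python
# EU generative AI adoption in 2025, figures as reported above.
# Values: (youth 16-24 %, general population 16-74 %).
adoption = {
    "any use": (63.8, 32.7),
    "private": (44.2, 25.1),
    "education": (39.3, 9.4),
    "work": (15.8, 15.1),
}

# Youth-to-general ratio per context: ~2x overall ("nearly double"),
# over 4x for education, and close to parity for workplace use.
for context, (youth, general) in adoption.items():
    ratio = youth / general
    print(f"{context:>9}: youth {youth}% vs general {general}% (x{ratio:.1f})")
```

The ratios bear out the article's framing: education shows the widest divide, while workplace use is nearly identical across age groups.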
The findings coincide with Safer Internet Day, observed on 10 February, underscoring the growing importance of digital literacy and online safety as AI usage accelerates.
Research from the UK Safer Internet Centre reveals nearly all young people aged eight to 17 now use artificial intelligence tools, highlighting how deeply the technology has entered daily life. Growing adoption has also increased reliance, with many teenagers using AI regularly for schoolwork, social interactions and online searches.
Education remains one of the main uses, with students turning to AI for homework support and study assistance. However, concerns about fairness and creativity have emerged, as some pupils worry about false accusations of misuse and reduced independent thinking.
Safety fears remain significant, especially around harmful content and privacy risks linked to AI-generated images. Many teenagers and parents worry the technology could be used to create inappropriate or misleading visuals, raising questions about online protection.
Emotional and social impacts are also becoming clear, with some young people using AI for personal advice or practising communication. Limited parental guidance and growing dependence suggest governments and schools may soon consider stronger oversight and clearer rules.
Elon Musk’s move to integrate SpaceX with his AI company xAI is strengthening plans to develop data centres in orbit. Experts warn that such infrastructure could give one company or country significant control over global AI and cloud computing.
Fully competitive orbital data centres remain at least 20 years away due to launch costs, cooling limits, and radiation damage to hardware. Their viability depends heavily on Starship achieving fully reusable, low-cost launches, which remain unproven.
Interest in space computing is growing because constant solar energy could dramatically reduce AI operating costs and improve efficiency. China has already deployed satellites capable of supporting computing tasks, highlighting rising global competition.
European specialists warn that the region risks becoming dependent on US cloud providers that operate under laws such as the US Cloud Act. Without coordinated investment, control over future digital infrastructure and cybersecurity may be decided by early leaders.
Dutch MPs have renewed calls for companies and public services in the Netherlands to reduce reliance on US-based cloud servers, reflecting growing concern over data security and foreign access.
Research by NOS found that two-thirds of essential service providers in the country rely on at least one US cloud server, with local councils, health insurers and hospitals remaining heavily exposed.
Concerns intensified following a proposed sale of Solvinity, which manages the DigiD system used across the Netherlands. A sale to a US firm could place Dutch data under the US Cloud Act.
Parties including D66, VVD and CDA say critical infrastructure data should be prioritised for protection, while Dutch cloud providers argue that European firms could handle most services if procurement rules changed.