Departures from Elon Musk’s AI startup xAI have reached a symbolic milestone, with two more co-founders announcing exits within days of each other. Yuhuai Tony Wu and Jimmy Ba both confirmed their decisions publicly, marking a turning point for the company’s leadership.
Losses now total six out of the original 12 founding members, signalling significant turnover in less than three years. Several prominent researchers had already moved on to competitors, launched new ventures, or stepped away for personal reasons.
The timing coincides with major developments, including SpaceX’s acquisition of xAI and preparations for a potential public listing. Financial opportunities and intense demand for AI expertise are encouraging senior talent to pursue independent projects or new roles.
Challenges surrounding the Grok chatbot, including technical issues and controversy over its harmful content, have added internal pressure. Growing competition from OpenAI and Anthropic means retaining skilled researchers will be vital to sustaining investor confidence and future growth.
Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.
Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.
A block on Google would disrupt essential digital services without encouraging the company to resolve its ongoing legal disputes over unpaid fines.
Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.
The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.
Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.
Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.
Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.
Centrist and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.
They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.
Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.
Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.
The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.
The Parliament has yet to take a clear stance, and a path toward agreement is far from assured.
The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering the enforcement of age limits for access to social platforms.
The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.
Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.
The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.
So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.
A new consultative body has been established in South Korea to manage growing anxiety over AI and rapid industrial change.
The Ministry of Employment and Labour joined forces with the Korean Confederation of Trade Unions (KCTU) to create a regular channel for negotiating how workplaces should adapt as robots and AI systems become more widespread across key industries.
The two sides will meet monthly to seek agreement on major labour issues. The union argued for a human-centred transition instead of a purely technological one, urging the government to strengthen protections for workers affected by restructuring and AI-powered production methods.
Officials in South Korea responded by promising that policy decisions will reflect direct input gathered from employees on the ground.
Concerns heightened after Hyundai Motor confirmed plans to mass-produce Atlas humanoid robots by 2028 and introduce them across its assembly lines. The project forms part of the company’s ambition to build a ‘physical AI’ future where machines perform risky or repetitive tasks in place of humans.
The debate intensified as new labour statistics showed a sharp decline in employment within professional, scientific and technical services, a sector where AI deployment is suspected of reducing demand for new hires.
The KCTU warned that industrial transformation could widen inequality unless government policy prioritises people over profit.
The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, the Czech siblings used a hybrid soundtrack blending AC/DC with an AI-generated music piece.
Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.
The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.
Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.
The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.
Ambitions for AI were outlined during a presentation at the Jožef Stefan Institute, where Slovenia’s Prime Minister Robert Golob highlighted the country’s growing role in scientific research and technological innovation.
He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.
He called for deeper cooperation between engineering and the natural sciences instead of isolated efforts, while stressing that social sciences and the humanities must also be involved to secure balanced development.
Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.
Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.
Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.
Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.
Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.
Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.
Elon Musk’s move to integrate SpaceX with his AI company xAI is strengthening plans to develop data centres in orbit. Experts warn that such infrastructure could give one company or country significant control over global AI and cloud computing.
Fully competitive orbital data centres remain at least 20 years away due to launch costs, cooling limits, and radiation damage to hardware. Their viability depends heavily on Starship achieving fully reusable, low-cost launches, which remain unproven.
Interest in space computing is growing because constant solar energy could dramatically reduce AI operating costs and improve efficiency. China has already deployed satellites capable of supporting computing tasks, highlighting rising global competition.
European specialists warn that the region risks becoming dependent on US cloud providers that operate under laws such as the US Cloud Act. Without coordinated investment, control over future digital infrastructure and cybersecurity may be decided by early leaders.
A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.
The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.
The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.
The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.
Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, instead of social media alone.
Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.
Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while Oakland will hear cases representing school districts.
More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.
A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.
Warnings point to a rising risk that manipulated content could reduce vaccine uptake rather than support informed public debate.
Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.
Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.
EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.