SpaceX proposes massive AI data centre satellite constellation

A proposal filed with the US Federal Communications Commission seeks approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for artificial intelligence computing, according to documents submitted by SpaceX.

The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.

In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.

Regulators are unlikely to approve the full scale immediately, with analysts viewing the figure as a negotiating position. The US FCC recently authorised thousands of additional Starlink satellites while delaying approval for a larger proposed expansion.

Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New UK–Bulgaria partnership boosts semiconductor innovation

The UK and Bulgaria are expanding cooperation on semiconductor technology to strengthen supply chains and support Europe’s growing need for advanced materials.

The partnership links British expertise with Bulgaria's ambitions under the 2023 EU Chips Act, creating opportunities for investment, innovation and skills development.

The Science and Technology Network has acted as a bridge between both countries by bringing together government, industry and academia. A high-level roundtable in Sofia, a study visit to Scotland and a trade mission to Bulgaria encouraged firms and institutions to explore new partnerships.

These exchanges helped shape joint projects and paved the way for shared training programmes.

Several concrete outcomes have followed. A €350 million Green Silicon Carbide wafer factory is moving ahead, supported by significant UK export wins.

Universities in Glasgow and Sofia have signed a research memorandum, while TechWorks UK and Bulgaria’s BASEL have agreed on an industry partnership. The next phase is expected to focus on launching the new factory, deepening research cooperation and expanding skills initiatives.

Bulgaria’s fast-growing electronics and automotive sectors have strengthened its position as a key European manufacturing hub. The country produces most of the sensors used in European cars and hosts modern research centres and smart factories.

The combined effect of the EU funding, national investment and international collaboration is helping Bulgaria secure a prominent role in Europe’s semiconductor supply chain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CERT Polska reports coordinated cyber sabotage targeting Poland’s energy infrastructure

Poland has disclosed a coordinated cyber sabotage campaign targeting more than 30 renewable energy sites in late December 2025. The incidents occurred during severe winter weather and were intended to cause operational disruption, according to CERT Polska.

Electricity generation and heat supply in Poland continued, but attackers disabled communications and remote control systems across multiple facilities. Both IT networks and industrial operational technology were targeted, marking a rare shift toward destructive cyber activity against energy infrastructure.

Investigators found attackers accessed renewable substations through exposed FortiGate devices, often without multi-factor authentication. After breaching networks, they mapped systems, damaged firmware, wiped controllers, and disabled protection relays.

Two previously unknown wiper tools, DynoWiper and LazyWiper, were used to corrupt and delete data without ransom demands. The malware spread through compromised Active Directory systems using malicious Group Policy tasks to trigger simultaneous destruction.
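For defenders, the practical signal in that description is the use of ‘immediate’ scheduled tasks pushed through Group Policy Preferences, which execute on the next policy refresh across every affected host. The sketch below is a minimal illustration of how such tasks could be audited in SYSVOL; it is not CERT Polska’s tooling, and the domain path is a hypothetical placeholder.

```python
# Illustrative, defender-side sketch (not taken from the CERT Polska report):
# enumerate Group Policy Preference scheduled-task definitions stored in SYSVOL
# and flag "immediate" tasks, the mechanism reportedly abused to trigger the
# wipers simultaneously. The domain path below is a hypothetical placeholder.
from pathlib import Path
import xml.etree.ElementTree as ET

SYSVOL_POLICIES = Path(r"\\corp.example\SYSVOL\corp.example\Policies")  # assumption

def find_immediate_tasks(policies_root: Path):
    # GPP scheduled tasks are stored per GPO under
    # <GUID>\Machine\Preferences\ScheduledTasks\ScheduledTasks.xml
    for xml_file in policies_root.rglob("ScheduledTasks.xml"):
        try:
            root = ET.parse(xml_file).getroot()
        except ET.ParseError:
            continue
        for task in root.iter():
            if task.tag in ("ImmediateTask", "ImmediateTaskV2"):
                # Immediate tasks run as soon as clients refresh Group Policy,
                # which makes them suitable for synchronised execution.
                yield xml_file, task.get("name"), task.get("changed")

if __name__ == "__main__":
    for path, name, changed in find_immediate_tasks(SYSVOL_POLICIES):
        print(f"[!] Immediate task '{name}' (last changed {changed}) in {path}")
```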

CERT Polska linked the infrastructure to the Russia-connected threat cluster Static Tundra, though some firms suggest Sandworm involvement. The campaign marks the first publicly confirmed destructive operation attributed to this actor, highlighting rising cyber-sabotage risks to critical energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by an €18.6 million investment to protect critical public services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, with a claimed 1.5 million users, appears to have been largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.
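As a rough illustration of the auditing step, a simple scan of local code and configuration files for strings shaped like API keys can help identify credentials that may need revoking. The sketch below is a generic example; the regular expressions are illustrative assumptions and are not tied to Moltbook’s or Octane AI’s actual key formats.

```python
# Minimal sketch of a local secret audit: walk a directory tree and flag strings
# shaped like API keys or bearer tokens so they can be reviewed and revoked.
# The patterns are generic illustrations, not Moltbook- or Octane-specific formats.
import re
from pathlib import Path

PATTERNS = {
    "api key assignment": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
    "bearer token": re.compile(r"(?i)bearer\s+[A-Za-z0-9_\-.]{20,}"),
}

SKIP_SUFFIXES = {".png", ".jpg", ".gif", ".zip", ".pdf"}

def audit(root: str = ".") -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() in SKIP_SUFFIXES:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Print only a short prefix so the full secret is not echoed.
                print(f"{path}: possible {label}: {match.group(0)[:30]}...")

if __name__ == "__main__":
    audit(".")
```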

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China gives DeepSeek conditional OK for Nvidia H200 chips

China has conditionally approved its leading AI startup DeepSeek to buy Nvidia’s H200 AI chips, with regulatory requirements still being finalised. The decision would add DeepSeek to a growing list of Chinese firms seeking access to the H200, one of Nvidia’s most powerful data-centre chips.

The reported approval follows earlier developments in which ByteDance, Alibaba and Tencent were allowed to purchase more than 400,000 H200 chips in total, suggesting Beijing is moving from broad caution to selective, case-by-case permissions. Separate coverage has described the approvals as a shift after weeks of uncertainty over whether China would allow imports, even as US export licensing was moving forward.

Nvidia’s CEO Jensen Huang, speaking in Taipei, said the company had not received confirmation of DeepSeek’s clearance and indicated the licensing process is still being finalised, underscoring the uncertainty for suppliers and buyers. China’s industry and commerce ministries have been involved in approvals, with conditions reportedly shaped by the state planner, the National Development and Reform Commission.

The H200 has become a high-stakes flashpoint in US-China tech ties because access to top-tier chips directly affects AI capability and competitiveness. US political scrutiny is also rising: a senior US lawmaker has alleged Nvidia provided technical support that helped DeepSeek develop advanced models later used by China’s military, according to a letter published by the House Select Committee on China; Nvidia has pushed back against such claims in subsequent reporting.

DeepSeek is also preparing a next-generation model, V4, expected in mid-February, according to reporting that cited people familiar with the matter, which makes access to high-end compute especially consequential for timelines and performance.

Why does it matter?

If China’s conditional approvals translate into real shipments, they could ease a key bottleneck for Chinese AI development while extending Nvidia’s footprint in a market constrained by geopolitics. At the same time, the episode highlights how AI hardware is now regulated not only by Washington’s export controls but also by Beijing’s import approvals, with companies caught between shifting policy priorities.

Roblox faces new Dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces that encourage prolonged play.

The concerns intensify pressure on EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for very large online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eutelsat blocked from selling infrastructure as France tightens control

France has blocked the planned divestment of Eutelsat’s ground-station infrastructure, arguing that control over satellite facilities remains essential for national sovereignty.

The aborted sale to EQT Infrastructure VI had been announced as a significant transaction, yet the company revealed that the required conditions had not been met.

Officials in France say that the infrastructure forms part of a strategic system used for both civilian and military purposes.

The finance minister described Eutelsat as Europe’s only genuine competitor to Starlink, further strengthening the view that France must retain authority over ground-station operations rather than allow external ownership.

Eutelsat stressed that the proposed transfer concerned only passive facilities such as buildings and site management rather than active control systems. Even so, French authorities believe that end-to-end stewardship of satellite ground networks is essential to safeguard operational independence.

The company says the failed sale will not hinder its capital plans, including the deployment of hundreds of replacement satellites for the OneWeb constellation.

Investors had not commented by publication time, yet the decision highlights France’s growing assertiveness in satellite governance and broader European debates on technological autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea prepares for classroom phone ban amid disputes over rules

South Korea is preparing to enforce a nationwide ban on mobile phone use in classrooms, yet schools remain divided over how strictly the new rules should be applied.

The ban takes effect in March under the revised education law, and officials have already released guidance enabling principals to warn students and restrict smart devices during lessons.

These reforms will allow devices only for limited educational purposes, emergencies or support for pupils with disabilities.

Schools may also collect and store phones under their own rules, giving administrators the authority to prohibit possession rather than merely restricting use. The ministry has ordered every principal to establish formal regulations by late August, leaving interim decisions to each school leader.

Educators in South Korea warn that inconsistent approaches are creating uncertainty. Some schools intend to collect phones in bulk, others will require students to keep devices switched off, while several remain unsure how far to go in tightening their policies.

The Korean Federation of Teachers’ Associations argues that such differences will trigger complaints from parents and pupils unless the ministry provides a unified national standard.

Surveys show wide variation in current practice, with some schools banning possession during lessons while others allow use during breaks.

Many teachers say their institutions are ready for stricter rules, yet a substantial minority report inadequate preparation. The debate highlights the difficulty of imposing uniform digital discipline across a diverse education system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!