Generative AI accelerates discovery in complex materials science

Scientists are increasingly applying generative AI models to address complex problems in materials science, such as predicting structures, simulating properties, and guiding the discovery of advanced materials with novel functions.

Traditional computational methods, such as density functional theory, can be slow and resource-intensive, whereas AI-based tools can learn from existing data and propose candidate materials more efficiently.

Early applications of these generative approaches include designing materials for energy storage, catalysis, and electronic applications, speeding up workflows that previously involved large amounts of trial and error.

Researchers emphasise that while AI does not yet replace physics-based modelling, it can complement it by narrowing the search space and suggesting promising leads for experimental validation.

The work reflects a broader trend of AI-augmented science, where machine learning and generative models act as accelerators for discovery across disciplines such as chemistry, physics and bioengineering.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SpaceX proposes massive AI data centre satellite constellation

A proposal filed with the US Federal Communications Commission seeks approval for a constellation of up to one million solar-powered satellites designed to function as orbiting data centres for artificial intelligence computing, according to documents submitted by SpaceX.

The company described the network as an efficient response to growing global demand for AI processing power, positioning space-based infrastructure as a new frontier for large-scale computation.

In its filing, SpaceX framed the project in broader civilisational terms, suggesting the constellation could support humanity’s transition towards harnessing the Sun’s full energy output and enable long-term multi-planetary development.

Regulators are unlikely to approve the constellation at its full scale immediately; analysts view the one-million figure as a negotiating position. The FCC recently authorised thousands of additional Starlink satellites while delaying approval for a larger proposed expansion.

Concerns continue to grow over orbital congestion, space debris, and environmental impacts, as satellite numbers rise sharply and rival companies seek similar regulatory extensions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-4o set for retirement as OpenAI shifts focus to newer systems

OpenAI has confirmed that several legacy AI models will be removed from ChatGPT, with GPT-4o scheduled for retirement on 13 February. The decision follows months of debate after the company reinstated the model amid strong user backlash.

Alongside GPT-4o, the models being withdrawn include GPT-5 Instant, GPT-5 Thinking, GPT-4.1, GPT-4.1 mini, and o4-mini. The changes apply only to ChatGPT, while developers will continue to access the models through OpenAI’s API.

GPT-4o had built a loyal following for its natural writing style and emotional awareness, with many users arguing newer models felt less expressive. When OpenAI first attempted to phase it out in 2025, widespread criticism prompted a temporary reversal.

Company data now suggests active use of GPT-4o has dropped to around 0.1% of daily users. OpenAI says features associated with the model have since been integrated into GPT-5.2, including personality tuning and creative response controls.

Despite this, criticism has resurfaced across social platforms, with users questioning usage metrics and highlighting that GPT-4o was no longer prominently accessible. Comments from OpenAI leadership acknowledging recent declines in writing quality have further fuelled concerns about the model’s removal.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CERT Polska reports coordinated cyber sabotage targeting Poland’s energy infrastructure

Poland has disclosed a coordinated cyber sabotage campaign targeting more than 30 renewable energy sites in late December 2025. The incidents occurred during severe winter weather and were intended to cause operational disruption, according to CERT Polska.

Electricity generation and heat supply in Poland continued, but attackers disabled communications and remote control systems across multiple facilities. Both IT networks and industrial operational technology were targeted, marking a rare shift toward destructive cyber activity against energy infrastructure.

Investigators found that the attackers gained access to renewable substations through exposed FortiGate devices, often unprotected by multi-factor authentication. After breaching the networks, they mapped systems, damaged firmware, wiped controllers, and disabled protection relays.

Two previously unknown wiper tools, DynoWiper and LazyWiper, were used to corrupt and delete data without ransom demands. The malware spread through compromised Active Directory systems using malicious Group Policy tasks to trigger simultaneous destruction.

CERT Polska linked the infrastructure to the Russia-connected threat cluster Static Tundra, though some firms suggest Sandworm involvement. The campaign marks the first publicly confirmed destructive operation attributed to this actor, highlighting rising cyber-sabotage risks to critical energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Church leaders question who should guide moral answers in the age of AI

AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.

Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.

Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.

Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.

Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Best moments from MoltBook archives

A new ‘Best of MoltBook’ post on Astral Codex Ten has renewed debate over how AI-assisted writing is being presented and understood. The collection highlights selected excerpts from MoltBook, a public notebook used to explore ideas with the help of AI tools.

MoltBook is framed as a space for experimentation rather than finished analysis, with short-form entries reflecting drafts, prompts and revisions. Human judgement remains central, with outputs curated, edited or discarded rather than treated as autonomous reasoning.

Some readers have questioned descriptions of the work as ‘agentic AI’, arguing the label exaggerates the technology’s role. The AI involved responds to instructions but does not act independently, plan goals or retain long-term memory.

The discussion reflects wider scepticism about inflated claims around AI capability. MoltBook is increasingly viewed as an example of AI as a productivity aid for thinking, rather than evidence of a new form of independent intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation rather than the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why smaller AI models may be the smarter choice

Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog post ‘Do we really need frontier AI for everyday work?’. While frontier AI systems dominate headlines with ever-growing capabilities, their real-world value for routine professional tasks is often limited. For many people, much of daily work remains simple, repetitive, and predictable.

Kurbalija points out that large parts of professional life, from administration and law to healthcare and corporate management, operate within narrow linguistic and cognitive boundaries. Daily communication relies on a small working vocabulary, and most decision-making follows familiar mental patterns.

In this context, highly complex AI models are often unnecessary. Smaller, specialised systems can handle these tasks more efficiently, at lower cost and with fewer risks.

Using frontier AI for routine work, the author suggests, is like using a sledgehammer to crack a nut. These large models are designed to handle almost anything, but that breadth comes with higher costs, heavier governance requirements, and stronger dependence on major technology platforms.

In contrast, small language models tailored to specific tasks or organisations can be faster, cheaper, and easier to control, while still delivering strong results.

Kurbalija compares this to professional expertise itself. Most jobs never required having the Encyclopaedia Britannica open on the desk. Real expertise lives in procedures, institutions, and communities, not in massive collections of general knowledge.

Similarly, the most useful AI tools are often those designed to draft standard documents, summarise meetings, classify requests, or answer questions based on a defined body of organisational knowledge.

Diplomacy, an area Kurbalija knows well, illustrates both the strengths and limits of AI. Many diplomatic tasks are highly ritualised and can be automated using rules-based systems or smaller models. But core diplomatic skills, such as negotiation, persuasion, empathy, and trust-building, remain deeply human and resistant to automation. The lesson, he argues, is to automate routines while recognising where AI should stop.

The broader paradox is that large AI platforms may benefit more from users than users benefit from frontier AI. By sitting at the centre of workflows, these platforms collect valuable data and organisational knowledge, even when their advanced capabilities are not truly needed.

As Kurbalija concludes, a more common-sense approach would prioritise smaller, specialised models for everyday work, reserving frontier AI for genuinely complex tasks, and moving beyond the assumption that bigger AI is always better.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China gives DeepSeek conditional OK for Nvidia H200 chips

China has conditionally approved its leading AI startup DeepSeek to buy Nvidia’s H200 AI chips, with regulatory requirements still being finalised. The decision would add DeepSeek to a growing list of Chinese firms seeking access to the H200, one of Nvidia’s most powerful data-centre chips.

The reported approval follows earlier developments in which ByteDance, Alibaba and Tencent were allowed to purchase more than 400,000 H200 chips in total, suggesting Beijing is moving from broad caution to selective, case-by-case permissions. Separate coverage has described the approvals as a shift after weeks of uncertainty over whether China would allow imports, even as US export licensing was moving forward.

Nvidia’s CEO Jensen Huang, speaking in Taipei, said the company had not received confirmation of DeepSeek’s clearance and indicated the licensing process is still being finalised, underscoring the uncertainty for suppliers and buyers. China’s industry and commerce ministries have been involved in approvals, with conditions reportedly shaped by the state planner, the National Development and Reform Commission.

The H200 has become a high-stakes flashpoint in US-China tech ties because access to top-tier chips directly affects AI capability and competitiveness. US political scrutiny is also rising: a senior US lawmaker has alleged Nvidia provided technical support that helped DeepSeek develop advanced models later used by China’s military, according to a letter published by the House Select Committee on China; Nvidia has pushed back against such claims in subsequent reporting.

DeepSeek is also preparing a next-generation model, V4, expected in mid-February, according to reporting that cited people familiar with the matter, which makes access to high-end compute especially consequential for timelines and performance.

Why does it matter?

If China’s conditional approvals translate into real shipments, they could ease a key bottleneck for Chinese AI development while extending Nvidia’s footprint in a market constrained by geopolitics. At the same time, the episode highlights how AI hardware is now regulated not only by Washington’s export controls but also by Beijing’s import approvals, with companies caught between shifting policy priorities.

Education and rights central to UN AI strategy

UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.

UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.

Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.

Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.

Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!