Top AI safety expert warns that an unregulated AI ‘arms race’ may pose existential risks

At the AI Impact Summit in New Delhi, Stuart Russell, a computer science professor at the University of California, Berkeley and a prominent AI safety advocate, said the ongoing AI arms race between big tech companies carries ‘existential risk’ that could ultimately threaten humanity if super-intelligent AI systems overpower human control.

He argued that although the CEOs of leading AI developers privately recognise the dangers, he believes investor pressure makes them reluctant to slow development unilaterally. Governments, however, could work together to impose collective regulation and safety standards.

Russell characterised the current trajectory as akin to ‘Russian roulette’ with humanity’s future and urged political action to address both safety and ethical concerns around AI advancement.

He also highlighted other societal issues tied to rapid AI deployment, including potential job losses, surveillance concerns and misuse. He pointed to growing public unease, especially among younger people, about AI’s dehumanising aspects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft pledges $50bn for AI in Global South

Speaking at the India AI Impact Summit in Delhi, Microsoft announced it is on pace to invest $50 billion by the end of the decade to expand AI access across the Global South. The company said AI usage in the Global North is roughly double that of the Global South, and that the gap is widening.

In India and other regions of the Global South, Microsoft is increasing investment in data centre infrastructure, connectivity and electricity to support AI deployment. The company reported more than $8 billion invested in infrastructure serving the Global South in its last fiscal year.

Microsoft is also expanding skills and education programmes in India, including a pledge to help 20 million people gain AI credentials by 2028 and a target to equip 20 million people in India with AI skills by 2030.

Additional initiatives focus on multilingual AI development, food security projects in Kenya and across Sub-Saharan Africa, and new data tools to measure AI diffusion. Microsoft said coordinated global partnerships are essential to ensure AI benefits reach countries in the Global South.

Proposed GDPR changes target AI development

The European Commission has proposed changes to the GDPR and the EU AI Act as part of its Digital Omnibus Package, seeking to clarify how personal data may be processed for AI development and operation across the EU.

A new provision would recognise AI development and operation as a potential legitimate interest under the GDPR, subject to necessity and a balancing test. Controllers in the EU would still need to demonstrate safeguards, including data minimisation, transparency and an unconditional right to object.

The package also introduces a proposed legal ground for processing sensitive data in AI systems where removal is not feasible without disproportionate effort. The Commission says strict conditions would apply, requiring technical protections and documentation throughout the lifecycle of AI models in the EU.

Further amendments would permit biometric data processing for identity verification under defined conditions and expand the rules allowing sensitive data to be used for bias detection beyond high-risk AI systems.

Overall, the proposals aim to provide greater legal certainty without overturning existing data protection principles. EU lawmakers and supervisory authorities continue to debate the proposals ahead of any final adoption.

AI climate benefits overstated, says new civil society report

Environmental groups, including Beyond Fossil Fuels and Stand.earth, have published a report challenging claims that AI will meaningfully address climate change. The analysis argues that rapid data centre expansion is being justified by overstated promises of ‘AI for climate’ benefits.

Researchers found that many cited emissions reductions relate to older forms of machine learning rather than energy-intensive generative AI systems. At the same time, rising electricity demand from large-scale AI deployment is driving increased fossil fuel use.

The report also questions evidence presented by corporations and by institutions such as the International Energy Agency, arguing that projected climate gains are often weakly supported or exaggerated. Companies are also reported to be drifting away from climate targets, even when renewable energy offsets are included.

Campaigners say that framing AI as a climate solution risks distracting attention from corporate decisions that increase pollution and drive digital infrastructure growth. They call for stronger accountability and closer scrutiny of environmental claims linked to emerging technologies.

Ericsson launches AI-integrated RAN radios and software to support next-generation 5G networks

Telecoms giant Ericsson has launched a new range of AI-ready radios, antennas and RAN software designed to meet growing demand from AI-enabled and augmented reality devices. The portfolio will be showcased ahead of Mobile World Congress 2026 in Barcelona.

New Massive MIMO and remote radios integrate Ericsson Silicon with neural network accelerators, enabling real-time AI inference and improved uplink performance. Higher-power FDD and TDD configurations aim to support data-intensive AI applications while lowering the total cost of ownership.

Updated RAN software introduces AI-managed beamforming, AI-powered outdoor positioning and instant coverage prediction. Additional latency prioritisation tools are designed to deliver faster response times for AI and AR services.

Five new energy-efficient antennas complete the lineup, enhancing spectrum use and simplifying site design. Ericsson says deeper AI integration across hardware and software will help communications service providers monetise next-generation connectivity.

Fake Gemini AI chatbot used in Google Coin crypto investment scam

Fraudsters are using a fake AI chatbot posing as Google’s Gemini to promote a bogus ‘Google Coin’ cryptocurrency presale. The automated assistant delivers convincing investment projections and directs victims to send irreversible crypto payments.

The scam site copies Google branding and claims the token will surge in value after launch, despite Google having no cryptocurrency project. Visitors are shown fabricated presale stages, countdowns and token sales figures to create urgency.

When questioned about regulatory or company details, the chatbot avoids providing verifiable information and instead repeats scripted claims about security and transparency. Tougher queries are redirected to a supposed ‘manager’, suggesting human operators step in to close larger payments.

Researchers warn that AI tools are making crypto scams more scalable and more challenging to detect. Consumers are urged to verify claims on official websites and to avoid sending digital assets in exchange for promised returns.

Rwanda and Anthropic sign AI partnership

Anthropic and the Government of Rwanda have signed a three-year Memorandum of Understanding to expand AI deployment across health, education and public sector services in Rwanda. The agreement marks Anthropic’s first multi-sector government partnership in Africa.

In Rwanda’s health system, Anthropic will support national priorities, including efforts to eliminate cervical cancer and reduce malaria and maternal mortality. Rwanda’s Ministry of Health will work with Anthropic to integrate AI tools aligned with national objectives.

Public sector developer teams in Rwanda will gain access to Claude and Claude Code, alongside training, API credits and technical support. The partnership also formalises an education programme launched in 2025 that provided 2,000 Claude Pro licences to educators in Rwanda.

Officials in Rwanda have said the collaboration focuses on capacity development, responsible deployment and local autonomy. Anthropic stated that investment in skills and infrastructure in Rwanda aims to enable safe and independent use of AI by teachers, health workers and public servants.

Generative AI revives historic images in New Brighton with remarkable community engagement

Generative AI is increasingly being used to reinterpret cultural heritage and re-engage communities with their local history. In New Brighton, a creative initiative has digitally restored, colourised, and reanimated archival photographs dating from the Victorian era to the late twentieth century.

The project demonstrates how AI can transform static historical images into moving sequences, making the past more accessible to digital audiences. By combining archival research with creative experimentation, the initiative bridges heritage and contemporary technology.

Public response was immediate and substantial. Within hours of publication, the videos generated tens of thousands of views, hundreds of shares, and extensive social media commentary, reflecting strong community interest.

Beyond numerical engagement, the project prompted residents and former visitors to share personal memories of the pier, fairground, cinemas, and promenade. Organisers described the depth of emotional response as evidence that local identity and civic pride remain deeply rooted.

The initiative forms part of a broader creative revival in New Brighton. Upcoming public art projects, including a large-scale mural celebrating community volunteers, aim to build on this momentum and connect heritage with future regeneration efforts.

Wikipedia in the AI era highlights essential human oversight

Human-curated knowledge remains central in the AI era, according to Wikipedia co-founder Jimmy Wales. Speaking at the AI Impact Summit 2026, he stressed that editorial judgement, reliable sourcing, and community debate are essential to maintaining trust. AI tools may assist contributors, but oversight and accountability must remain human-led.

Wikipedia has become part of the digital infrastructure underpinning AI systems. Large language models are extensively trained on its openly licensed content, increasing the platform’s responsibility to safeguard accuracy. Wales emphasised that while AI is now embedded in global information systems, it still depends on human-verified knowledge foundations.

Concerns about reliability and misinformation featured prominently in the discussion. AI systems can fabricate convincing but inaccurate details, highlighting the continued importance of journalism and source verification. Wikipedia’s model, requiring citations and scrutinising source credibility, positions it as a safeguard against rapidly generated false content.

The conversation also addressed bias and language diversity. AI models trained predominantly on English-language data risk marginalising other linguistic communities. Wikipedia’s co-founder pointed to the importance of multilingual knowledge ecosystems and inclusive data practices to ensure global representation in both AI development and online information governance.

The reality behind AI hype

As governments and tech leaders gather at global forums such as the AI Impact Summit in New Delhi, one assumption dominates discussion: the more computing power poured into AI, the better it will become. In his blog post ‘The elephant in the AI room: Does more computing power really bring more useful AI?’, Jovan Kurbalija questions whether that belief is as solid as it seems.

For years, the AI race has been driven by the idea that ever-larger models and vast GPU farms are the key to progress. That logic has justified enormous energy consumption and multi-billion-dollar investments in data centres. But Kurbalija argues that bigger is not always better, especially when everyday tasks often require far less computational firepower than frontier models provide.

He points out that most people rely on a limited vocabulary and a small set of reasoning tools in their daily work. Smaller, specialised AI systems can already draft emails, summarise meetings, or classify documents effectively. The push for trillion-parameter models, he suggests, may reflect ambition more than necessity.

There are also technical limits to consider. Adding more computing power can lead to diminishing returns, and some prominent researchers doubt that simply scaling up large language models will lead to human-level intelligence. More hardware, Kurbalija notes, does not automatically solve deeper conceptual challenges in AI design.

The economic picture is equally complex. Training cutting-edge proprietary models can cost hundreds of millions of dollars, while newer open-source systems have been developed at a fraction of that price. If cheaper models can deliver similar performance, questions arise about the sustainability of current spending and whether investors are backing efficiency or hype.

Beyond cost and performance lies a broader ethical issue. Even if massive computing power could eventually produce superintelligent systems, the key question is whether society truly needs them. Kurbalija warns that technological possibilities should not be confused with social desirability, and that innovation without a clear purpose can create new risks.

Rather than escalating an arms race for ever-larger models, the blog calls for a shift toward needs-driven design. Right-sized tools, viable business models, and ethical clarity about AI’s role in society may prove more valuable than raw computing muscle.

In challenging the prevailing narrative, Kurbalija urges policymakers and industry leaders to rethink whether the future of AI depends on scale alone or on smarter priorities.
