Private AI Compute by Google blends cloud power with on-device privacy

Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delivers faster AI while ensuring that personal data remains private and inaccessible, even to Google. The system builds on Google’s privacy-enhancing innovations across AI experiences.

As AI becomes more anticipatory, Private AI Compute enables advanced reasoning that exceeds the limits of local devices. It runs on Google’s custom TPUs and Titanium Intelligence Enclaves, securely powering Gemini models in the cloud. The design keeps all user data isolated and encrypted.

Remote attestation and encryption link a user’s device to a sealed processing environment, so that only the user can access the data. Features such as Magic Cue and Recorder on Pixel now perform smarter, multilingual actions privately. Google says this extends on-device protection principles into secure cloud operations.
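Google has not published the protocol’s internals, but the pattern it describes, remote attestation gating an encrypted channel to a sealed workload, is well established in confidential computing. The Python sketch below illustrates that general pattern only; the measurement constant, the key-wrapping callable and the flow are hypothetical stand-ins, not Google’s API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical hash of an enclave build the device has been told to trust.
EXPECTED_MEASUREMENT = b"sha256-of-a-known-good-enclave-build"

def attest_and_encrypt(reported_measurement: bytes,
                       wrap_key_for_enclave,  # callable: encrypt to the enclave's public key
                       user_prompt: bytes) -> bytes:
    """Release user data only to a workload that proves its identity."""
    # 1. Compare the measurement from the enclave's signed attestation
    #    report against a known-good build before anything leaves the device.
    if reported_measurement != EXPECTED_MEASUREMENT:
        raise RuntimeError("attestation failed: data stays on the device")
    # 2. Encrypt the prompt under a fresh session key, then wrap that key
    #    to the enclave's public key, so only code running inside the
    #    sealed environment (not the cloud operator) can decrypt it.
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, user_prompt, None)
    return nonce + wrap_key_for_enclave(session_key) + ciphertext
```

In a real deployment the attestation report would itself be signature-verified against the hardware vendor’s root of trust; the sketch elides that step.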

The platform’s multi-layered safeguards follow Google’s Secure AI Framework and Privacy Principles. Private AI Compute enables enterprises and consumers to utilise Gemini models without exposing sensitive inputs. It reinforces Google’s vision for privacy-centric infrastructure in cloud-enabled AI.

By merging local and cloud intelligence, Google says Private AI Compute opens new paths for private, personalised AI. It will guide the next wave of Gemini capabilities while maintaining transparency and safety. The company positions it as a cornerstone of responsible AI innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, innovation has disrupted and redefined warfare, for better or worse. The next evolution, however, is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capability blurs the boundary between human judgement and machine computation, redefining what it means to fight, and to feel, in war.

Today’s ‘AI soldier’ is more than just enhanced: networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without basic principles and mechanisms of accountability in warfare, states risk undermining the very foundation of the rules-based order. AI may transform the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy: the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, becomes just another setting in the AI control panel.

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, a shift that promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, and defence firms could be compelled to maintain high standards of ethical transparency.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, humans will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 outperformed by a Chinese startup model

A Chinese company has stunned the AI world after its new open-source model outperformed OpenAI’s GPT-5 and Anthropic’s Claude Sonnet 4.5 on key benchmarks.

Moonshot AI’s Kimi K2 Thinking model achieved the best reasoning and coding scores yet, shaking confidence in American dominance over advanced AI systems.

The Beijing-based startup, backed by Alibaba and Tencent, released Kimi K2 Thinking on 6 November. It scored 44.9 percent on Humanity’s Last Exam and 60.2 percent on BrowseComp, surpassing the leading US models on both.

Analysts dubbed it another ‘DeepSeek moment’, echoing China’s earlier success in breaking AI cost barriers.

Moonshot AI trained the trillion-parameter system for a reported US$4.6 million, roughly a tenth of GPT-5’s reported training costs, using a Mixture-of-Experts architecture and advanced quantisation for faster generation.
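Moonshot has not published full architectural details, but the cost advantage cited here is characteristic of sparse Mixture-of-Experts designs: only a few experts run per token, so compute per token is a small fraction of the total parameter count. A toy routing sketch, with all shapes and names invented for illustration:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Top-k Mixture-of-Experts routing for a single token.

    x: (d,) token activation; gate_w: (d, n_experts) router weights;
    experts: list of callables, each a small feed-forward network.
    Only k of the n experts execute, which is why a trillion-parameter
    MoE can be far cheaper to train and serve than a dense model.
    """
    logits = x @ gate_w
    top = np.argsort(logits)[-k:]            # indices of the k best-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                 # softmax over the chosen experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 4 experts of width 8, only 2 active per token.
rng = np.random.default_rng(0)
d, n = 8, 4
experts = [(lambda W: (lambda x: np.tanh(x @ W)))(rng.normal(size=(d, d)))
           for _ in range(n)]
y = moe_forward(rng.normal(size=d), rng.normal(size=(d, n)), experts)
```

Quantisation compounds the saving at inference time by storing weights in fewer bits, trading a little precision for memory and speed.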

The fully open-weight model, released under a Modified MIT License, adds commercial flexibility and intensifies competition with US labs.

Industry observers called it a turning point. Hugging Face’s Thomas Wolf said the achievement shows how open-source models can now rival closed systems.

Researchers from the Allen Institute for AI noted that Chinese innovation is narrowing the gap faster than expected, driven by efficiency and high-quality training data rather than raw computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI loses German copyright lawsuit over song lyrics reproduction

A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.

The judgement found that OpenAI had violated copyright even in a 15-word passage, setting a low threshold for infringement. Additionally, the court dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.

It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.

Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.

OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.

The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Joint quantum partnership unites Canada and Denmark for global research leadership

Canada and Denmark have signed a joint statement to deepen collaboration in quantum research and innovation.

The agreement, announced at the European Quantum Technologies Conference 2025 in Copenhagen, reflects both countries’ commitment to advancing quantum science responsibly while promoting shared values of openness, ethics and excellence.

Under the partnership, the two nations will enhance research and development ties, encourage open data sharing, and cultivate a skilled talent pipeline. They also aim to boost global competitiveness in quantum technologies, fostering new opportunities for market expansion and secure supply chains.

Canadian Minister Mélanie Joly highlighted that the cooperation showcases a shared ambition to accelerate progress in health care, clean energy and defence.

Denmark’s Minister for Higher Education and Science, Christina Egelund, described Canada as a vital partner in scientific innovation. At the same time, Minister Evan Solomon stressed the agreement’s role in empowering researchers to deliver breakthroughs that shape the future of quantum technologies.

Both Canada and Denmark are recognised as global leaders in quantum science, working together through initiatives such as the NATO Transatlantic Quantum Community.

The partnership supports Canada’s National Quantum Strategy, launched in 2023, and reinforces the two countries’ shared goal of driving innovation for sustainable growth and collective security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Artist secretly hangs AI print in Cardiff museum

An AI-generated print by artist Elias Marrow was secretly hung on a gallery wall at the National Museum Cardiff, where it was reportedly seen by hundreds of visitors before staff were alerted and removed it. The work, titled Empty Plate, shows a young boy in a school uniform holding a plate.

Marrow said the piece represents Wales in 2025 and examines how public institutions decide what is worth displaying. He defended the stunt as participatory rather than vandalism, emphasising that AI is a natural evolution of artistic tools.

Visitors photographed the artwork, and some initially thought it was performance art, while the museum confirmed it had no prior knowledge of the piece. Marrow has carried out similar unsanctioned displays at Bristol Museum and Tate Modern, highlighting his interest in challenging traditional curation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN calls for safeguards around emerging neuro-technologies

In a recent statement, the UN warned that the fast-growing field of neuro-technology, which encompasses devices and software that can measure, access, or manipulate the nervous system, poses new risks to human rights.

The UN highlighted how such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.

It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.

The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants offer free premium AI in India

In a move that signals a significant shift in global AI strategy, companies such as OpenAI, Google and Perplexity AI are partnering with Indian telecoms and service providers to offer premium AI tools, including advanced chatbot access and large-model features, free to millions of users in India.

The offers are not merely promotional but part of a long-term play to dominate the AI ecosystem.

Market analysts quoted by the BBC note that the objective is to ‘get Indians hooked on to generative AI before asking them to pay for it’. The size of India’s digital ecosystem, with its young, mobile-first population and relatively less restrictive regulation, makes it a key battleground for AI firms aiming for global scale.

However, there are risks: free access raises concerns around privacy and data protection, algorithmic control, and whether users are fully informed about how their data is used and when free offers will convert into paid services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce strengthens Agentforce with planned Spindle AI acquisition

Salesforce has signed a definitive agreement to acquire Spindle AI, a company specialising in agentic analytics and machine learning. The deal aims to strengthen Salesforce’s Agentforce platform by integrating Spindle’s advanced data modelling and forecasting technologies.

Spindle AI has developed neuro-symbolic AI agents capable of autonomously generating and optimising scenario models. Its analytics tools enable businesses to simulate and assess complex decisions, from pricing strategies to go-to-market plans, using AI-driven insights.
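Salesforce’s announcement does not explain how Spindle’s agents work internally. For readers unfamiliar with the term, though, ‘scenario modelling’ at its simplest means simulating a decision under uncertainty many times and comparing outcomes. The toy Monte Carlo sketch below illustrates only that idea; every number and assumption is invented, and it bears no relation to Spindle AI’s proprietary methods.

```python
import random
import statistics

def simulate_pricing(price, n=10_000, seed=42):
    """Toy scenario model: expected revenue at a price point under demand noise.

    Assumes demand falls linearly with price plus Gaussian noise; both the
    demand curve and the noise level are invented for illustration.
    """
    rng = random.Random(seed)
    revenues = [price * max(0.0, 1000 - 8 * price + rng.gauss(0, 50))
                for _ in range(n)]
    return statistics.mean(revenues), statistics.stdev(revenues)

# Compare candidate price points and keep the best expected outcome.
best_price = max([20, 40, 60, 80], key=lambda p: simulate_pricing(p)[0])
```

An agentic system layers automation on top of this loop: generating the candidate scenarios, refining the assumptions, and explaining which inputs drove the result.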

Salesforce said the acquisition will enhance its focus on Agent Observability and Self-Improvement within Agentforce 360. Executives described Spindle AI’s expertise as critical to building more transparent and reliable agentic systems capable of explaining and refining their own reasoning.

The acquisition, subject to customary closing conditions, is expected to be completed in Salesforce’s fourth fiscal quarter of 2026. Once finalised, Spindle AI will join Agentforce to expand AI-powered analytics, continuous optimisation, and ROI forecasting for enterprise customers worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IMY investigates major ransomware attack on Swedish IT supplier

Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.

Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.

The IMY said the investigation will examine Miljödata’s data protection measures and the responses of several affected public bodies, including Gothenburg, Älmhult, and Västmanland. The regulator aims to identify security shortcomings and strengthen defences against future cyber threats.

Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!