Google launches Private AI Compute for secure cloud AI

In a move that underscores the evolving balance between capability and privacy in AI, Google today introduced Private AI Compute. This new cloud-based processing platform supports its most advanced models, such as those in the Gemini family, while maintaining what it describes as on-device-level data security.

Google's blog post explains that many emerging AI tasks now exceed the capabilities of on-device hardware alone. To bridge that gap, Google built Private AI Compute to offload heavy computation to its cloud, powered by custom Tensor Processing Units (TPUs) and wrapped in a fortified enclave environment called Titanium Intelligence Enclaves (TIE).

The system uses remote attestation, encryption and IP-blinding relays to ensure user data remains private and inaccessible: not even Google itself is supposed to be able to gain access.
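
For readers unfamiliar with the mechanism, remote attestation means the client verifies a signed statement about the exact software running on the server before releasing any data. The toy Python sketch below illustrates that handshake under stated assumptions: the shared HMAC key stands in for a hardware root of trust, and every name here is hypothetical rather than Google's actual protocol.

```python
# Toy sketch of the remote-attestation pattern: the client accepts only a
# known-good, correctly signed 'measurement' of the server environment
# before sending any data. The names and the HMAC-based 'quote' are
# illustrative stand-ins, not Google's implementation.
import hmac
import hashlib

ATTESTATION_KEY = b"verifier-root-key"  # stands in for a hardware root of trust
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-image").hexdigest()

def sign_quote(measurement: str) -> str:
    """Server side: what enclave hardware would produce, a signed measurement."""
    return hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()

def client_verify(measurement: str, quote: str) -> bool:
    """Client side: release data only to a recognised, correctly signed enclave."""
    expected = hmac.new(ATTESTATION_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, quote) and measurement == EXPECTED_MEASUREMENT

quote = sign_quote(EXPECTED_MEASUREMENT)
assert client_verify(EXPECTED_MEASUREMENT, quote)  # only now would data be sent
```

In the real system the signed statement comes from hardware rather than a shared key, and traffic additionally passes through IP-blinding relays so that requests cannot be linked back to individual users.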

Google identifies initial use cases on its Pixel devices: features such as Magic Cue and Recorder will benefit from the extra compute, enabling more timely suggestions, multilingual summarisation and advanced context-aware assistance.

At the same time, the company says this platform ‘opens up a new set of possibilities for helpful AI experiences’ that go beyond what on-device AI alone can fully achieve.

This announcement is significant from both a digital policy and platform economy perspective. It illustrates how major technology firms are reconciling user privacy demands with the computational intensity of next-generation AI.

For organisations and governments focused on AI governance and digital diplomacy, the move raises questions about data sovereignty, transparency of remote enclaves and the true nature of 'secure' cloud processing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI faces major copyright setback in US court

A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.

The authors argue that ChatGPT's summaries of copyrighted works, including George R.R. Martin's A Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.

The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.

It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.

The case follows a US$1.5bn settlement paid by Anthropic earlier this year for using pirated books to train its models, and comes amid growing scrutiny of AI firms.

In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Brussels leak signals GDPR and AI Act adjustments

The European Commission is preparing a digital simplification package for 19 November. A leaked draft outlines instruments covering reforms to the GDPR, the ePrivacy rules, the Data Act and the AI Act.

Plans include a single breach portal and a higher reporting threshold. Authorities would receive notifications within 96 hours, with standardised forms and narrower triggers. Controllers could reject or charge for data subject access requests used to pursue disputes.

Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require popups, while GDPR lawful bases expand. News publishers could receive limited exemptions recognising reliance on advertising revenues.
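
To make the idea concrete, one existing browser-level signal is the Global Privacy Control header (Sec-GPC). The sketch below is a hypothetical server-side check, not anything from the leaked draft, showing how a service could honour such a signal on every request instead of displaying a consent popup.

```python
# Minimal sketch of honouring a browser-level preference signal server-side.
# Uses the existing Global Privacy Control header (Sec-GPC) as an example;
# the Commission's draft may ultimately define a different signal.
def tracking_allowed(headers: dict[str, str], explicit_consent: bool = False) -> bool:
    """Respect an opt-out signal the browser sends with every request."""
    if headers.get("Sec-GPC", "").strip() == "1" and not explicit_consent:
        return False  # browser signalled opt-out: no popup, no tracking
    return explicit_consent

print(tracking_allowed({"Sec-GPC": "1"}))           # False: opt-out respected
print(tracking_allowed({}, explicit_consent=True))  # True: user consented elsewhere
```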

The draft recognises legitimate interest as a lawful basis for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn the proposals dilute safeguards and may soften the AI Act.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google and Cassava expand Gemini access in Africa

Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-free Gemini usage for eligible users coordinated through Cassava’s network partners. The initiative aims to address affordability and adoption barriers for mobile users.

A six-month trial of the Google AI Plus plan is part of the package. Benefits include access to more capable Gemini models and added cloud storage. Regional tech outlets reported the same core details.

Education features were highlighted, including NotebookLM for study aids and Gemini in Docs for writing support. Google said the offer aims to help students, teachers, and creators work without worrying about data usage. Reports highlight a focus on youth and skills development.

Cassava’s role aligns with broader investments in AI infrastructure and services across the continent; recent announcements reference model exchanges and planned AI facilities that support regional development. Observers see momentum behind accessible AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Northern Ireland teachers reclaim hours with AI

A six-month pilot across Northern Ireland put Gemini and Workspace into classrooms. One hundred teachers participated under the Education Authority’s C2k programme. Reported benefits centred on time savings and practical support for everyday teaching.

Participants said they saved around ten hours per week on routine tasks, with the freed time redirected to pupil engagement and professional development. More than six hundred use cases were documented among the one hundred participants during the trial period.

Teachers cited varied applications, from drafting parent letters to generating risk assessments quickly. NotebookLM helped transform curriculum materials into podcasts and interactive mind maps. Inclusive lessons were tailored, including Irish language activities and support for neurodivergent learners.

C2k plans wider training so more Northern Ireland educators can adopt the tools responsibly. Leadership framed AI as collaborative, not a replacement for teachers. Further partnerships are expected to align products with established pedagogical principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft brings smarter search to Copilot

Microsoft is expanding Copilot with more precise citations that link directly to publisher sources. Users can also open aggregated references for each answer to review context. The emphasis sits on trust, control, and transparent sourcing throughout the experience.

A new dedicated search mode within Copilot delivers more detailed results when queries require specific information.

Summaries appear alongside links, enabling users to verify evidence and make informed decisions quickly. Industry coverage highlights the stronger focus on verifiable sources and publisher visibility.

The right pane offers a ‘Show all’ list of sources used in responses. Source-based citation pills replace opaque markers to aid credibility checks and exploration. Design choices aim to empower people to stay in control while navigating complex topics.

Updates are live across copilot.com, mobile apps, and Copilot in Edge, with more refinements expected. Microsoft positions the changes within a human-centred strategy where AI supports curiosity safely. Broader Copilot enhancements across Windows and Edge continue in parallel roadmaps.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Private AI Compute by Google blends cloud power with on-device privacy

Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delivers faster AI while ensuring that personal data remains private and inaccessible, even to Google. The system builds on Google’s privacy-enhancing innovations across AI experiences.

As AI becomes more anticipatory, Private AI Compute enables advanced reasoning that exceeds the limits of local devices. It runs on Google’s custom TPUs and Titanium Intelligence Enclaves, securely powering Gemini models in the cloud. The design keeps all user data isolated and encrypted.

Encrypted attestation links a user’s device to sealed processing environments, allowing only the user to access the data. Features like Magic Cue and Recorder on Pixel now perform smarter, multilingual actions privately. Google says this extends on-device protection principles into secure cloud operations.

The platform’s multi-layered safeguards follow Google’s Secure AI Framework and Privacy Principles. Private AI Compute enables enterprises and consumers to utilise Gemini models without exposing sensitive inputs. It reinforces Google’s vision for privacy-centric infrastructure in cloud-enabled AI.

By merging local and cloud intelligence, Google says Private AI Compute opens new paths for private, personalised AI. It will guide the next wave of Gemini capabilities while maintaining transparency and safety. The company positions it as a cornerstone of responsible AI innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. However, the next evolution is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between human roles and machine learning, redefining what it means to fight and to feel in war.

Today's 'AI soldiers' are more than just enhanced: they are networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god's-eye view of combat, while real-time 'guardian angel' systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot apply to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without the basic principles and mechanisms of accountability in warfare, states risk undermining the very foundation of the rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy: the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier's workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel.

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century's wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5 outperformed by a Chinese startup model

A Chinese company has stunned the AI world after its new open-source model outperformed OpenAI's GPT-5 and Anthropic's Claude Sonnet 4.5 in key benchmarks.

Moonshot AI’s Kimi K2 Thinking model achieved the best reasoning and coding scores yet, shaking confidence in American dominance over advanced AI systems.

The Beijing-based startup, backed by Alibaba and Tencent, released Kimi K2 Thinking on 6 November. It scored 44.9 percent on Humanity's Last Exam and 60.2 percent on BrowseComp, both surpassing leading US models.

Analysts dubbed it another 'DeepSeek moment', echoing China's earlier success in breaking AI cost barriers.

Moonshot AI trained the trillion-parameter system for just US$4.6 million (nearly ten times cheaper than GPT-5’s reported costs) using a Mixture-of-Experts structure and advanced quantisation for faster generation.
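
For context, a Mixture-of-Experts layer routes each token to only a few 'expert' sub-networks, so a trillion-parameter model activates just a fraction of its weights per token; that sparsity is a large part of how training costs stay low. The sketch below shows the generic top-k routing idea; the dimensions, expert count and routing details are illustrative assumptions, not Kimi K2's actual architecture.

```python
# Generic top-k Mixture-of-Experts routing in PyTorch. Sizes and the number
# of experts are illustrative, not Moonshot AI's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                            # x: (n_tokens, d_model)
        scores = self.router(x)                      # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)         # normalise over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):                  # only top-k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k : k + 1] * expert(x[mask])
        return out

layer = MoELayer()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```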

The fully open-weight model, released under a Modified MIT License, adds commercial flexibility and intensifies competition with US labs.

Industry observers called it a turning point. Hugging Face’s Thomas Wolf said the achievement shows how open-source models can now rival closed systems.

Researchers from the Allen Institute for AI noted that Chinese innovation is narrowing the gap faster than expected, driven by efficiency and high-quality training data rather than raw computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI loses German copyright lawsuit over song lyrics reproduction

A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.

The judgement found that OpenAI had violated copyright even in a 15-word passage, setting a low threshold for infringement. Additionally, the court dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.

It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.

Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.

OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.

The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!