In a recent paper, researchers at Stevens Institute of Technology revealed that large language models (LLMs) use a small, specialised subset of their parameters to perform tasks associated with the psychological concept of ‘Theory of Mind’ (ToM), the human ability to infer others’ beliefs, intentions and perspectives.
The study found that although LLMs activate almost their entire network for each input, ToM-related reasoning appears to rely disproportionately on a narrow internal circuit, one shaped largely by the model’s positional encoding mechanism.
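The paper’s exact methodology is not reproduced here, but one generic way researchers probe for such circuits is ablation: switching off one component at a time and measuring how much a behaviour degrades. The sketch below is illustrative only and not the Stevens team’s method; it zeroes out individual attention heads in GPT-2 and checks how the model’s preference for the correct answer to a classic false-belief prompt changes. The model, prompt, metric and threshold are all assumptions chosen for demonstration.

```python
# Illustrative head-ablation sketch (not the paper's method): zero out one
# GPT-2 attention head at a time and see how much the model's preference for
# the belief-consistent answer to a Sally-Anne style prompt changes.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = ("Sally puts the ball in the basket and leaves the room. "
          "Anne moves the ball to the box. Sally will look for the ball in the")
ids = tok(prompt, return_tensors="pt").input_ids
basket_id = tok(" basket").input_ids[0]   # belief-consistent answer
box_id = tok(" box").input_ids[0]         # reality-consistent answer

def logit_diff():
    """How strongly the model prefers 'basket' over 'box' as the next token."""
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return (logits[basket_id] - logits[box_id]).item()

baseline = logit_diff()
head_dim = model.config.n_embd // model.config.n_head

def ablate_head(layer_idx, head_idx):
    # c_proj's input is the concatenation of per-head outputs, so zeroing that
    # slice removes one head's contribution for this forward pass only.
    proj = model.transformer.h[layer_idx].attn.c_proj
    def pre_hook(module, args):
        x = args[0].clone()
        x[..., head_idx * head_dim:(head_idx + 1) * head_dim] = 0.0
        return (x,)
    return proj.register_forward_pre_hook(pre_hook)

for layer in range(model.config.n_layer):
    for head in range(model.config.n_head):
        handle = ablate_head(layer, head)
        delta = baseline - logit_diff()
        handle.remove()
        if abs(delta) > 0.5:  # arbitrary illustrative threshold
            print(f"layer {layer} head {head}: logit-diff change {delta:+.2f}")
```

Heads whose removal shifts the preference most are candidate members of such a circuit; the study itself goes further, notably into the role of positional encoding, but the logic of isolating a small responsible subset is the same.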
This discovery matters because it highlights a significant efficiency gap between human brains and current AI systems: humans carry out social-cognitive tasks with only a tiny fraction of neural activity, whereas LLMs still consume substantial computational resources even for ‘simple’ reasoning.
The researchers suggest these findings could guide the design of more brain-inspired AI models that selectively activate only the parameters needed for a particular task.
From a policy and digital-governance perspective, this raises questions about how we interpret AI’s understanding and social cognition.
If AI can exhibit behaviour that resembles human belief-reasoning, oversight frameworks and transparency standards become all the more critical in assessing what AI systems are doing, and what they are capable of.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
In a move that underscores the evolving balance between capability and privacy in AI, Google today introduced Private AI Compute. This new cloud-based processing platform supports its most advanced models, such as those in the Gemini family, while maintaining what it describes as on-device-level data security.
The blog post explains that many emerging AI tasks now exceed the capabilities of on-device hardware alone. To solve this, Google built Private AI Compute to offload heavy computation to its cloud, powered by custom Tensor Processing Units (TPUs) and wrapped in a fortified enclave environment called Titanium Intelligence Enclaves (TIE).
The system uses remote attestation, encryption and IP-blinding relays to ensure user data remains private and inaccessible; not even Google is supposed to be able to access it.
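Google has not published client-side code for this flow, but the pattern it describes, attest first and only then encrypt and send, can be sketched in a few lines. The example below is purely conceptual, not Google’s protocol or API; every name, key and value in it is hypothetical, and the open-source cryptography package stands in for the hardware-backed primitives a real enclave would use.

```python
# Conceptual attest-then-encrypt sketch (hypothetical names, not Google's API):
# the client releases data only after the remote enclave proves it is running
# an approved image signed by a hardware root of trust, and the payload is
# encrypted end to end so relays in between see only ciphertext.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hash of the enclave image the client is willing to talk to (illustrative value).
EXPECTED_MEASUREMENT = bytes.fromhex("aa" * 32)

def verify_attestation(measurement, signature, vendor_key):
    """Accept only if the measurement is vendor-signed and matches the approved image."""
    try:
        vendor_key.verify(signature, measurement)
    except InvalidSignature:
        return False
    return measurement == EXPECTED_MEASUREMENT

def encrypt_for_enclave(payload, session_key):
    """Encrypt the request so intermediaries never see the plaintext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(session_key).encrypt(nonce, payload, None)

# Toy end-to-end run with a simulated hardware vendor.
vendor_private = ed25519.Ed25519PrivateKey.generate()
attestation_sig = vendor_private.sign(EXPECTED_MEASUREMENT)

if verify_attestation(EXPECTED_MEASUREMENT, attestation_sig, vendor_private.public_key()):
    session_key = AESGCM.generate_key(bit_length=256)  # in practice agreed via key exchange
    blob = encrypt_for_enclave(b"summarise this recording", session_key)
    print(f"attestation ok, sending {len(blob)} encrypted bytes via a blinding relay")
else:
    raise RuntimeError("attestation failed; user data never leaves the device")
```

Broadly, an IP-blinding relay adds one further property on top of this: the relay sees the client’s network address but only ciphertext, while the processing environment sees the decrypted request but not the client’s address, so no single party can link identity to content.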
Google identifies initial use-cases in its Pixel devices: features such as Magic Cue and Recorder will benefit from the extra compute, enabling more timely suggestions, multilingual summarisation and advanced context-aware assistance.
At the same time, the company says this platform ‘opens up a new set of possibilities for helpful AI experiences’ that go beyond what on-device AI alone can fully achieve.
This announcement is significant from both a digital policy and platform economy perspective. It illustrates how major technology firms are reconciling user privacy demands with the computational intensity of next-generation AI.
For organisations and governments focused on AI governance and digital diplomacy, the move raises questions about data sovereignty, transparency of remote enclaves and the true nature of ‘secure’ cloud processing.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Scientists have successfully tracked a tsunami in real time using ripples in Earth’s atmosphere for the first time.
The breakthrough came after a powerful 8.8 magnitude earthquake struck off Russia’s Kamchatka Peninsula in July 2025, sending waves racing across the Pacific and triggering NASA’s newly upgraded Guardian monitoring system.
Guardian uses AI to detect disruptions in satellite navigation signals caused by atmospheric ripples above the ocean.
These signals revealed the formation and movement of tsunami waves, allowing alerts to be issued up to 40 minutes before they reached Hawaii, potentially giving communities vital time to respond.
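NASA has not released Guardian’s detection code, and the production system reportedly relies on AI models, so the sketch below only illustrates the underlying signal idea with a simple statistical detector on synthetic data. A tsunami launches acoustic-gravity waves that perturb the ionosphere’s total electron content (TEC), which GNSS receivers can measure; detrending a TEC time series and flagging unusually large residuals exposes exactly that kind of ripple. All values, units and thresholds here are illustrative assumptions.

```python
# Illustrative only: detect tsunami-like ripples in a synthetic total electron
# content (TEC) series by detrending it and flagging large residuals.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 3600, 30.0)  # one hour of 30-second GNSS samples

# Synthetic TEC: slow trend + noise + a wave packet arriving after t = 2000 s.
tec = 20 + 1e-3 * t + 0.05 * rng.standard_normal(t.size)
wave = 0.5 * np.sin(2 * np.pi * (t - 2000) / 300) * np.exp(-((t - 2300) / 400) ** 2)
tec += np.where(t > 2000, wave, 0.0)

def detect_ripples(times, series, k=4.0):
    """Flag samples whose detrended amplitude exceeds k robust standard deviations."""
    trend = np.polyval(np.polyfit(times, series, deg=3), times)
    residual = series - trend
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))  # robust scale
    return np.abs(residual) > k * sigma

alerts = detect_ripples(t, tec)
if alerts.any():
    print(f"possible tsunami-driven TEC disturbance starting near t = {t[alerts][0]:.0f} s")
```

Guardian’s advantage lies in doing this at scale and in near real time across many satellite-to-receiver signal paths, with AI separating tsunami signatures from everyday ionospheric noise, which is what made the 40-minute lead time for Hawaii possible.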
Researchers say the innovation could transform global disaster monitoring by enabling earlier warnings for tsunamis, volcanic eruptions, and even nuclear tests.
Although the system is still in development, scientists in Europe are working on similar models that could expand coverage and provide life-saving alerts to remote coastal regions.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.
The authors argue that ChatGPT’s summaries of copyrighted works, including George R.R. Martin’s A Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.
The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.
It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.
The case follows a $1.5bn settlement against Anthropic earlier this year for using pirated books to train its models, and comes amid growing scrutiny of AI firms.
In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Commission is preparing a Digital Package on simplification for 19 November. A leaked draft outlines instruments covering GDPR, ePrivacy, Data Act and AI Act reforms.
Plans include a single breach portal and a higher reporting threshold. Authorities would receive notifications within 96 hours, with standardised forms and narrower triggers. Controllers could reject or charge for data subject access requests used to pursue disputes.
Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require popups, while GDPR lawful bases would be expanded. News publishers could receive limited exemptions recognising their reliance on advertising revenues.
The draft recognises legitimate interest as a lawful basis for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn the proposals dilute safeguards and may soften the AI Act.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-free Gemini usage for eligible users coordinated through Cassava’s network partners. The initiative aims to address affordability and adoption barriers for mobile users.
A six-month trial of the Google AI Plus plan is part of the package. Benefits include access to more capable Gemini models and added cloud storage. Regional tech outlets have reported the same core details.
Education features were highlighted, including NotebookLM for study aids and Gemini in Docs for writing support. Google said the offer aims to help students, teachers, and creators work without worrying about data usage. Reports highlight a focus on youth and skills development.
Cassava’s role aligns with broader investments in AI infrastructure and services across the continent; recent announcements reference model exchanges and planned AI facilities that support regional development. Observers see momentum behind accessible AI tools.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A six-month pilot across Northern Ireland put Gemini and Workspace into classrooms. One hundred teachers participated under the Education Authority’s C2k programme. Reported benefits centred on time savings and practical support for everyday teaching.
Participants said they saved around ten hours per week on routine tasks, with the freed time redirected to pupil engagement and professional development. More than six hundred use cases were documented by the one hundred participants during the trial period.
Teachers cited varied applications, from drafting parent letters to generating risk assessments quickly. NotebookLM helped transform curriculum materials into podcasts and interactive mind maps. Inclusive lessons were tailored, including Irish language activities and support for neurodivergent learners.
C2k plans wider training so more Northern Ireland educators can adopt the tools responsibly. Leadership framed AI as collaborative, not a replacement for teachers. Further partnerships are expected to align products with established pedagogical principles.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft is expanding Copilot with more precise citations that link directly to publisher sources. Users can also open aggregated references for each answer to review context. The emphasis sits on trust, control, and transparent sourcing throughout the experience.
A new dedicated search mode within Copilot delivers more detailed results when queries require specific information.
Summaries appear alongside links, enabling users to verify evidence and make informed decisions quickly. Industry coverage highlights the stronger focus on verifiable sources and publisher visibility.
The right pane offers a ‘Show all’ list of sources used in responses. Source-based citation pills replace opaque markers to aid credibility checks and exploration. Design choices aim to empower people to stay in control while navigating complex topics.
Updates are live across copilot.com, mobile apps, and Copilot in Edge, with more refinements expected. Microsoft positions the changes within a human-centred strategy where AI supports curiosity safely. Broader Copilot enhancements across Windows and Edge continue in parallel roadmaps.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delivers faster AI while ensuring that personal data remains private and inaccessible, even to Google. The system builds on Google’s privacy-enhancing innovations across AI experiences.
As AI becomes more anticipatory, Private AI Compute enables advanced reasoning that exceeds the limits of local devices. It runs on Google’s custom TPUs and Titanium Intelligence Enclaves, securely powering Gemini models in the cloud. The design keeps all user data isolated and encrypted.
Encrypted attestation links a user’s device to sealed processing environments, allowing only the user to access the data. Features like Magic Cue and Recorder on Pixel now perform smarter, multilingual actions privately. Google says this extends on-device protection principles into secure cloud operations.
The platform’s multi-layered safeguards follow Google’s Secure AI Framework and Privacy Principles. Private AI Compute enables enterprises and consumers to utilise Gemini models without exposing sensitive inputs. It reinforces Google’s vision for privacy-centric infrastructure in cloud-enabled AI.
By merging local and cloud intelligence, Google says Private AI Compute opens new paths for private, personalised AI. It will guide the next wave of Gemini capabilities while maintaining transparency and safety. The company positions it as a cornerstone of responsible AI innovation.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Google Photos is introducing prompt-based edits, an ‘Ask’ button, and style templates across iOS and Android. In the US, iPhone users can describe edits by voice or text, with a redesigned editor for faster controls. The rollout builds on prompt editing’s debut on the Pixel 10 in August.
Personalised edits now recognise people from face groups, so you can issue multi-person requests, such as removing sunglasses or opening eyes. Find it under ‘Help me edit’, where changes apply to each named person. It’s designed for faster, more granular everyday fixes.
A new Ask button serves as a hub for AI requests, from questions about a photo to suggested edits and related moments. The interface surfaces chips that hint at actions users can take. The Ask experience is rolling out in the US on both iOS and Android.
Google is also adding AI templates that turn a single photo into set formats, such as retro portraits or comic-style panels. The company states that its Nano Banana model powers these creative styles and that templates will be available next week under the Create tab on Android in the US and India.
AI search in Google Photos, first launched in the US, is expanding to over 100 countries with support for 17 languages. Markets include Argentina, Australia, Brazil, India, Japan, Mexico, Singapore, and South Africa. Google says this brings natural-language photo search to a far greater number of users.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!