New funding round by Meta strengthens local STEAM education

Meta is inviting applications for its 2026 Data Centre Community Action Grants, which support schools, nonprofits and local groups in regions that host the company’s data centres.

The programme has been a core part of Meta’s community investment strategy since 2011, and the latest round expands support to seven additional areas linked to new facilities. The company views the grants as a means of strengthening long-term community vitality, rather than focusing solely on infrastructure growth.

Funding is aimed at projects that use technology for public benefit and improve opportunities in science, technology, engineering, arts and mathematics. More than $74 million has been awarded to communities worldwide, with $24 million distributed through the grant programme alone.

Recipients can reapply each year, which enables organisations to sustain programmes and increase their impact over time.

Several regions have already demonstrated how the funding can reshape local learning opportunities. Northern Illinois University used grants to expand engineering camps for younger students and to open a STEAM studio that supports after-school programmes and workforce development.

In New Mexico, a middle school used funding to build a STEM centre with advanced tools such as drones, coding kits and 3D printing equipment. In Texas, an enrichment organisation created a digital media and STEM camp for at-risk youth, offering skills that can encourage empowerment instead of disengagement.

Meta presents the programme as part of a broader pledge to deepen education and community involvement around emerging technologies.

The company argues that long-term support for digital learning will strengthen local resilience and create opportunities for young people who want to pursue future careers in technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital twin technology drives new era in predictive medicine

A new AI model capable of generating digital twins of patients is being hailed as a significant step forward for clinical research. Developed at the University of Melbourne, the system reviews health records to predict how a patient’s condition may change during treatment.

DT-GPT, the model in question, was trained on thousands of records covering Alzheimer’s disease, non-small cell lung cancer, and intensive care admissions. Researchers said the model accurately predicted shifts in key clinical indicators by drawing on medical literature and patient histories.

Predictions were validated without giving DT-GPT access to actual outcomes, strengthening confidence in its performance.

Lead researcher Associate Professor Michael Menden said the tool not only replicated patient profiles but also outperformed fourteen advanced machine-learning systems.

The ability to simulate clinical trial outcomes could lower costs and accelerate drug development, while enabling clinicians to anticipate deterioration and tailor treatment plans more effectively.

Researchers also highlighted DT-GPT’s zero-shot ability to predict medical values it had never been trained on. The team has formed a company with the Royal Melbourne Women’s Hospital to apply the technology to patients with endometriosis, demonstrating wider potential in healthcare.
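
To make the held-out validation concrete, the following is a minimal, purely illustrative sketch of that kind of evaluation: the model sees only a patient’s history up to a cut-off, and its forecasts are compared with the withheld measurements afterwards. The `predict_trajectory` function, the record fields and the error metric are assumptions invented for the example, not DT-GPT’s actual interface.

```python
# Hypothetical sketch of a held-out validation: the model only sees history up
# to a cut-off and must forecast later values, which are then compared with the
# real (withheld) measurements. All names and fields here are illustrative.

from statistics import mean


def predict_trajectory(history: list[dict], horizon_weeks: int) -> list[float]:
    """Placeholder for a digital-twin model's forecast of a lab value."""
    last = history[-1]["value"]
    return [last for _ in range(horizon_weeks)]  # naive baseline for the sketch


def evaluate_patient(record: list[dict], cutoff: int, horizon_weeks: int) -> float:
    """Mean absolute error between forecasts and the withheld measurements."""
    history, future = record[:cutoff], record[cutoff:cutoff + horizon_weeks]
    forecasts = predict_trajectory(history, horizon_weeks)
    return mean(abs(f - obs["value"]) for f, obs in zip(forecasts, future))


# Example: weekly readings of a single lab value; the model never sees the last three.
record = [{"week": w, "value": v} for w, v in enumerate([13.1, 12.8, 12.4, 12.0, 11.7, 11.5])]
print(evaluate_patient(record, cutoff=3, horizon_weeks=3))
```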

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Disney+ prepares AI tools for user creations

Disney+ is preparing to introduce tools that enable subscribers to create short, AI-generated videos inspired by its characters and franchises. Chief executive Bob Iger described the move as part of a sweeping platform upgrade that marks the service’s most significant technological expansion since its 2019 launch.

Alongside user-generated video features, Disney+ will gain interactive, game-like functions through its collaboration with Epic Games. The company plans to merge storytelling and interactivity, creating a new form of engagement where fans can build or remix short scenes within Disney’s creative universe.

Iger confirmed that Disney has held productive talks with several AI firms to develop responsible tools that safeguard intellectual property. The company aims to ensure that fans’ creations can exist within brand limits, avoiding misuse of iconic characters while opening the door to more creative participation.

Industry analysts suggest that the plan could reshape the streaming industry by blending audience creativity with studio production. Yet creators have expressed caution, urging transparency on rights and moderation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Embodied AI steps forward with DeepMind’s SIMA 2 research preview

Google DeepMind has released a research preview of SIMA 2, an upgraded generalist agent that draws on Gemini’s language and reasoning strengths. The system moves beyond simple instruction following, aiming to understand user intent and interact more effectively with its environment.

SIMA 1 relied on game data to learn basic tasks across diverse 3D worlds but struggled with complex actions. DeepMind says SIMA 2 represents a step change, completing harder objectives in unfamiliar settings and adapting its behaviour through experience without heavy human supervision.

The agent is powered by the Gemini 2.5 Flash-Lite model and built around the idea of embodied intelligence, where an AI acts through a body and responds to its surroundings. Researchers say this approach supports a deeper understanding of context, goals, and the consequences of actions.

Demos show SIMA 2 describing landscapes, identifying objects, and choosing relevant tasks in titles such as No Man’s Sky. It also reveals its reasoning, interprets clues, uses emojis as instructions, and navigates photorealistic worlds generated by Genie, DeepMind’s own environment model.

Self-improvement comes from Gemini models that create new tasks and score attempts, enabling SIMA 2 to refine its abilities through trial and error. DeepMind sees these advances as groundwork for future general-purpose robots, though the team has not shared timelines for wider deployment.
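
As a rough illustration of that loop, the sketch below shows one way a generate-attempt-score cycle could be wired together. The function names, scoring threshold and action traces are placeholders invented for the example; DeepMind has not published SIMA 2’s actual training code.

```python
# Hypothetical sketch of a self-improvement loop: one model proposes tasks, the
# agent attempts them, another model scores the attempt, and well-scored
# trajectories are kept as new training examples. All names are placeholders.

import random


def propose_task() -> str:
    """Stand-in for a Gemini-style task generator."""
    return random.choice(["collect wood", "build a shelter", "find water"])


def run_agent(task: str) -> list[str]:
    """Stand-in for the agent acting in a 3D world; returns an action trace."""
    return [f"step towards goal: {task}" for _ in range(3)]


def score_attempt(task: str, trajectory: list[str]) -> float:
    """Stand-in for a Gemini-style reward model rating the attempt (0..1)."""
    return random.random()


def self_improve(iterations: int, threshold: float = 0.7) -> list[tuple[str, list[str]]]:
    experience = []
    for _ in range(iterations):
        task = propose_task()
        trajectory = run_agent(task)
        if score_attempt(task, trajectory) >= threshold:
            experience.append((task, trajectory))  # keep successful attempts
    # In a real pipeline, the agent would now be fine-tuned on `experience`.
    return experience


print(len(self_improve(iterations=20)), "trajectories kept for the next training round")
```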

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Firefox expands AI features with full user choice

Mozilla has outlined its vision for integrating AI into Firefox in a way that protects user choice instead of limiting it. The company argues that AI should be built like the open web, allowing people and developers to use tools on their own terms rather than being pushed into a single ecosystem.

Recent features such as the AI sidebar chatbot and Shake to Summarise on iOS reflect that approach.

The next step is an ‘AI Window’, a controlled space inside Firefox that lets users chat with an AI assistant while browsing. The feature is entirely optional, offers full control, and can be switched off at any time. Mozilla has opened a waitlist so users can test the feature early and help shape its development.

Mozilla believes browsers must adapt as AI becomes a more common interface to the web. The company argues that remaining independent allows it to prioritise transparency, accountability and user agency instead of the closed models promoted by competitors.

The goal is an assistant that enhances browsing and guides users outward to the wider internet rather than trapping them in isolated conversations.

Community involvement remains central to Mozilla’s work. The organisation is encouraging developers and users to contribute ideas and support open-source projects as it works to ensure Firefox stays fast, secure and private while embracing helpful forms of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CERN unveils AI strategy to advance research and operations

CERN has approved a comprehensive AI strategy to guide its use across research, operations, and administration. The strategy unites initiatives under a coherent framework to promote responsible and impactful AI for science and operational excellence.

It focuses on four main goals: accelerating scientific discovery, improving productivity and reliability, attracting and developing talent, and enabling AI at scale through strategic partnerships with industry and member states.

Common tools and shared experiences across sectors will strengthen CERN’s community and ensure effective deployment.

Implementation will involve prioritised plans and collaboration with EU programmes, industry, and member states to build capacity, secure funding, and expand infrastructure. Applications of AI will support high-energy physics experiments, future accelerators, detectors, and data-driven decision-making.

AI is now central to CERN’s mission, transforming research methodologies and operations. From intelligent automation to scalable computational insight, the technology is no longer optional but a strategic imperative for the organisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York Times lawsuit prompts OpenAI to strengthen privacy protections

OpenAI says a New York Times demand to hand over 20 million private ChatGPT conversations threatens user privacy and breaks with established security norms. The request forms part of the Times’ lawsuit over alleged misuse of its content.

The company argues the demand would expose highly personal chats from people with no link to the case. It previously resisted broader requests, including one seeking more than a billion conversations, and says the latest move raises similar concerns about proportionality.

OpenAI says it offered privacy-preserving alternatives, such as targeted searches and high-level usage data, but these were rejected. It adds that chats covered by the order are being de-identified and stored in a secure, legally restricted environment.

The dispute arises as OpenAI accelerates its security roadmap, which includes plans for client-side encryption and automated systems that detect serious safety risks without requiring broad human access. These measures aim to ensure private conversations remain inaccessible to external parties.

OpenAI maintains that strong privacy protections are essential as AI tools handle increasingly sensitive tasks. It says it will challenge any attempt to make private conversations public and will continue to update users as the legal process unfolds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI platforms approved for Surrey Schools classrooms

Surrey Schools has approved MagicSchool, SchoolAI, and TeachAid for classroom use, giving teachers access through the ONE portal with parental consent. The district says the tools are intended to support instruction while maintaining strong privacy and safety safeguards.

Officials say each platform passes rigorous reviews covering educational value, data protection, and technical security before approval. Teachers receive structured guidance on appropriate use, supported by professional development aligned with wider standards for responsible AI in education.

A two-year digital literacy programme helps staff explore online identity, digital habits, and safe technology use as AI becomes more common in lessons. Students use AI to generate ideas, check code, and analyse scientific or mathematical problems, reinforcing critical reasoning.

Educators stress that pupils are taught to question AI outputs rather than accept them at face value. Leaders argue this approach builds judgment and confidence, preparing young people to navigate automated systems with greater agency beyond school settings.

Families and teachers can access AI safety resources through the ONE platform, including videos, podcasts and the ‘Navigating an AI Future’ series. Materials include recordings from earlier workshops and parent sessions, supporting shared understanding of AI’s benefits and risks across the community.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Coding meets creativity in Minecraft Education’s AI tutorial

Minecraft Education is introducing an AI-powered twist on the classic first night challenge with a new Hour of AI world. Players explore a puzzle-driven environment that turns early survival stress into a guided coding and learning experience.

The activity drops players into a familiar biome and tasks them with building shelter before sunset. Instead of panicking at distant rustles or looming shadows, learners work with an AI agent designed to support planning and problem-solving.

Using MakeCode programming, players teach their agent to recognise patterns, classify resources, and coordinate helper bots. The agent mimics real AI behaviour by learning from examples and occasionally making mistakes that require human correction to improve its decisions.
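
For readers curious how learning from examples and being corrected might look in code, here is a plain-Python sketch of the idea. The real activity uses MakeCode inside Minecraft Education, so the resource names and correction flow below are illustrative assumptions rather than the tutorial’s actual code.

```python
# Plain-Python sketch of example-based learning with human correction; the
# resources, categories and correction flow are illustrative assumptions only.

# The agent starts with a few labelled examples provided by the player.
examples = {
    "oak_log": "building",
    "cobblestone": "building",
    "apple": "food",
}


def classify(resource: str) -> str:
    """Guess a category from the stored examples; unknown items default to 'unknown'."""
    return examples.get(resource, "unknown")


def correct(resource: str, right_category: str) -> None:
    """Human correction: store the right answer so future guesses improve."""
    examples[resource] = right_category


# The agent misclassifies a new resource, the player corrects it, and the next
# guess is right -- mirroring the mistake-and-correction loop in the world.
print(classify("carrot"))   # -> 'unknown' (a mistake)
correct("carrot", "food")
print(classify("carrot"))   # -> 'food'
```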

As the agent becomes more capable, it shifts from a simple tool to a partner that automates key tasks and reduces first-night pressure. The aim is to let players develop creative strategies rather than resort to frantic survival instincts.

Designed for ages seven and up, the experience is free to access through Minecraft Education. It introduces core AI literacy concepts, blending gameplay with lessons on how AI systems learn, adapt, and occasionally fail, all wrapped in a familiar, family-friendly setting.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!