Building trustworthy AI for humanitarian response

A new vision for Humanitarian AI is emerging around a simple idea: technology should grow from local knowledge if it is to work everywhere. Drawing on the IFRC’s slogan ‘Local, everywhere’, this approach argues that AI should be driven not by hype or raw computing power but by the lived experience of communities and humanitarian workers on the ground. With millions of volunteers and staff worldwide, the Red Cross and Red Crescent Movement holds a vast reservoir of practical knowledge that AI can help preserve, organise, and share for more effective crisis response.

In a recent blog post, Jovan Kurbalija explains that this bottom-up approach is not only practical but also ethically sound. AI systems grounded in local humanitarian knowledge can better reflect cultural and social contexts, reduce bias and misinformation, and strengthen trust by being governed by humanitarian organisations rather than opaque commercial platforms. Trust, he argues, lies in the people and institutions behind the technology, not in the algorithms themselves.

Kurbalija also notes that developing such AI is technically and financially realistic. Open-source models, mobile and edge computing, and domain-specific AI tools enable the deployment of functional systems even in low-resource environments. Most humanitarian tasks, from decision support to translation and volunteer guidance, require not massive infrastructure but high-quality, well-structured knowledge rooted in real-world experience.
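As a rough illustration of what such a low-resource deployment could look like, the sketch below runs a small open-source instruction model over locally curated field notes. It is a minimal sketch under stated assumptions: the model name, field notes, and question are illustrative choices, not part of any IFRC or Diplo system.

```python
# Minimal sketch: a compact open-source model answering a volunteer question
# from locally stored field-guide notes. Model, notes, and question are
# illustrative assumptions, not a real humanitarian deployment.
from transformers import pipeline

# A small instruction-tuned model that can run on a laptop or edge device.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# Locally curated humanitarian knowledge (normally loaded from vetted documents).
field_notes = (
    "Shelter kits are distributed at the district branch between 08:00 and 12:00. "
    "Volunteers must log distributions in the paper register if the network is down."
)

question = "When can families collect shelter kits, and what if there is no connectivity?"
prompt = f"Context: {field_notes}\nQuestion: {question}\nAnswer briefly:"

# Generate a short, context-grounded answer entirely on the local device.
print(generator(prompt, max_new_tokens=80)[0]["generated_text"])
```

The point of the sketch is that the heavy lifting is done by well-structured local knowledge in the prompt, not by model scale.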

If developed carefully, Humanitarian AI could also support the IFRC’s broader renewal goals, from strengthening local accountability and collaboration to safeguarding independence and humanitarian principles. Starting with small pilot projects and scaling up gradually, the Movement could transform AI into a shared public good that not only enhances responses to today’s crises but also preserves critical knowledge for future generations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CES 2026 to feature LG’s new AI-driven in-car platform

LG Electronics will unveil a new AI Cabin Platform at CES 2026 in Las Vegas, positioning the system as a next step beyond today’s software-defined vehicles and toward what the company calls AI-defined mobility.

The platform is designed to run on automotive high-performance computing systems and is powered by Qualcomm Technologies’ Snapdragon Cockpit Elite. LG says it applies generative AI models directly to in-vehicle infotainment, enabling more context-aware and personalised driving experiences.

Unlike cloud-dependent systems, the platform performs all AI processing on-device within the vehicle. LG says this approach enables real-time responses while improving reliability, privacy, and data security by avoiding communication with external servers.

Using data from internal and external cameras, the system can assess driving conditions and driver awareness to provide proactive alerts. LG also demonstrated adaptive infotainment features, including AI-generated visuals and music suggestions that respond to weather, time, and driving context.

LG will showcase the AI Cabin Platform at a private CES event, alongside a preview of its AI-defined vehicle concept. The company says the platform builds on its expanding partnership with Qualcomm Technologies and on its earlier work integrating infotainment and driver-assistance systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

The researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conduit revolutionises neuro-language research with 10,000-hour dataset

A San Francisco start-up named Conduit has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher-quality data, allowing tighter alignment between neural signals, audio, and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real-time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe risks falling behind without telecom scale, Telefónica says

Telefónica has called for a shift in Europe’s telecommunications policy, arguing that market fragmentation is undermining investment, digital competitiveness, and the continent’s technological sovereignty, according to a new blog post from the company.

In the post, Telefónica says Europe’s emphasis on maximising retail competition has produced a highly fragmented operator landscape. It cites industry data showing the average European operator serves around five million customers, far fewer than peers in the United States or China.

The company argues that this lack of scale explains Europe’s lower per-capita investment in telecoms infrastructure and is slowing the rollout of technologies such as standalone 5G, fibre networks, and sovereign cloud and AI platforms.

Telefónica points to recent reports by Mario Draghi and Enrico Letta as signs of a policy shift, with EU institutions placing greater weight on investment capacity, resilience, and dynamic efficiency alongside traditional competition objectives.

The blog post concludes that Europe faces a strategic choice between preserving fragmented markets or enabling responsible consolidation. Telefónica says carefully regulated mergers could support sustainability, reduce regional digital divides, and strengthen Europe’s digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the era of generative AI.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered grid pilot aims to cut energy costs in Ottawa

Canada has announced new federal funding to pilot AI tools on the electricity grid, backing a project designed to improve reliability, affordability and efficiency as energy demand grows.

The government of Canada will provide $6 million to Hydro Ottawa under the Ottawa Distributed Energy Resource Accelerator programme. The initiative will use AI-enhanced predictive analytics to forecast peak demand and help balance electricity supply and demand in near real time.

The project will turn customer-owned technologies such as smart thermostats, electric vehicle chargers and home batteries into responsive grid resources. By aggregating them, Hydro Ottawa aims to manage local constraints and reduce costly network upgrades, starting in areas like Kanata North that are experiencing rapid growth.
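As a toy illustration of the aggregation idea (not Hydro Ottawa’s actual system, and with invented numbers), the sketch below dispatches flexible device classes to shave a forecast evening peak below a target limit:

```python
# Illustrative sketch of dispatching aggregated customer-owned devices
# (thermostats, EV chargers, batteries) to shave a forecast evening peak.
# All figures are hypothetical, not Hydro Ottawa data.
forecast_mw = {17: 142.0, 18: 151.0, 19: 158.0, 20: 149.0}  # hour -> forecast load (MW)
peak_limit_mw = 150.0

# Flexible capacity each aggregated device class can shed or shift (MW).
flexible_mw = {"smart_thermostats": 3.0, "ev_chargers": 5.0, "home_batteries": 4.0}

for hour, load in sorted(forecast_mw.items()):
    excess = load - peak_limit_mw
    if excess <= 0:
        print(f"{hour}:00  {load:.0f} MW  within limit")
        continue
    dispatched = []
    for name, capacity in flexible_mw.items():
        if excess <= 0:
            break
        used = min(capacity, excess)  # call on this device class only as far as needed
        dispatched.append(f"{name} -{used:.1f} MW")
        excess -= used
    print(f"{hour}:00  {load:.0f} MW  shave {load - peak_limit_mw:.0f} MW via " + ", ".join(dispatched))
```

The same logic, driven by AI demand forecasts rather than fixed numbers, is what lets an aggregator defer costly network upgrades in fast-growing areas.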

Officials say the programme will give households more control over energy use while strengthening grid resilience. The pilot is also intended to serve as a model that could be scaled across other neighbourhoods and electricity systems.

The funding comes through the Energy Innovation Program, which supports innovative grid demonstrations and AI-driven energy projects. Ottawa says such initiatives are key to modernising Canada’s electricity system and supporting the transition to a low-carbon economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Disney backs OpenAI with $1bn investment and licensing pact

The Walt Disney Company has struck a landmark agreement with OpenAI, becoming the first major content licensing partner on Sora, the AI company’s short-form generative video platform.

Under the three-year deal, Sora will generate short videos using more than 200 animated and creature characters from Disney, Pixar, Marvel, and Star Wars. The licence also covers ChatGPT Images, excluding talent likenesses and voices.

Beyond licensing, Disney will become a major OpenAI customer, using its APIs to develop new products and experiences, including for Disney+, while deploying ChatGPT internally across its workforce. Disney will also make a $1 billion equity investment in OpenAI and receive warrants for additional shares.

Both companies frame the partnership as a test case for responsible AI in creative industries. Executives say the agreement is designed to expand storytelling possibilities while protecting creators’ rights, user safety, and intellectual property across platforms.

Subject to final approvals, Sora-generated Disney content is expected to begin rolling out in early 2026. Curated selections may appear on Disney+, marking a new phase in how established entertainment brands engage with generative AI tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tiiny AI unveils the Pocket Lab supercomputer

Tiiny AI has revealed the Pocket Lab, a palm-sized device recognised as the world’s smallest personal AI supercomputer. Guinness World Records confirmed the title, noting its ability to run models with up to 120 billion parameters.

The Pocket Lab uses an ARM v9.2 CPU, a discrete NPU delivering 190 TOPS, and 80GB of LPDDR5X memory. Popular open-source models such as GPT-OSS, Llama, Qwen, Mistral, DeepSeek and Phi are supported. Tiiny AI says its hardware makes large-scale reasoning possible in a handheld format.

Two in-house technologies enhance efficiency by distributing workloads and reducing unnecessary activations. TurboSparse manages sparse neuron activity to preserve capability while improving speed, and PowerInfer splits computation across the CPU and NPU.
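The article does not detail how these techniques work, but the general idea behind activation sparsity can be shown with a toy NumPy sketch: neurons whose activation is zero are skipped, so only a subset of weight rows participates in the matrix multiply. Sizes and the threshold below are arbitrary illustrative choices, not a description of Tiiny AI’s implementation.

```python
# Toy illustration of activation sparsity: skip neurons with zero activation
# so only the active weight rows are multiplied. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.standard_normal(4096)
weights = rng.standard_normal((4096, 4096))

# ReLU-style activations: many entries become exactly zero.
activations = np.maximum(hidden, 0.0)
active = np.flatnonzero(np.abs(activations) > 1e-6)

# Dense path multiplies every row; sparse path touches only active neurons.
dense_out = weights.T @ activations
sparse_out = weights[active].T @ activations[active]

print(f"active neurons: {active.size}/{activations.size}")
print("outputs match:", np.allclose(dense_out, sparse_out))
```

In a real system the sparse path is what gets scheduled across CPU and NPU; the sketch only shows why skipping inactive neurons preserves the result while cutting work roughly in half.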

Tiiny AI plans a full showcase at CES 2026, with pricing and release information still pending. Analysts want to see how the device performs in real-world tasks compared with much larger systems. The company believes the Pocket Lab will shift expectations for personal AI hardware.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!