Coca-Cola enhances its AI-powered Christmas ad to fix last year’s visual flaws

Coca-Cola has released an improved AI-generated Christmas commercial after last year’s debut campaign drew criticism for its unsettling visuals.

The latest ‘Holidays Are Coming’ ads, developed in part by San Francisco-based Silverside, showcase more natural animation and a wider range of festive creatures in place of the overly lifelike characters that unsettled audiences last year.

The new version avoids the ‘uncanny valley’ effect that plagued the 2024 ads. Coca-Cola’s use of generative AI reflects a wider advertising trend focused on speed and cost efficiency, even as creative professionals warn about its potential impact on traditional jobs.

Despite the efficiency gains, AI-assisted advertising remains labour-intensive. Teams of digital artists refine the content frame by frame to ensure realistic and emotionally engaging visuals.

Industry data show that 30% of commercials and online videos in 2025 were created or enhanced using generative AI, compared with 22% in 2023.

Coca-Cola’s move follows similar initiatives by major firms, including Google’s first fully AI-generated ad spot launched last month, signalling that generative AI is now becoming a mainstream creative tool across global marketing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines roadmap for AI safety, accountability and global cooperation

OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to that in cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s AI roadmap could add $500 billion to economy by 2035

According to the Business Software Alliance, India could add over $500 billion to its economy by 2035 through the widespread adoption of AI.

At the BSA AI Pre-Summit Forum in Delhi, the group unveiled its ‘Enterprise AI Adoption Agenda for India’, which aligns with the goals of the India–AI Impact Summit 2026 and the government’s vision for a digitally advanced economy by 2047.

The agenda outlines a comprehensive policy framework across three main areas: talent and workforce, infrastructure and data, and governance.

It recommends expanding AI training through national academies, fostering industry–government partnerships, and establishing innovation hubs with global companies to strengthen talent pipelines.

BSA also urged greater government use of AI tools, reforms to data laws, and the adoption of open industry standards for content authentication. It called for coordinated governance measures to ensure responsible AI use, particularly under the Digital Personal Data Protection Act.

BSA has introduced similar policy roadmaps in other major markets, including the US, Japan, and ASEAN countries, as part of its global effort to promote trusted and inclusive AI adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over a one-year period, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million monthly active users and its reach of over 75% of 13–34-year-olds in more than 25 countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How GEMS turns Copilot time savings into personalised teaching at scale

GEMS Education is rolling out Microsoft 365 Copilot to cut admin and personalise learning, with clear guardrails and transparency. Teachers spend less time on preparation and more time with pupils. The aim is augmentation, not replacement.

Copilot serves as a single workspace for plans, sources, and visuals. Differentiated materials arrive faster for struggling and advanced learners. More time goes to feedback and small groups.

Student projects are accelerating. A Grade 8 pupil built a smart-helmet prototype, using AI to guide circuitry, code, and documentation. The project moved quickly from idea to a working build.

The School of Research and Innovation opened in August 2025 as a living lab, hosting educator training, research partners, and student incubation. A Microsoft-backed stack underpins the campus.

Teachers are co-creating lightweight AI agents for curriculum and analytics. Expert oversight and safety patterns stay central. The focus is on measurable time savings and real-world learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI tool helps identify individuals at risk of suicide

Researchers at Touro University have found that an AI tool can identify suicide risk that standard diagnostic methods often miss. The study, published in the Journal of Personality Assessment, shows that LLMs can analyse speech to detect patterns linked to perceived suicide risk.

Current assessment methods, such as multiple-choice questionnaires, often fail to capture the nuances of an individual’s experience.

The study used Claude 3.5 Sonnet to analyse 164 participants’ audio responses, examining future self-continuity, a key factor linked to suicide risk. The AI detected subtle cues in speech, including coherence, emotional tone, and detail, which traditional tools overlooked.
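
For illustration only, and not the published study’s actual protocol, a sketch like the one below shows how a transcribed response might be passed to Claude 3.5 Sonnet and scored for future self-continuity. The prompt wording, rating scale and model identifier are assumptions.

```python
# Hypothetical sketch: scoring a transcribed interview response for
# future self-continuity with an LLM. This is NOT the study's pipeline;
# the prompt, 1-7 scale and model ID are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "You will read a transcript of a person describing their future self. "
    "Rate their future self-continuity on a 1-7 scale (1 = very low, 7 = very high), "
    "considering coherence, emotional tone and level of detail. "
    "Reply with the number followed by a one-sentence rationale.\n\n"
    "Transcript:\n{transcript}"
)

def rate_future_self_continuity(transcript: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model identifier
        max_tokens=200,
        messages=[{"role": "user", "content": PROMPT.format(transcript=transcript)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(rate_future_self_continuity("In ten years I imagine myself ..."))
```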

While the research focused on perceived risk rather than actual suicide attempts, identifying individuals who feel at risk is crucial for timely intervention. LLM predictions could be used in hospitals, hotlines, or therapy sessions as a new tool for mental health professionals.

Beyond suicide risk, large language models may also help detect other mental health conditions such as depression and anxiety, providing faster, more nuanced insights into patients’ mental well-being and supporting early intervention strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Material-level AI emerges in MIT–DeRucci sleep science collaboration

MIT’s Sensor and Ambient Intelligence group, led by Joseph Paradiso, unveiled ‘FiberCircuits’, a smart-fibre platform co-developed with DeRucci. It embeds sensing, edge inference, and feedback directly in fibres to create ‘weavable intelligence’. The aim is natural, low-intrusion human–computer interaction.

Teams embedded AI micro-sensors and sub-millimetre ICs to capture respiration, movement, skin conductance, and temperature, running tinyML locally for privacy. Feedback via light, sound, or micro-stimulation closes the loop while keeping power and data exposure low.
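
The firmware itself is not public; as a rough sketch of the edge-first pattern described above (samples processed locally, with only a coarse feedback decision leaving the device), the loop might be structured as follows. The signal, window size, threshold and read_sample() helper are all hypothetical.

```python
# Hypothetical sketch of an edge-first sensing loop: raw samples stay on
# the fibre, only a feedback decision is emitted. Signal names, window
# size, threshold and read_sample() are illustrative assumptions.
from collections import deque
import statistics

WINDOW = 64                      # samples kept on-device
window = deque(maxlen=WINDOW)

def read_sample() -> float:
    """Stand-in for the fibre's sensor front end (e.g. a movement reading)."""
    return 0.0  # replace with a real ADC read

def infer_restlessness(samples) -> bool:
    """Stand-in for the on-device model: in practice a quantised tinyML
    classifier running on the fibre's microcontroller."""
    return statistics.pstdev(samples) > 0.5  # crude movement-variance rule

def step():
    window.append(read_sample())
    if len(window) == WINDOW and infer_restlessness(window):
        return "soothe"   # trigger light, sound or micro-stimulation feedback
    return None           # raw sensor data never leaves the device
```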

Sleep science prototypes included a mattress with distributed sensors for posture recognition, an eye mask combining photoplethysmography (PPG) and electromyography (EMG) sensing, and a pillow fitted with an inertial measurement unit (IMU). Prototypes were used to validate signal parsing and human–machine coupling across various sleep scenarios.

Edge-first design places most inference on the fibre to protect user data and reduce interference, according to DeRucci’s CTO, Chen Wenze. Collaboration covered architecture, algorithms, and validation, with early results highlighting comfort, durability, and responsiveness suitable for bedding.

Partners plan to expand cohorts and scenarios into rehabilitation and non-invasive monitoring, and to release selected algorithms and test protocols. Paradiso framed material-level intelligence as a path to gentler interfaces that blend into everyday environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI brain atlas reveals unprecedented detail in MRI scans

Researchers at University College London have developed NextBrain, an AI-assisted brain atlas that visualises the human brain in unprecedented detail. The tool links microscopic tissue imaging with MRI, enabling rapid and precise analysis of living brain scans.

NextBrain maps 333 brain regions using high-resolution post-mortem tissue data, which is combined into a digital 3D model with the aid of AI. The atlas was created over the course of six years by dissecting, photographing, and digitally reconstructing five human brains.

AI played a crucial role in aligning microscope images with MRI scans, ensuring accuracy while significantly reducing the time required for manual labelling. The atlas detects subtle changes in brain sub-regions, such as the hippocampus, crucial for studying diseases like Alzheimer’s.

Testing on thousands of MRI scans demonstrated that NextBrain reliably identifies brain regions across different scanners and imaging conditions, enabling detailed analysis of ageing patterns and early signs of neurodegeneration.
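
As a downstream illustration, separate from the NextBrain release itself, an atlas-based segmentation of an MRI scan could be reduced to per-region volumes with standard neuroimaging tooling; the file name and label values below are assumptions, not NextBrain’s actual outputs.

```python
# Hypothetical sketch: computing region volumes from an atlas-based
# segmentation stored as a labelled NIfTI volume. The file name and
# label IDs are assumptions.
import nibabel as nib
import numpy as np

seg = nib.load("subject01_atlas_seg.nii.gz")        # hypothetical segmentation file
labels = seg.get_fdata().astype(int)
voxel_mm3 = float(np.prod(seg.header.get_zooms()[:3]))  # voxel volume in mm^3

# Volume of every labelled region (label 0 assumed to be background)
region_ids, counts = np.unique(labels, return_counts=True)
volumes = {int(r): c * voxel_mm3 for r, c in zip(region_ids, counts) if r != 0}

for region_id, vol in sorted(volumes.items()):
    print(f"region {region_id}: {vol:.1f} mm^3")
```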

All data, tools, and annotations are openly available through the FreeSurfer neuroimaging platform. The public release of NextBrain aims to accelerate research, support diagnosis, and improve treatment for neurological conditions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tinder tests AI feature that analyses photos for better matches

Tinder is introducing an AI feature called Chemistry, designed to better understand users through interactive questions and optional access to their Camera Roll. The system analyses personal photos and responses to infer hobbies and preferences, offering more compatible match suggestions.

The feature is being tested in New Zealand and Australia ahead of a broader rollout as part of Tinder’s 2026 product revamp. Match Group CEO Spencer Rascoff said Chemistry will become a central pillar in the app’s evolving AI-driven experience.

Privacy concerns have surfaced as the feature requests permission to scan private photos, similar to Meta’s recent approach to AI-based photo analysis. Critics argue that such expanded access offers limited benefits to users compared to potential privacy risks.

Match Group expects a short-term financial impact, projecting a $14 million revenue decline due to Tinder’s testing phase. The company continues to face user losses despite integrating AI tools for safer messaging, better profile curation and more interactive dating experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!