OpenAI unveils Teen Safety Blueprint for responsible AI

OpenAI has launched the Teen Safety Blueprint, a roadmap for responsible AI use by young people. It advises policymakers and developers on age-appropriate design, safeguards, and research to protect teen well-being and expand opportunities.

The company is implementing these principles across its products without waiting for formal regulation. Recent measures include stronger safeguards, parental controls, and an age-prediction system to customise AI experiences for under-18 users.

OpenAI emphasises that protecting teens is an ongoing effort. Collaboration with parents, experts, and young people will help improve AI safety continuously while shaping how technology can support teens responsibly over the long term.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coca-Cola enhances its AI-powered Christmas ad to fix last year’s visual flaws

Coca-Cola has released an improved AI-generated Christmas commercial after last year’s debut campaign drew criticism for its unsettling visuals.

The latest ‘Holidays Are Coming’ ads, developed in part by San Francisco-based Silverside, showcase more natural animation and a wider range of festive creatures, instead of the overly lifelike characters that previously unsettled audiences.

The new version avoids the ‘uncanny valley’ effect that plagued 2024’s ads. The use of generative AI by Coca-Cola reflects a wider advertising trend focused on speed and cost efficiency, even as creative professionals warn about its potential impact on traditional jobs.

Despite the efficiency gains, AI-assisted advertising remains labour-intensive. Teams of digital artists refine the content frame by frame to ensure realistic and emotionally engaging visuals.

Industry data show that 30% of commercials and online videos in 2025 were created or enhanced using generative AI, compared with 22% in 2023.

Coca-Cola’s move follows similar initiatives by major firms, including Google’s first fully AI-generated ad spot launched last month, signalling that generative AI is now becoming a mainstream creative tool across global marketing.

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also use Google Classroom’s free AI tools for lesson planning and administrative support, freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

Meta rejects French ruling over gender bias in Facebook job ads

Meta has rejected a decision by France’s Défenseur des Droits that found its Facebook algorithm discriminates against users based on gender in job advertising. The case was brought by Global Witness and women’s rights groups Fondation des Femmes and Femmes Ingénieures, who argued that Meta’s ad system violates French anti-discrimination law.

The regulator ruled that Facebook’s system treats users differently according to gender when displaying job opportunities, amounting to indirect discrimination. It recommended Meta Ireland and Facebook France make adjustments within three months to prevent gender-based bias.

A Meta spokesperson said the company disagrees with the finding and is ‘assessing its options.’ The complainants welcomed the decision, saying it confirms that platforms are not exempt from laws prohibiting gender-based distinctions in recruitment advertising.

Lawyer Josephine Shefet, representing the groups, said the ruling marks a key precedent. ‘The decision sends a strong message to all digital platforms: they will be held accountable for such bias,’ she said.

OpenAI outlines roadmap for AI safety, accountability and global cooperation

OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

Snap brings Perplexity’s answer engine into Chat for nearly a billion users

Starting in early 2026, Perplexity’s AI will be integrated into Snapchat’s Chat, accessible to nearly 1 billion users. Snapchatters can ask questions and receive concise, cited answers in-app. Snap says the move reinforces its position as a trusted, mobile-first AI platform.

Under the deal, Perplexity will pay Snap $400 million in cash and equity over a one-year period, tied to the global rollout. Revenue contribution is expected to begin in 2026. Snap points to its 943 million MAUs and its reach of over 75% of 13–34-year-olds in 25+ countries.

Perplexity frames the move as meeting curiosity where it occurs, within everyday conversations. Evan Spiegel says Snap aims to make AI more personal, social, and fun, woven into friendships and conversations. Both firms pitch the partnership as enhancing discovery and learning on Snapchat.

Perplexity joins, rather than replaces, Snapchat’s existing My AI. Messages sent to Perplexity will inform personalisation on Snapchat, similar to My AI’s current behaviour. Snap claims the approach is privacy-safe and designed to provide credible, real-time answers from verifiable sources.

Snap casts this as a first step toward a broader AI partner platform inside Snapchat. The companies plan creative, trusted ways for leading AI providers to reach Snap’s global community. The integration aims to enable seamless, in-chat exploration while keeping users within Snapchat’s product experience.

How GEMS turns Copilot time savings into personalised teaching at scale

GEMS Education is rolling out Microsoft 365 Copilot to cut admin and personalise learning, with clear guardrails and transparency. Teachers spend less time on preparation and more time with pupils. The aim is augmentation, not replacement.

Copilot serves as a single workspace for plans, sources, and visuals. Differentiated materials arrive faster for struggling and advanced learners. More time goes to feedback and small groups.

Student projects are accelerating. A Grade 8 pupil built a smart-helmet prototype, using AI to guide circuitry, code, and documentation, and moved quickly from idea to a functional build.

The School of Research and Innovation opened in August 2025 as a living lab, hosting educator training, research partners, and student incubation. A Microsoft-backed stack underpins the campus.

Teachers are co-creating lightweight AI agents for curriculum and analytics. Expert oversight and safety patterns stay central. The focus is on measurable time savings and real-world learning.

Tinder tests AI feature that analyses photos for better matches

Tinder is introducing an AI feature called Chemistry, designed to better understand users through interactive questions and optional access to their Camera Roll. The system analyses personal photos and responses to infer hobbies and preferences, offering more compatible match suggestions.

The feature is being tested in New Zealand and Australia ahead of a broader rollout as part of Tinder’s 2026 product revamp. Match Group CEO Spencer Rascoff said Chemistry will become a central pillar in the app’s evolving AI-driven experience.

Privacy concerns have surfaced as the feature requests permission to scan private photos, similar to Meta’s recent approach to AI-based photo analysis. Critics argue that such expanded access offers limited benefits to users compared to potential privacy risks.

Match Group expects a short-term financial impact, projecting a $14 million revenue decline due to Tinder’s testing phase. The company continues to face user losses despite integrating AI tools for safer messaging, better profile curation and more interactive dating experiences.

Social media platforms ordered to enforce minimum age rules in Australia

Australia’s eSafety Commissioner has formally notified major social media platforms, including Facebook, Instagram, TikTok, Snapchat, and YouTube, that they must comply with new minimum age restrictions from 10 December.

The rule will require these services to prevent social media users under 16 from creating accounts.

eSafety determined that nine popular services currently meet the definition of age-restricted platforms, since their main purpose is to enable online social interaction. Platforms that fail to take reasonable steps to block underage users may face enforcement measures, including fines of up to AU$49.5 million.

The agency clarified that the list of age-restricted platforms will not remain static, as new services will be reviewed and reassessed over time. Others, such as Discord, Google Classroom, and WhatsApp, are excluded for now as they do not meet the same criteria.

Commissioner Julie Inman Grant said the new framework aims to delay children’s exposure to social media and limit harmful design features such as infinite scroll and opaque algorithms.

She emphasised that age limits are only part of a broader effort to build safer, more age-appropriate online environments supported by education, prevention, and digital resilience.

EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, which can only be achieved through joint action and long-term commitment.
