Oracle and Ci4CC join forces to advance AI in cancer research

Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.

The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.

Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.

The agreement, announced at Ci4CC’s annual symposium in Miami Beach, USA, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

University of Athens partners with Google to boost AI education

The National and Kapodistrian University of Athens has announced a new partnership with Google to enhance university-level education in AI. The collaboration grants all students free 12-month access to Google’s AI Pro programme, a suite of advanced learning and research tools.

Through the initiative, students can use Gemini 2.5 Pro, Google’s latest AI model, along with Deep Research and NotebookLM for academic exploration and study organisation. The offer also includes 2 TB of cloud storage and access to Veo 3 for video creation and Jules for coding support.

The programme aims to expand digital literacy and increase hands-on engagement with generative and research-driven AI tools. By integrating these technologies into everyday study, the university hopes to cultivate a new generation of AI-experienced graduates.

University officials view the collaboration as a milestone for AI-driven education in Greece, following recent national initiatives to introduce AI programmes in schools and healthcare. The partnership marks a significant step in aligning higher education with the global digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic strengthens European growth through Paris and Munich offices

AI firm Anthropic is expanding its European presence by opening new offices in Paris and Munich, strengthening its footprint alongside existing hubs in London, Dublin, and Zurich.

The expansion follows rapid growth across the EMEA region, where the company has tripled its workforce and seen a ninefold increase in annual run-rate revenue.

The move comes as European businesses increasingly rely on Claude for critical enterprise tasks. Companies such as L’Oréal, BMW, SAP, and Sanofi are using the AI model to enhance software, improve workflows, and ensure operational reliability.

Germany and France, both among the top 20 countries in Claude usage per capita, are now at the centre of Anthropic’s strategic expansion.

Anthropic is also strengthening its leadership team across Europe. Guillaume Princen will oversee startups and digital-native businesses, while Pip White and Thomas Remy will lead the northern and southern EMEA regions, respectively.

A new head will soon be announced for Central and Eastern Europe, reflecting the company’s growing regional reach.

Beyond commercial goals, Anthropic is partnering with European institutions to promote AI education and culture. It collaborates with the Light Art Space in Berlin, supports student hackathons through TUM.ai, and works with the French organisation Unaite to advance developer training.

These partnerships reinforce Anthropic’s long-term commitment to responsible AI growth across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta invests $600 billion to expand AI data centres across the US

Meta is launching a $600 billion investment in the US to expand its AI infrastructure, aimed at boosting innovation, job creation, and sustainability.

Instead of outsourcing development, the company is building its new generation of AI data centres domestically, reinforcing America’s leadership in technology and supporting local economies.

Since 2010, Meta’s data centre projects have supported more than 30,000 skilled trade jobs and 5,000 operational roles, generating $20 billion in business for US subcontractors. These facilities are designed to power Meta’s AI ambitions while driving regional economic growth.

The company emphasises responsible development by investing heavily in renewable energy and water efficiency. Its projects have added 15 gigawatts of new energy capacity to US power grids, upgraded local infrastructure, and helped restore water systems in surrounding communities.

Meta aims to become fully water positive by 2030.

Beyond infrastructure, Meta has channelled $58 million into community grants for schools, nonprofits, and local initiatives, including STEM education and veteran training programmes.

As AI grows increasingly central to digital progress, Meta’s continued investment in sustainable, community-focused data centres underscores its vision for a connected, intelligent future built within the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.
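To make the idea concrete, here is a minimal illustrative sketch of how an agent framework might combine two of the safeguards described above: ranking instructions by trust level and requiring explicit user approval before sensitive actions. It is not OpenAI’s implementation; all names, trust levels and the list of sensitive actions are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical trust levels for an instruction-hierarchy check:
# instructions typed by the user outrank instructions found inside
# fetched web content, which are treated as untrusted data.
TRUST_USER = 2
TRUST_TOOL_OUTPUT = 1

# Hypothetical set of actions that always need human sign-off.
SENSITIVE_ACTIONS = {"send_email", "make_payment", "delete_file"}


@dataclass
class ProposedAction:
    name: str          # e.g. "send_email"
    source_trust: int  # trust level of the instruction that triggered it


def require_approval(action: ProposedAction, ask_user) -> bool:
    """Gate sensitive or low-trust actions behind an explicit user approval."""
    if action.name in SENSITIVE_ACTIONS or action.source_trust < TRUST_USER:
        return ask_user(f"Agent wants to run '{action.name}'. Allow?")
    return True  # routine, user-initiated actions pass through


if __name__ == "__main__":
    # An instruction embedded in a fetched web page (low trust) tries to send an email.
    action = ProposedAction(name="send_email", source_trust=TRUST_TOOL_OUTPUT)
    deny_all = lambda prompt: False  # stand-in for a real approval dialog
    print("approved" if require_approval(action, deny_all) else "blocked")  # -> blocked
```

The point of the pattern is simply that instructions originating from untrusted content never trigger consequential actions without the user in the loop, which mirrors the approval-prompt behaviour described above.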

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cars.com launches Carson AI to transform online car shopping

US tech company Cars.com has unveiled Carson, a multilingual AI search engine designed to revolutionise the online car shopping experience.

Instead of relying on complex filters, Carson interprets natural language queries such as ‘a reliable car for a family of five’ or ‘a used truck under $30,000’, instantly producing targeted results tailored to each shopper’s needs.
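To illustrate the concept only (this is not Cars.com’s actual system, and every field name is hypothetical), the toy sketch below shows how a natural-language query might be reduced to the structured filters a listings search already understands; a production system would rely on a language model rather than regular expressions.

```python
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class SearchFilters:
    body_styles: list[str]        # e.g. ["truck"]
    max_price: Optional[int]      # budget ceiling in dollars
    min_seats: Optional[int]      # seats implied by "family of N"
    condition: Optional[str]      # "used", "new", or None


def parse_query(query: str) -> SearchFilters:
    """Toy translation of a shopper's sentence into structured listing filters."""
    q = query.lower()
    price = re.search(r"under \$?([\d,]+)", q)
    family = re.search(r"family of (\d+)", q)
    return SearchFilters(
        body_styles=["truck"] if "truck" in q else (["suv", "minivan"] if family else []),
        max_price=int(price.group(1).replace(",", "")) if price else None,
        min_seats=int(family.group(1)) if family else None,
        condition="used" if "used" in q else None,
    )


print(parse_query("a used truck under $30,000"))
# SearchFilters(body_styles=['truck'], max_price=30000, min_seats=None, condition='used')
```

Once a query is expressed as filters like these, it can be matched against inventory in the same way a conventional filter-based search would be.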

The feature already powers around 15% of all web and mobile searches on Cars.com, and early data show that users who engage with Carson return to the site twice as often and save three times more vehicles.

They also generate twice as many leads and convert 30% more frequently from search to vehicle detail pages.

Cars.com aims to simplify decision-making for its 25 million monthly shoppers, 70% of whom begin their search without knowing which brand or model to choose.

Carson helps these undecided users explore lifestyle, emotional and practical preferences while guiding them through Cars.com’s award-winning listings.

Further updates will introduce AI-generated summaries, personalised comparisons and search refinement suggestions.

Cars.com’s parent company, Cars Commerce, plans to expand its use of AI-driven tools to strengthen its role at the forefront of automotive retail innovation, offering a more efficient and intelligent marketplace for both consumers and dealerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta launches AI app in Europe with new Vibes video feed

Meta has launched its new AI app across Europe, featuring Vibes, an interactive feed dedicated to creating and sharing short AI-generated videos. The platform brings together media generation, remixing and collaboration tools designed to encourage creativity and social expression.

Vibes debuted in the US, where Meta reported a tenfold rise in AI media creation since launch. European users can now use text prompts to generate, edit and animate videos, or remix existing clips by adding music, visuals and personalised styles.

The app also serves as a central hub for users’ Meta AI assistants and connected AI glasses. People can chat with the assistant, receive creative ideas, or enhance their photos and animations using advanced AI-powered editing tools integrated within the same experience.

Meta said the rollout marks a new stage in its effort to make AI-driven creativity more accessible. The company plans to expand the app’s capabilities further, promising additional features that combine entertainment, collaboration and real-time content generation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Coca-Cola enhances its AI-powered Christmas ad to fix last year’s visual flaws

Coca-Cola has released an improved AI-generated Christmas commercial after last year’s debut campaign drew criticism for its unsettling visuals.

The latest ‘Holidays Are Coming’ ads, developed in part by San Francisco-based Silverside, showcase more natural animation and a wider range of festive creatures, instead of the overly lifelike characters that previously unsettled audiences.

The new version avoids the ‘uncanny valley’ effect that plagued 2024’s ads. The use of generative AI by Coca-Cola reflects a wider advertising trend focused on speed and cost efficiency, even as creative professionals warn about its potential impact on traditional jobs.

Despite the efficiency gains, AI-assisted advertising remains labour-intensive. Teams of digital artists refine the content frame by frame to ensure realistic and emotionally engaging visuals.

Industry data show that 30% of commercials and online videos in 2025 were created or enhanced using generative AI, compared with 22% in 2023.

Coca-Cola’s move follows similar initiatives by major firms, including Google’s first fully AI-generated ad spot launched last month, signalling that generative AI is now becoming a mainstream creative tool across global marketing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Google uses AI to support teachers and inspire students

Google is redefining education with AI designed to enhance learning, rather than replace teachers. The company has unveiled new tools grounded in learning science to support both educators and students, aiming to make learning more effective, efficient and engaging.

Through its Gemini platform, users can follow guided learning paths that encourage discovery rather than passive answers.

YouTube and Search now include conversational features that allow students to ask questions as they learn, while NotebookLM can transform personal materials into quizzes or immersive study aids.

Instructors can also utilise Google Classroom’s free AI tools for lesson planning and administrative support, thereby freeing up time for direct student engagement.

Google emphasises that its goal is to preserve the human essence of education while using AI to expand understanding. The company also addresses challenges linked to AI in learning, such as cheating, fairness, accuracy and critical thinking.

It is exploring assessment models that cannot be easily replicated by AI, including debates, projects, and oral examinations.

The firm pledges to develop its tools responsibly by collaborating with educators, parents and policymakers. By combining the art of teaching with the science of AI-driven learning, Google seeks to make education more personal, equitable and inspiring for all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO adopts first global ethical framework for neurotechnology

UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards to ensure neurotechnological innovation benefits those in need without compromising mental privacy.

Launched in 2019 under Director-General Audrey Azoulay, the initiative builds on UNESCO’s earlier work on AI ethics. Azoulay described neurotechnology as a ‘new frontier of human progress’ that demands strict ethical boundaries to protect the inviolability of the human mind. The framework reflects UNESCO’s belief that technology should serve humanity responsibly and inclusively.

Neurotechnology, which enables direct interaction with the nervous system, is rapidly expanding, with investment in the sector rising by 700% between 2014 and 2021. While medical uses, such as deep brain stimulation and brain–computer interfaces, offer hope for people with Parkinson’s disease or disabilities, consumer devices that read neural data pose serious privacy concerns. Many users unknowingly share sensitive information about their emotions or mental states through everyday gadgets.

The Recommendation calls on governments to regulate these technologies, ensure they remain accessible, and protect vulnerable groups, especially children and workers. It urges bans on non-therapeutic use in young people and warns against monitoring employees’ mental activity or productivity without explicit consent.

UNESCO also stresses the need for transparency and better regulation of products that may alter behaviour or foster addiction.

Developed after consultations with over 8,000 contributors from academia, industry, and civil society, the framework was drafted by an international group of experts led by scientists Hervé Chneiweiss and Nita Farahany. UNESCO will now help countries translate the principles into national laws, as it has done with its 2021 AI ethics framework.

The Recommendation’s adoption, finalised at the General Conference in Samarkand, marks a new milestone in the global governance of emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!