BBVA deepens AI partnership with OpenAI

OpenAI and BBVA have agreed on a multi-year strategic collaboration designed to embed artificial intelligence across the global banking group.

The initiative will expand the use of ChatGPT Enterprise to all 120,000 BBVA employees, marking one of the largest enterprise deployments of generative AI in the financial sector.

The programme focuses on transforming customer interactions, internal workflows and decision making.

BBVA plans to co-develop AI-driven solutions with OpenAI to support bankers, streamline risk analysis and redesign processes such as software development and productivity support, instead of relying on fragmented digital tools.

The rollout follows earlier deployments that demonstrated strong engagement and measurable efficiency gains, with employees saving hours each week on routine tasks.

ChatGPT Enterprise will be implemented with enterprise-grade security and privacy safeguards, ensuring compliance within a highly regulated environment.

Beyond internal operations, BBVA is accelerating its shift toward AI-native banking by expanding customer-facing services powered by OpenAI models.

The collaboration reflects a broader move among major financial institutions to integrate AI at the core of products, operations and personalised banking experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.
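
The article does not describe MahaCrimeOS AI's internals, but a complaint-intake step built on Azure OpenAI Service could, in broad strokes, look like the sketch below. The endpoint, deployment name, prompt and output fields are all illustrative assumptions, not details of the actual platform.

```python
# Hypothetical sketch only: the endpoint, deployment name, prompt and output
# schema are assumptions for illustration, not details of MahaCrimeOS AI.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR_KEY",                                        # assumed credential
    api_version="2024-06-01",
    azure_endpoint="https://your-resource.openai.azure.com",   # assumed endpoint
)

complaint_text = "Complaint received in Marathi describing an online payment fraud..."

response = client.chat.completions.create(
    model="gpt-4o",  # name of an assumed Azure deployment
    messages=[
        {
            "role": "system",
            "content": (
                "Extract structured case details from a citizen complaint. "
                "Return JSON with fields: language, incident_type, amount, "
                "accounts_mentioned, suggested_sections."
            ),
        },
        {"role": "user", "content": complaint_text},
    ],
)

# Structured summary an investigating officer could review and correct.
print(response.choices[0].message.content)
```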

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mercedes-Benz nominates new supervisory board members to drive AI and sustainability

Mercedes-Benz Group AG has announced planned changes to its Supervisory Board, proposing the appointment of Katharina Beumelburg and Rashmi Misra at the company’s 2026 Annual General Meeting.

The move is intended to strengthen the board’s expertise in sustainability, industrial transformation, and AI, reflecting the company’s strategic focus on decarbonisation and digital innovation.

Beumelburg brings extensive experience in global sustainability and energy transition from roles at Heidelberg Materials, SLB, and Siemens, while Misra brings deep expertise in AI and emerging technologies, having held senior positions at Analog Devices and Microsoft.

They will succeed Dame Polly Courtice and Prof. Dr Helene Svahn, who will step down in April 2026 after contributing to Mercedes-Benz’s strategic development in recent years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chinese tech giant bolsters AI ambitions with new foundation model division

Huawei Technologies is intensifying its AI strategy with the establishment of a dedicated foundation model unit within its 2012 Laboratories research arm, reflecting the heightened competition among China’s major tech companies to develop advanced AI systems.

A recruitment advertisement posted in October signals that the Shenzhen-based telecom and tech giant is proactively wooing global AI talent to assemble a world-class team focused on foundational model development.

Huawei has confirmed the establishment of the unit but has offered few operational details.

Richard Yu Chengdong, head of Huawei’s consumer group and newly appointed chairman of the Investment Review Board overseeing AI strategy, has personally promoted the drive on social media, urging young engineers to help ‘make the world’s most powerful AI.’

The move underscores Huawei’s broader ambition to challenge both domestic rivals and Western AI leaders in core areas of generative AI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI is powering smarter digital maps for commercial fleets

AI is increasingly embedded in digital mapping systems used by commercial fleets, transforming static navigation tools into adaptive decision-making platforms.

These AI-powered systems ingest real-time data from vehicles, traffic feeds, weather, and sensors to optimise routes and operations continuously.

For fleet operators, this enables more accurate arrival times, reduced fuel consumption, and faster responses to disruptions such as congestion or road closures. AI models can also anticipate problems before they occur by identifying patterns in historical and live data.
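
As a rough illustration of the idea, and not any specific vendor's system, the snippet below blends historical segment speeds with a live congestion signal to adjust an estimated arrival time. All names and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    length_km: float
    historical_kmh: float   # average speed learned from past trips on this segment
    live_factor: float      # 1.0 = normal conditions, <1.0 = congestion reported now

def predicted_eta_minutes(route: list[Segment]) -> float:
    """Blend historical speeds with live congestion signals, segment by segment."""
    total_hours = 0.0
    for seg in route:
        effective_speed = seg.historical_kmh * max(seg.live_factor, 0.1)
        total_hours += seg.length_km / effective_speed
    return total_hours * 60

route = [
    Segment(12.0, 60.0, 1.0),   # clear motorway stretch
    Segment(4.5, 30.0, 0.6),    # live feed reports heavy congestion
]
print(f"ETA: {predicted_eta_minutes(route):.1f} minutes")
```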

Smarter maps support broader fleet intelligence, including predictive maintenance, driver behaviour analysis, and compliance monitoring. Mapping platforms are becoming core operational infrastructure rather than auxiliary navigation tools.

As logistics networks become increasingly complex, AI-driven mapping is emerging as a competitive necessity for commercial fleets seeking efficiency, resilience, and scalability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.
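
As a conceptual sketch only, and not a description of OpenAI's actual controls, a defence-in-depth pipeline can be pictured as independent layers that each get a veto over a request, as in the toy example below; every rule and name here is an assumption for illustration.

```python
# Toy defence-in-depth pipeline: each layer can independently reject a request,
# so no single safeguard is load-bearing. All rules here are illustrative.

def check_access(user: dict) -> bool:
    # Layer 1: access control -- only vetted accounts reach sensitive capabilities.
    return user.get("verified", False)

def check_policy(prompt: str) -> bool:
    # Layer 2: content policy -- block obvious misuse patterns (placeholder keyword rule).
    blocked_terms = {"exploit this target", "write ransomware"}
    return not any(term in prompt.lower() for term in blocked_terms)

def log_for_monitoring(user: dict, prompt: str) -> None:
    # Layer 3: monitoring -- record the request so detection systems can review it later.
    print(f"audit: user={user.get('id')} prompt_len={len(prompt)}")

def handle_request(user: dict, prompt: str) -> str:
    if not check_access(user):
        return "denied: access control"
    if not check_policy(prompt):
        return "denied: content policy"
    log_for_monitoring(user, prompt)
    return "forwarded to model"

print(handle_request({"id": "analyst-1", "verified": True}, "Help audit this code for bugs"))
```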

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Disney backs OpenAI with $1bn investment and licensing pact

The Walt Disney Company has struck a landmark agreement with OpenAI, becoming the first major content licensing partner on Sora, the AI company’s short-form generative video platform.

Under the three-year deal, Sora will generate short videos using more than 200 animated characters and creatures from Disney, Pixar, Marvel, and Star Wars. The licence also covers ChatGPT Images, excluding talent likenesses and voices.

Beyond licensing, Disney will become a major OpenAI customer, using its APIs to develop new products and experiences, including for Disney+, while deploying ChatGPT internally across its workforce. Disney will also make a $1 billion equity investment in OpenAI and receive warrants for additional shares.

Both companies frame the partnership as a test case for responsible AI in creative industries. Executives say the agreement is designed to expand storytelling possibilities while protecting creators’ rights, user safety, and intellectual property across platforms.

Subject to final approvals, Sora-generated Disney content is expected to begin rolling out in early 2026. Curated selections may appear on Disney+, marking a new phase in how established entertainment brands engage with generative AI tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube offers creators payments in PayPal stablecoin

YouTube has introduced a new payment option for US-based creators, allowing them to receive earnings in PayPal’s stablecoin, PYUSD. The move makes YouTube the latest major tech company to experiment with crypto-linked payments, while simplifying payouts for content creators.

PayPal manages the conversion and custody of the stablecoin, meaning YouTube does not directly handle any crypto. The feature uses YouTube’s existing payout system and follows PayPal’s broader PYUSD rollout earlier this year.

Stablecoins have gained attention among tech firms following the signing of the GENIUS Act in July 2025, which provides a federal framework for these assets. Stripe and Google are exploring stablecoins for faster settlements, reflecting rising interest in regulated digital payments.

PYUSD, which reached a market capitalisation of nearly $4 billion, is already integrated into several PayPal products, including Venmo and merchant tools. For now, the payout option is limited to US creators, with no timeline announced for expansion to other regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use grows among EU enterprises in 2025

In 2025, one in five EU enterprises with at least ten employees reported using AI technologies, marking a significant rise from 13.5% in 2024. AI adoption has more than doubled since 2021, showing its increasing use in business across the EU.

Nordic countries led the way, with Denmark at 42%, Finland at 37.8%, and Sweden at 35%. In contrast, Romania, Poland, and Bulgaria had the lowest adoption rates, ranging from 5.2% to 8.5%.

Almost all EU member states recorded increases compared with the previous year, with Denmark, Finland, and Lithuania showing the most significant gains.

Enterprises mainly used AI to analyse text, generate multimedia, produce language, and convert speech into machine-readable formats. Analysing written language saw the most significant growth in 2025, followed by content generation, highlighting AI’s expanding role in communication and data processing.

Rising AI adoption is also linked to efficiency gains and innovation across EU businesses. Companies report using AI to streamline operations, support decision-making, and enhance customer engagement, signalling broader economic and technological impacts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump signs order blocking individual US states from enforcing AI rules

US President Donald Trump has signed an executive order aimed at preventing individual US states from enforcing their own AI regulations, arguing that AI oversight should be handled at the federal level. Speaking at the White House, Trump said a single national framework would avoid fragmented rules, while his AI adviser, David Sacks, added that the administration would push back against what it views as overly burdensome state laws, except for measures focused on child safety.

The move is welcomed by major technology companies, which have long warned that a patchwork of state-level regulations could slow innovation and weaken the US position in the global AI race, particularly in comparison to China. Industry groups say a unified national approach would provide clarity for companies investing billions of dollars in AI development and help maintain US leadership in the sector.

However, the executive order has sparked strong backlash from several states, most notably California. Governor Gavin Newsom criticised the decision as an attempt to undermine state protections, pointing to California’s own AI law that requires large developers to address potential risks posed by their models.

Other states, including New York and Colorado, have also enacted AI regulations, arguing that state action is necessary in the absence of comprehensive federal safeguards.

Critics warn that blocking state laws could leave consumers exposed if federal rules are weak or slow to emerge, while some legal experts caution that a national framework will only be effective if it offers meaningful protections. Despite these concerns, tech lobby groups have praised the order and expressed readiness to work with the White House and Congress to establish nationwide AI standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!