BBVA deepens AI partnership with OpenAI

OpenAI and BBVA have agreed on a multi-year strategic collaboration designed to embed artificial intelligence across the global banking group.

The initiative will expand the use of ChatGPT Enterprise to all 120,000 BBVA employees, marking one of the largest enterprise deployments of generative AI in the financial sector.

The programme focuses on transforming customer interactions, internal workflows and decision making.

BBVA plans to co-develop AI-driven solutions with OpenAI to support bankers, streamline risk analysis and redesign processes such as software development and productivity support, instead of relying on fragmented digital tools.

The rollout follows earlier deployments that demonstrated strong engagement and measurable efficiency gains, with employees saving hours each week on routine tasks.

ChatGPT Enterprise will be implemented with enterprise-grade security and privacy safeguards, ensuring compliance within a highly regulated environment.

Beyond internal operations, BBVA is accelerating its shift toward AI-native banking by expanding customer-facing services powered by OpenAI models.

The collaboration reflects a broader move among major financial institutions to integrate AI at the core of products, operations and personalised banking experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes cybercrime investigations in India

Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the rapid growth of cybercrime.

MahaCrimeOS AI, already in use across Nagpur district, will now be deployed to more than 1,100 police stations statewide, significantly accelerating case handling and investigation workflows.

The system acts as an investigation copilot, automating complaint intake, evidence extraction and legal documentation across multiple languages.

Officers can analyse transaction trails, request data from banks and telecom providers and follow standardised investigation pathways, instead of relying on slow manual processes.

Built using Microsoft Foundry and Azure OpenAI Service, MahaCrimeOS AI integrates policing protocols, criminal law references and open-source intelligence.
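
The article does not describe the system's internals, but a minimal sketch of how complaint intake might be automated on the Azure OpenAI Service could look like the following. The endpoint, deployment name and field schema are illustrative assumptions, not details of MahaCrimeOS AI.

```python
# Minimal sketch: extracting structured fields from a free-text complaint
# with the Azure OpenAI Service. Endpoint, deployment name and the field
# schema are illustrative assumptions, not details of MahaCrimeOS AI.
import json
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

PROMPT = (
    "Extract the following fields from the complaint as JSON: "
    "complainant_name, incident_date, fraud_type, amount_lost, "
    "bank_accounts_mentioned, phone_numbers_mentioned. "
    "The complaint may be in any Indian language; return field values in English."
)

def intake_complaint(text: str) -> dict:
    """Turn a free-text, possibly multilingual complaint into structured fields."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed deployment name
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = "On 3 March I received a call claiming to be from my bank and lost Rs 45,000..."
    print(intake_complaint(sample))
```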

Investigators report major efficiency gains, handling several cases monthly where only one was previously possible, while maintaining procedural accuracy and accountability.

The initiative highlights how responsible AI deployment can strengthen public institutions.

By reducing administrative burden and improving investigative capacity, the platform allows officers to focus on victim support and crime resolution, marking a broader shift toward AI-assisted governance in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered grid pilot aims to cut energy costs in Ottawa

Canada has announced new federal funding to pilot AI tools on the electricity grid, backing a project designed to improve reliability, affordability and efficiency as energy demand grows.

The government of Canada will provide $6 million to Hydro Ottawa under the Ottawa Distributed Energy Resource Accelerator programme. The initiative will utilise AI-enhanced predictive analytics to forecast peak demand and help balance electricity supply and demand in near real-time.

The project will turn customer-owned technologies such as smart thermostats, electric vehicle chargers and home batteries into responsive grid resources. By aggregating them, Hydro Ottawa aims to manage local constraints and reduce costly network upgrades, starting in areas like Kanata North that are experiencing rapid growth.
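
Hydro Ottawa has not published the programme's algorithms, but the underlying idea of pooling many small customer devices and dispatching them against a forecast peak can be illustrated with a short sketch. The device figures, feeder limit and greedy dispatch rule below are invented for illustration, not the utility's actual model.

```python
# Illustrative sketch of the aggregation idea: pool the flexible capacity of
# many customer devices and dispatch it against forecast peak-hour demand.
# All numbers and the simple greedy dispatch rule are invented for illustration.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    flexible_kw: float      # load the device can shed or shift on request
    available: bool = True  # e.g. EV is plugged in, battery has charge

def dispatch(devices: list[Device], forecast_peak_kw: float, feeder_limit_kw: float) -> float:
    """Return the peak load after calling on flexible devices, largest first."""
    excess = max(0.0, forecast_peak_kw - feeder_limit_kw)
    for d in sorted(devices, key=lambda d: d.flexible_kw, reverse=True):
        if excess <= 0:
            break
        if d.available:
            shed = min(d.flexible_kw, excess)
            excess -= shed
            print(f"Request {shed:.1f} kW reduction from {d.name}")
    return feeder_limit_kw + excess  # residual peak the network must still carry

fleet = [
    Device("smart thermostat cluster", flexible_kw=120.0),
    Device("EV charger group", flexible_kw=300.0),
    Device("home battery pool", flexible_kw=250.0),
]
# Suppose analytics forecast a 1,550 kW evening peak on a feeder rated for 1,200 kW.
residual = dispatch(fleet, forecast_peak_kw=1550.0, feeder_limit_kw=1200.0)
print(f"Residual peak after dispatch: {residual:.0f} kW")
```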

Officials say the programme will give households more control over energy use while strengthening grid resilience. The pilot is also intended to serve as a model that could be scaled across other neighbourhoods and electricity systems.

The funding comes through the Energy Innovation Program, which supports innovative grid demonstrations and AI-driven energy projects. Ottawa says such initiatives are key to modernising Canada’s electricity system and supporting the transition to a low-carbon economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNODC and INTERPOL announce Global Fraud Summit in 2026

The United Nations Office on Drugs and Crime (UNODC), in cooperation with the International Criminal Police Organization (INTERPOL), will convene the Global Fraud Summit 2026 at the Vienna International Centre, Austria, from 16 to 17 March 2026.

UNODC and INTERPOL invite applications for participation from private sector entities, civil society organisations, and academic institutions. Applications must be submitted by 12 December 2025.

The Summit will provide a platform for discussion on current trends, risks, and responses related to fraud, including its digital and cross-border dimensions. Discussions will address challenges associated with detection, investigation, prevention, and international cooperation in fraud-related cases.

The objectives of the Summit include:

  • Facilitating coordination among national and international stakeholders
  • Supporting information exchange across sectors and jurisdictions
  • Sharing policy, operational, and technical approaches to fraud prevention and response
  • Identifying areas for further cooperation and capacity-building

The ministerial-level meeting will bring together senior representatives from governments, international and regional organisations, law enforcement authorities, the private sector, academia, and civil society. Participating institutions are encouraged to nominate delegates at an appropriate senior level.

The Summit is supported by a financial contribution from the Government of the United Kingdom of Great Britain and Northern Ireland.

Applications must be submitted through the application form on the official website.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

International Criminal Court (ICC) issues policy on cyber-enabled crimes

The Office of the Prosecutor (OTP) of the International Criminal Court (ICC) has issued a Policy on Cyber-Enabled Crimes under the Rome Statute. The Policy sets out how the OTP interprets and applies the existing ICC legal framework to conduct that is committed or facilitated through digital and cyber means.

The Policy clarifies that the ICC’s jurisdiction remains limited to crimes defined in the Rome Statute: genocide, crimes against humanity, war crimes, the crime of aggression, and offences against the administration of justice. It does not extend to ordinary cybercrimes under domestic law, such as hacking, fraud, or identity theft, unless such conduct forms part of or facilitates one of the crimes within the Court’s jurisdiction.

According to the Policy, the Rome Statute is technology-neutral. This means that the legal assessment of conduct depends on whether the elements of a crime are met, rather than on the specific tools or technologies used.

As a result, cyber means may be relevant both to the commission of Rome Statute crimes and to the collection and assessment of evidence related to them.

The Policy outlines how cyber-enabled conduct may relate to each category of crimes under the Rome Statute. Examples include cyber operations affecting essential civilian services, the use of digital platforms to incite or coordinate violence, cyber activities causing indiscriminate effects in armed conflict, cyber operations linked to inter-State uses of force, and digital interference with evidence, witnesses, or judicial proceedings before the ICC.

The Policy was developed through consultations with internal and external legal and technical experts, including the OTP’s Special Adviser on Cyber-Enabled Crimes, Professor Marko Milanović. It does not modify or expand the ICC’s jurisdiction, which remains governed exclusively by the Rome Statute.

Currently, there are no publicly known ICC cases focused specifically on cyber-enabled crimes. However, the issuance of the Policy reflects the OTP’s assessment that digital conduct may increasingly be relevant to the commission, facilitation, and proof of crimes within the Court’s mandate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU supports Germany’s semiconductor expansion

The European Commission has approved €623 million in German support for two first-of-a-kind semiconductor factories in Dresden and Erfurt.

The funding will help GlobalFoundries expand its site to create new wafer capacity and will assist X-FAB in building an open foundry designed for advanced micro-electromechanical systems.

Both projects aim to increase Europe’s strategic autonomy in chip production, rather than allowing dependence on non-European suppliers to deepen.

The facility planned by GlobalFoundries will adapt technologies developed under the IPCEI Microelectronics and Communication Technologies framework for dual-use needs in aerospace, defence and critical infrastructure.

The manufacturing process will take place entirely within the EU to meet strict security and reliability demands. X-FAB’s project will offer services that European firms, including start-ups and small companies, currently source from abroad.

The new plant is expected to begin commercial operation by 2029 and will introduce manufacturing capabilities not yet available in Europe.

In return for public support, both companies will pursue innovation programmes, strengthen cross-border cooperation, and apply priority-rated orders during supply shortages, in line with the European Chips Act.

They will also develop training schemes to expand the pool of skilled workers, rather than relying on the limited existing capacity. Each company has committed to seeking recognition for its facilities as Open EU Foundries.

The Commission concluded that the aid packages comply with EU State aid rules because they encourage essential economic activity, have a clear incentive effect and remain proportionate to the funding gaps identified during assessment.

These measures form part of Europe’s broader shift toward a more resilient semiconductor ecosystem and follow earlier decisions supporting similar investments across member states.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit challenges Australia’s teen social media ban

The US social media company Reddit has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open for younger users, such as Roblox, Pinterest and YouTube Kids. At the same time, platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under sixteen.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US approaches universal 5G as global adoption surges

New data from Omdia and 5G Americas showed rapid global growth in wireless connectivity during the third quarter of 2025, with nearly three billion 5G connections worldwide.

North America remained the most advanced region in terms of adoption, reaching penetration levels that almost match its population.

The US alone recorded 341 million 5G connections, one of the highest per capita adoption rates in the world and far above the global average.

Analysts noted that strong device availability and sustained investment continue to reinforce the region’s leadership. Enhanced features such as improved uplink performance and integrated sensing are expected to accelerate the shift towards early 5G-Advanced capabilities.

Growth in cellular IoT also remained robust. North America supported more than 270 million connected devices and is forecast to reach nearly half a billion by 2030 as sectors such as manufacturing and utilities expand their use of connected systems.

AI is becoming central to these deployments by managing traffic, automating operations and enabling more innovative industrial applications.

Future adoption is set to intensify as global 5G connections are projected to surpass 8.6 billion by 2030.

Rising interest in fixed wireless access is driving multi-device usage, offering high-speed connectivity for households and small firms instead of relying solely on fibre networks that remain patchy in many areas.

Globally, fixed wireless access has reached more than 78 million connections, with strong annual growth. Analysts believe that expanding infrastructure will support demand for low-latency connectivity, and the addition of satellite-based systems is expected to extend coverage to remote locations.

By mid-November 2025, operators had launched 379 commercial 5G networks worldwide, including seventeen in North America. A similar number of LTE networks operated across the region.

Industry observers said that expanding terrestrial and non-terrestrial networks will form a layered architecture that strengthens resilience, supports emergency response and improves service continuity across land, sea and air.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Image of Eleanor Roosevelt

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025

The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign

The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’

Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights

AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. That requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.
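
The summit framing stays at the level of principles, but one concrete building block of an auditable system is a routine bias check on model decisions. The sketch below computes a simple demographic parity gap on invented data; the groups, figures and tolerance are assumptions for illustration, not a method endorsed by the summit.

```python
# Minimal sketch of one auditing step: a demographic parity check on model
# decisions. The data and the 5-percentage-point tolerance are invented for
# illustration; a real audit would combine several metrics with human review.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group label, approved?) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Invented example: automated benefit-eligibility decisions for two groups.
sample = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
       + [("group_b", True)] * 65 + [("group_b", False)] * 35

gap = parity_gap(sample)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.05:  # tolerance chosen for illustration only
    print("Flag for human review: possible disparate impact.")
```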

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance

Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

Image of UN Human Rights Council

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!