BBVA deepens AI partnership with OpenAI

OpenAI and BBVA have agreed on a multi-year strategic collaboration designed to embed artificial intelligence across the global banking group.

The initiative will expand the use of ChatGPT Enterprise to all 120,000 BBVA employees, marking one of the largest enterprise deployments of generative AI in the financial sector.

The programme focuses on transforming customer interactions, internal workflows and decision making.

BBVA plans to co-develop AI-driven solutions with OpenAI to support bankers, streamline risk analysis and redesign processes such as software development and productivity support, instead of relying on fragmented digital tools.

The rollout follows earlier deployments that demonstrated strong engagement and measurable efficiency gains, with employees saving hours each week on routine tasks.

ChatGPT Enterprise will be implemented with enterprise-grade security and privacy safeguards, ensuring compliance within a highly regulated environment.

Beyond internal operations, BBVA is accelerating its shift toward AI-native banking by expanding customer-facing services powered by OpenAI models.

The collaboration reflects a broader move among major financial institutions to integrate AI at the core of products, operations and personalised banking experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the context of the generative AI era.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Time honours leading AI architects worldwide

Time magazine has named the so-called architects of AI as its Person of the Year, recognising leading technologists reshaping global industries. Figures highlighted include Sam Altman, Jensen Huang, Elon Musk, Mark Zuckerberg, Lisa Su, Demis Hassabis, Dario Amodei and Fei-Fei Li.

Time emphasises that major AI developers have placed enormous bets on infrastructure and capability. Their competition and collaboration have accelerated rapid adoption across businesses and households.

The magazine also examined negative consequences linked to rapid deployment, including mental health concerns and reported chatbot-related lawsuits. Economists warn of significant labour disruption as companies adopt automated systems widely.

The editorial team framed 2025 as a tipping point when AI moved into everyday life. The publication resisted using AI-generated imagery for its cover, choosing traditional artists instead. Industry observers say the selection reflects AI’s central role in shaping economic and social priorities throughout the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches GPT‑5.2 for professional knowledge work

OpenAI has introduced GPT‑5.2, its most advanced model series to date, designed to enhance professional knowledge work. Users report significant time savings, with daily reductions of 40-60 minutes and more than 10 hours per week for heavy users.

The new model excels at generating spreadsheets, presentations, and code, while also handling complex, multi-step projects with improved speed and accuracy.

Performance benchmarks show GPT‑5.2 surpasses industry professionals on GDPval tasks across 44 occupations, producing outputs over eleven times faster and at a fraction of the cost.

Coding abilities have also reached a new standard, encompassing debugging, refactoring, front-end UI work, and multi-language software engineering tasks, providing engineers with a more reliable daily assistant.

GPT‑5.2 Thinking improves long-context reasoning, vision, and tool-calling capabilities. It accurately interprets long documents, charts, and graphical interfaces while coordinating multi-agent workflows.

The model also demonstrates enhanced factual accuracy and fewer hallucinations, making it more dependable for research, analysis, and decision-making.

The rollout includes ChatGPT Instant, Thinking, and Pro plans, as well as API access for developers. Early tests show GPT‑5.2 accelerates research, solves complex problems, and improves professional workflows, setting a new benchmark for real-world AI tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit challenges Australia’s teen social media ban

The US social media company Reddit has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on 10 December and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open for younger users, such as Roblox, Pinterest and YouTube Kids. At the same time, platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under 16.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.


Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025


The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign


The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’


Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights


AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. That requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance


Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

Image of UN Human Rights Council

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU survey shows strong public backing for digital literacy in schools

A new Eurobarometer survey finds that Europeans want digital skills to hold the same status in schools as reading, mathematics and science.

Citizens view digital competence as essential for learning, future employment and informed participation in public life.

Nine in ten respondents believe that schools should guide pupils on how to handle the harmful effects of digital technologies on their mental health and well-being, rather than treating such issues as secondary concerns.

Most Europeans also support a more structured approach to online information. Eight in ten say digital literacy helps them avoid misinformation, while nearly nine in ten want teachers to be fully prepared to show students how to recognise false content.

A majority continues to favour restrictions on smartphones in schools, yet an even larger share supports the use of digital tools specifically designed for learning.

More than half find that AI brings both opportunities and risks for classrooms, which they believe should be examined in greater depth.

Almost half want the EU to shape standards for the use of educational technologies, including rules on AI and data protection.

The findings will inform the European Commission’s 2030 Roadmap on digital education and skills, scheduled for release next year as part of the Union of Skills initiative.

The survey, carried out across all member states, reflects a growing expectation that digital education should become a central pillar of Europe’s teaching systems, rather than an optional enhancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Users gain new control with Instagram feed algorithm

Instagram has unveiled a new AI-powered feature called ‘Your Algorithm’, giving users control over the topics shown in their Reels feed. The tool analyses viewing history and allows users to indicate which subjects they want to see more or less of.

The feature displays a summary of each user’s top interests and allows typing in specific topics to fine-tune recommendations in real time. Instagram plans to expand the tool beyond Reels to Explore and other areas of the app.

The launch began in the US, with a global rollout in English expected soon. The initiative comes amid growing calls for social media platforms to provide greater transparency over algorithmic content and avoid echo chambers.

By enabling users to adjust their feeds directly, Instagram aims to offer more personalised experiences while responding to regulatory pressures and societal concerns over harmful content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI teachers and deepfakes tested to ease UK teacher shortages

Amid a worsening recruitment and retention crisis in UK education, some schools are trialling AI-based teaching solutions, including remote teachers delivered via video links and even proposals for deepfake avatars to give lessons.

These pilots are part of efforts to maintain educational provision where qualified staff are scarce, with proponents arguing that technology can help reduce teacher workload and address gaps in core subjects, such as mathematics.

However, many teachers and unions remain sceptical or critical. Some educators argue that remote or AI-led instruction cannot replace the human presence, interpersonal support and contextual knowledge provided by in-room teachers.

Union activity and petitions opposing virtual teaching arrangements reflect broader concerns about the implications for job security, education quality and the potential de-professionalisation of teaching.

The BBC’s reporting highlighted specific examples, such as a Lancashire secondary school bringing in a remote maths teacher based hundreds of miles away, a move that sparked debate among local teachers who emphasise the irreplaceable role of in-person interaction in effective teaching.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT tops Apple’s 2025 app downloads in the US

Apple has released its annual ranking of the most downloaded apps and games, with ChatGPT taking the top spot among free iPhone apps in the United States for 2025, marking a major moment for AI in mainstream consumer use.

The OpenAI chatbot rose from fourth place last year, surpassing established social platforms and everyday utilities. Its ascent highlights how quickly AI-driven tools have become embedded in daily habits and how they may challenge the dominance of traditional search apps on mobile devices.

Apple’s charts show broader shifts across categories. Threads, Google, TikTok, WhatsApp, and Instagram also ranked highly among free iPhone downloads, while Google’s Gemini entered the top ten, reflecting the growing presence of competing AI assistants in the mobile ecosystem.

Gaming trends remained strong. Block Blast! led the US free iPhone games list, while Minecraft held its position as the top paid title across devices. ChatGPT also became the second-most downloaded free app on iPad, signalling consistent demand for AI across screens.

Apple says the rankings reflect the evolving mix of entertainment, creativity, and productivity tools shaping the App Store landscape, as AI continues to influence how people search, work, and play across its platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!