AI governance talks deepen as BRICS aligns national approaches

BRICS countries are working to harmonise their approaches to AI, though it remains too early to speak of a unified AI framework for the bloc, according to Deputy Foreign Minister Sergey Ryabkov.

Speaking as Russia’s BRICS sherpa, Ryabkov said discussions are focused on aligning national positions and shared principles rather than establishing binding standards, noting that no common BRICS AI rules have yet taken shape.

He highlighted the adoption of a standalone leaders’ declaration on global AI governance at the Rio de Janeiro summit, describing it as a milestone and a first for the grouping.

BRICS members, including Russia, view cooperation on AI as a way to manage emerging risks, build capacity and help narrow the digital divide, particularly for developing countries.

Ryabkov added that the group supports a central coordinating role for the United Nations, with AI governance anchored in national legislation, respect for sovereignty, data protection and human rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI use grows across the EU

In 2025, nearly a third of people aged 16–74 across the European Union reported using generative AI tools, according to Eurostat. Most respondents used AI for personal tasks, while fewer applied it for work or education.

The survey data illustrate how generative AI is becoming a part of daily life for millions of Europeans, offering new ways to interact with technology and access creative tools that were once limited to specialists.

Generative AI tools are capable of producing new content, including text, images, videos, programming code, or other forms of data, based on patterns learned from existing examples. Users provide input or prompts, such as instructions or questions, which the AI then uses to generate tailored outputs.
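To make that prompt-and-output flow concrete, here is a minimal sketch in Python using the official openai client library; the model name, prompt and key handling are illustrative assumptions, not details drawn from the Eurostat survey.

```python
# Minimal sketch: a generative AI call takes a natural-language prompt
# and returns newly generated content (here, text).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model identifier, for illustration only
    messages=[
        {"role": "user", "content": "Draft a short invitation to a neighbourhood picnic."}
    ],
)

# The returned text is the 'new content' described above.
print(response.choices[0].message.content)
```

In consumer apps, the same prompt-and-response exchange happens through a chat interface, with no code involved.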

This accessibility is helping people across the EU experiment with the technology for both practical and recreational purposes, from drafting documents to designing visuals and exploring creative ideas. It also underscores AI’s growing influence on digital culture and personal productivity.

Adoption of generative AI varies significantly across the EU. Denmark, Estonia, and Malta recorded the highest usage, with nearly half of residents actively engaging with these tools, while Romania, Italy, and Bulgaria showed the lowest uptake, with fewer than a quarter of the population using AI.

These differences may reflect variations in digital infrastructure, education, and public awareness, as well as cultural attitudes toward emerging technologies.

Overall, the Eurostat data provide a snapshot of a digital landscape in transition, reflecting how Europeans are adapting to a new era of intelligent technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s GPT-5 shows a breakthrough in wet-lab biology

OpenAI has published new research examining whether advanced AI models can accelerate biological research in the wet lab, rather than only supporting theoretical science.

Working with biosecurity firm Red Queen Bio, researchers tested GPT-5 within a tightly controlled molecular cloning system designed to measure practical laboratory improvements.

Across multiple experimental rounds, GPT-5 independently proposed protocol modifications, analysed results and refined its approach using experimental feedback.
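The round-by-round workflow described here resembles a simple human-in-the-loop optimisation cycle: the model proposes changes, scientists run the experiment, and the measured results feed the next proposal. The Python outline below is an illustrative sketch of that pattern only, with placeholder functions standing in for the model and the laboratory; it is not OpenAI’s or Red Queen Bio’s actual pipeline.

```python
import random

def propose_modifications(protocol, history):
    # Stand-in for the model step: suggest a change given past results
    # (in the study, GPT-5 performed this step).
    return protocol + [f"tweak-{len(history) + 1}"]

def run_in_lab(protocol):
    # Stand-in for human scientists executing the protocol and measuring
    # cloning efficiency; here a random placeholder value.
    return random.random() * len(protocol)

def optimise_protocol(baseline, rounds=5):
    best_protocol, best_efficiency = baseline, run_in_lab(baseline)  # baseline measurement
    history = []
    for _ in range(rounds):
        candidate = propose_modifications(best_protocol, history)
        efficiency = run_in_lab(candidate)           # humans stay in the loop
        history.append((candidate, efficiency))
        if efficiency > best_efficiency:             # keep the best-performing protocol
            best_protocol, best_efficiency = candidate, efficiency
    return best_protocol, best_efficiency

print(optimise_protocol(["baseline step"]))
```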

The model introduced a previously unexplored enzymatic mechanism that combines RecA and gp32 proteins, along with adjustments to reaction timing and temperature, resulting in a 79-fold increase in cloning efficiency compared to the baseline protocol.

OpenAI emphasises that all experiments were carried out under strict biosecurity safeguards and still relied on human scientists to execute laboratory work.

Even so, the findings suggest AI systems could work alongside researchers to reduce costs, accelerate experimentation and improve scientific productivity while informing future safety and governance frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark pushes digital identity beyond authentication

Digital identity has long focused on proving that the same person returns each time they log in. That function still matters, yet online representation increasingly happens through faces, voices and mannerisms embedded in media rather than through credentials alone.

As synthetic media becomes easier to generate and remix, identity shifts from an access problem to a problem of media authenticity.

Denmark’s ‘Own Your Face’ proposal reflects that shift by treating personal likeness as something that should be controllable in the same way accounts are.

Digital systems already verify who is requesting access, yet lack a trusted middle layer to manage what is being shown when media claims to represent a real person. The proxy model illustrates how an intermediary layer can bring structure, consistency and trust to otherwise unmanageable flows.

Efforts around content provenance point toward a practical path forward. By attaching machine-verifiable history to media at creation and preserving it as content moves, identity extends beyond login to representation.
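As a rough illustration of what machine-verifiable history can mean in practice, the sketch below hashes a media file and signs a small provenance record using only the Python standard library. It is a simplified stand-in for content-provenance standards such as C2PA, not Denmark’s proposal or any specific product; real systems embed signed manifests in the file and rely on public-key cryptography rather than a shared secret.

```python
# Simplified sketch: tie a provenance record to the exact bytes of a file,
# then sign the record so later edits can be detected.
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # placeholder; real systems use asymmetric keys

def make_record(media_bytes: bytes, creator: str, tool: str) -> dict:
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the record to the content
        "creator": creator,
        "tool": tool,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes: bytes, record: dict) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

photo = b"...raw image bytes..."
record = make_record(photo, creator="Example Studio", tool="CameraApp 2.1")
print(verify(photo, record))         # True: content matches its signed history
print(verify(photo + b"x", record))  # False: any alteration breaks the link
```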

Broad adoption would not eliminate deception, yet it would raise the baseline of trust by replacing visual guesswork with evidence, helping digital identity evolve for an era shaped by synthetic media.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI upgrades ChatGPT with faster AI images

The US tech company OpenAI has rolled out a significant update to ChatGPT with the launch of GPT Images 1.5, strengthening its generative image capabilities.

The new model produces photorealistic images from text prompts at up to four times the speed of earlier versions, reflecting OpenAI’s push to make visual generation more practical for everyday use.
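As a hedged sketch of how text-to-image generation is typically invoked through OpenAI’s API, the snippet below uses the openai Python client; the model identifier is an assumption for illustration, since ‘GPT Images 1.5’ is the product name in the announcement rather than a confirmed API name.

```python
# Minimal sketch of text-to-image generation with the openai Python client.
# The model identifier is an assumption; substitute the current image model name.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",  # assumed identifier, for illustration only
    prompt="A photorealistic winter street scene at dusk with warm shop lights",
    size="1024x1024",
)

image = result.data[0]
if image.b64_json:                       # some models return base64-encoded bytes
    with open("scene.png", "wb") as f:
        f.write(base64.b64decode(image.b64_json))
else:                                    # others return a hosted URL
    print("Image available at:", image.url)
```

The photo-editing flow described next works the same way in principle, with an existing image supplied alongside the natural-language instruction.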

Users can upload existing photos and modify them through natural language instructions, allowing objects to be added, removed, combined or blended with minimal effort.

OpenAI highlights applications such as clothing and hairstyle try-ons, alongside stylistic filters designed to support creative experimentation while preserving realistic visual quality.

The update also introduces a redesigned ChatGPT interface, including a dedicated Images section available via the sidebar on both mobile apps and the web.

GPT Images 1.5 is now accessible to regular users, while Business and Enterprise subscribers are expected to receive enhanced access and additional features in the coming weeks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models reach human-level language analysis

Researchers found that some large language models can analyse language like a human linguistics graduate. The models diagram sentences, resolve ambiguities and process recursive structures, showing advanced metalinguistic abilities.

The study used specially crafted sentences and invented mini-languages to prevent memorisation. OpenAI’s o1 model correctly applied complex syntactic and phonological rules to entirely new languages.

Experts say the results challenge long-held assumptions about human uniqueness in language. The models have yet to produce original insights, but their reasoning skills match graduate-level performance.

Findings suggest AI may eventually surpass humans in linguistic analysis. Researchers believe continued progress will enable models to generalise better, learn from less data, and handle language creatively.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Language models impress but miss real-world understanding

Leading AI researcher Yann LeCun has argued that large language models only simulate understanding rather than genuinely comprehending the world. Their intelligence, he said, lacks grounding in physical reality and everyday common sense.

Despite being trained on vast amounts of online text, LLMs struggle with unfamiliar situations, according to LeCun. Real-world experience, he noted, provides richer learning than language alone ever could.

Drawing on decades in AI research, LeCun warned that enthusiasm around LLMs mirrors earlier hype cycles that promised human-level intelligence. Similar claims have repeatedly failed to deliver since the 1950s.

Instead of further scaling language models, LeCun urged greater investment in ‘world models’ that can reason about actions and consequences. He also cautioned that current funding patterns risk sidelining alternative approaches to AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft outlines how AI is shifting from tools to partners in 2026

AI is entering a new phase, with 2026 expected to mark a shift from experimentation to real-world collaboration. Microsoft executives describe AI as an emerging partner that amplifies human expertise rather than replacing it.

Microsoft says the impact is becoming visible across healthcare, software development, and scientific research. AI tools embedded in Microsoft products are supporting diagnosis, coding, and research workflows.

With the expansion of AI agents across all platforms, organisations are strengthening safeguards to manage new risks. Security leaders argue agents will require clear identities, restricted access, and continuous monitoring.

Microsoft also points to changes in the infrastructure powering AI. The company says future systems will prioritise efficiency and intelligence output, supported by distributed and hybrid cloud architectures.

Looking further ahead, Microsoft singles out the convergence of AI, supercomputing, and quantum technologies. Hybrid approaches, the company says, are bringing practical quantum advantage closer for applications in materials science, medicine, and research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven Christmas scams surge online

Cybersecurity researchers are urging greater caution as Christmas approaches, warning that seasonal scams are multiplying rapidly. Check Point has recorded over 33,500 festive phishing emails and more than 10,000 deceptive social ads within two weeks.

AI tools are helping criminals craft convincing messages that mirror trusted brands and local languages. Attackers are also deploying fake e-commerce sites with AI chatbots, as well as deepfake audio and scripted calls to strengthen vishing attempts.

Smishing texts imitating delivery firms are becoming more widespread, with a marked rise in fraudulent parcel scams in recent months. Victims are often tricked into sharing payment details through links that mimic genuine logistics updates.

Experts say fake shops and giveaway scams remain persistent risks, frequently launched from accounts created within the past three months. Users are being advised to ignore unsolicited links, verify retailers and treat unexpected offers with scepticism.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reporting playbook published by Google

Google has released a new AI playbook aimed at helping organisations streamline and improve sustainability reporting, sharing lessons learned from integrating AI into its own environmental disclosure processes.

In a blog post published on The Keyword, Google states that corporate sustainability reporting is often hindered by fragmented data and labour-intensive workflows. After two years of using AI internally, the company is now open-sourcing its approach to help others reduce reporting burdens.

The AI Playbook for Sustainability Reporting is presented as a practical, implementation-focused toolkit. It includes a structured framework for auditing reporting processes, along with ready-made prompt templates for common sustainability reporting tasks.

Google also highlights real-world examples that demonstrate how tools such as Gemini and NotebookLM can be used to validate sustainability claims, respond to information requests, and support internal review, moving AI use beyond experimentation.

The company says the playbook is intended to support transparency and strategic decision-making, and has invited organisations and practitioners to explore the resource and provide feedback.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!