Leading tech companies deepen AI competition with new capabilities

Competition among leading AI developers intensified in early 2026 as major companies expanded their models, platforms, and partnerships. Google, OpenAI, Anthropic, and xAI are all introducing new capabilities and integrating AI systems into broader ecosystems.

Google has continued to expand its Gemini model family with updates to Gemini 3.1 Pro and 3.1 Flash, designed to support complex tasks across applications. The company is also integrating Gemini into services such as Docs, Sheets, Slides, and Drive, allowing users to generate documents and analyse data across multiple Google services.

Gemini has also been embedded into the Chrome browser and integrated with Samsung’s Galaxy devices, expanding its distribution across consumer platforms.

Anthropic has focused on advancing the Claude model family while positioning the system for enterprise and professional use. Recent updates include Claude Sonnet 4.6, which introduces improvements in reasoning and coding capabilities alongside an expanded context window currently in beta. The company has also launched a limited preview of the Claude Marketplace, allowing organisations to use third-party tools built on Claude through partnerships with several software companies.

OpenAI has continued to update ChatGPT with the release of the GPT-5 series, including GPT-5.2 and GPT-5.4. The newer models combine reasoning, coding, and agent-based workflows, while also introducing computer-use capabilities that allow the system to interact with applications directly.

OpenAI has also introduced additional services, including ChatGPT Health and integrations designed to assist with spreadsheet modelling and data analysis, extending its reach across enterprise and consumer tools.

Meanwhile, xAI has expanded development of its Grok models while increasing computing infrastructure. The company has reported growth in Grok usage through integration with the X platform and other applications. Recent announcements include upgrades to Grok’s voice and multimodal capabilities, as well as continued training of future models.

Across the industry, developers are increasingly positioning their systems not only as conversational assistants but also as tools integrated into enterprise workflows, creative production, and software development. New releases in 2026 reflect a broader shift toward multimodal systems, agent-based capabilities, and deeper integration with existing digital platforms, signalling the next phase of AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Spain expands digital oversight of online hate

Spain has launched a digital system designed to track hate speech and disinformation across social media platforms. Prime Minister Pedro Sánchez presented the tool in Madrid as part of a wider effort to improve oversight of online platforms.

The platform, known as HODIO, will analyse public posts and measure the spread and reach of hateful content. Authorities say the project will publish regular reports examining how platforms respond to harmful material.

The monitoring initiative is managed by Spain’s Observatory on Racism and Xenophobia. Officials say the data will help citizens understand the scale of online hate and assess how social networks address abusive content.

The initiative forms part of a broader digital policy agenda that also includes measures to protect minors online, such as proposed restrictions on social media use by children under 16.

EU updates voluntary code for labelling AI-generated content

The European Commission has released a second draft of its voluntary Code of Practice on marking and labelling AI-generated content, designed to support compliance with transparency rules under the Artificial Intelligence Act.

Published on 5 March, the updated draft reflects feedback from hundreds of stakeholders, including industry groups, academic researchers, policymakers, and civil society organisations.

Revisions follow consultations held in early 2026 as part of the broader rollout of the EU’s AI regulatory framework.

The proposed code outlines technical approaches for identifying AI-generated material. A two-layered system using secure metadata and digital watermarking is recommended, with optional fingerprinting, logging, and verification to improve detection.

Guidelines also address how platforms and publishers should label deepfakes and AI-generated text related to matters of public interest. Public feedback is open until 30 March, with the final code expected in early June before transparency rules take effect on 2 August 2026.

EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

Lawmakers urged to rethink rules on private messaging

Policymakers are being urged to rethink the regulation of private messaging platforms as disinformation campaigns increasingly spread through closed digital networks. Researchers say messaging apps now play a major role in political communication and crisis information flows.

Evidence from elections and conflicts highlights the challenge. During Brazil’s 2024 municipal elections, manipulated political content spread widely through WhatsApp groups, while authorities in Ukraine reported Telegram being used for both emergency communication and disinformation.

Experts argue that current laws often fail to address messaging platforms, such as Telegram, because regulation typically targets public social media spaces. Analysts say modern messaging services combine private chats with broadcast channels and other features that allow content to reach large audiences.

Policy specialists propose regulating specific platform features rather than entire services. Governments and technology companies are also encouraged to protect encryption while expanding transparency tools, media literacy programmes and user safeguards.

AI deepfake detection expands on YouTube for politicians and journalists

YouTube is expanding its likeness-detection technology designed to identify AI-generated deepfakes, extending access to a pilot group of government officials, political candidates, and journalists.

The tool allows participants to detect unauthorised AI-generated videos that simulate their faces and request removal if the content violates YouTube policies. The system builds on technology launched last year for around four million creators in the YouTube Partner Program.

Similar to YouTube’s Content ID system, which detects copyrighted material in uploaded videos, the likeness detection feature scans for AI-generated faces created with deepfake tools. Such technologies are increasingly used to spread misinformation or manipulate public perception by making prominent figures appear to say or do things they never did.

According to YouTube, the pilot programme aims to balance free expression with safeguards against AI impersonation, particularly in sensitive civic contexts.

‘This expansion is really about the integrity of the public conversation,’ said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy. ‘We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it.’

Removal requests will be assessed individually under YouTube’s privacy policy rules to determine whether the content constitutes parody or political critique, which remain protected forms of expression. Participants must verify their identity by uploading a selfie and a government-issued ID before accessing the tool. Once verified, they can review detected matches and submit removal requests for content they believe violates policy.

YouTube also said it supports the proposed NO FAKES Act in the United States, which aims to regulate the unauthorised use of an individual’s voice or visual likeness in AI-generated media. AI-generated videos on the platform are already labelled, though label placement varies depending on the topic’s sensitivity.

‘There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,’ said Amjad Hanif, YouTube’s vice president of Creator Products. The company said it plans to expand the technology over time to detect AI-generated voices and other intellectual property.

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

UK rejects social media ban for under-16s

A proposed social media ban for under-16s has been rejected by UK MPs, with 307 voting against and 173 in favour. The measure, introduced as an amendment to the Children’s Wellbeing and Schools Bill, aimed to protect children from online harms.

Instead, the UK government secured support for giving ministers flexible powers, enforceable after a consultation on online safety concludes. The technology secretary, Liz Kendall, could limit social media access and VPN use, turn off addictive features, and raise the UK’s digital consent age.

Supporters of a full ban argued parents face an ‘impossible position’ managing online risks for their children. Campaigners, bereaved parents, and organisations such as Mumsnet and the National Education Union called for immediate action.

Critics, including the NSPCC, warned that a blanket ban could push teenagers towards unregulated online spaces.

The government consultation will examine minimum age requirements and the removal of features such as autoplay. MPs emphasised that any policy must balance safety with preparing children for responsible online engagement.
