EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules already allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may breach the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and reinforce safeguards across digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK watchdogs warned over AI risks in financial services

UK regulators and the Treasury face criticism from MPs over their approach to AI, amid warnings of risks to consumers and financial stability. A new Treasury Select Committee report says authorities have been overly cautious as AI use expands rapidly across financial services.

More than 75% of UK financial firms are already using AI, according to evidence reviewed by the committee, with insurers and international banks leading uptake.

Applications range from automating back-office tasks to core functions such as credit assessments and insurance claims, increasing AI’s systemic importance within the sector.

MPs acknowledge AI’s benefits but warn that readiness for large-scale failures remains insufficient. The committee urges the Bank of England and the Financial Conduct Authority (FCA) to introduce AI-specific stress tests to gauge resilience to AI-driven market shocks.

Further recommendations include more explicit regulatory guidance on AI accountability and faster use of the Critical Third Parties Regime. No AI or cloud providers have been designated as critical, prompting calls for stronger oversight to limit operational and systemic risk.

Strong growth pushes OpenAI past $20 billion in annualised revenue

OpenAI’s annualised revenue has surpassed $20 billion in 2025, up from $6 billion a year earlier. The company’s computing capacity and user numbers have also continued to grow.

The company recently confirmed it will begin showing advertisements in ChatGPT to some users in the United States. The move is part of a broader effort to generate additional revenue to cover the high costs of developing and running advanced AI systems.

OpenAI’s platform now spans text, images, voice, code, and application programming interfaces. CFO Sarah Friar said the next phase of development will focus on agents and workflow automation that can operate continuously, retain context over time, and take action across multiple tools.

Looking ahead to 2026, the company plans to prioritise what it calls ‘practical adoption’, with a particular emphasis on health, science, and enterprise use cases. The aim is to move beyond experimentation and embed AI more deeply into real-world applications.

Friar also said OpenAI intends to maintain a ‘light’ balance sheet by partnering with external providers rather than owning infrastructure outright. Contracts will remain flexible across hardware types and suppliers as the company continues to scale its operations.

Forced labour data opened to the public

Exiger has launched a free online tool designed to help organisations identify links to forced labour in global supply chains. The platform, called forcedlabor.ai, was unveiled during the annual meeting of the World Economic Forum in Davos.

The tool allows users to search suppliers and companies to assess potential exposure to state-sponsored forced labour, with an initial focus on risks linked to China. Exiger says the database draws on billions of records and is powered by proprietary AI to support compliance and ethical sourcing.

US lawmakers and human rights groups have welcomed the initiative, arguing that companies face growing legal and reputational risks if their supply chains rely on forced labour. The platform highlights risks linked to US import restrictions and enforcement actions.

Exiger says making the data freely available aims to level the playing field for smaller firms with limited compliance budgets. The company argues that greater transparency can help reduce modern slavery across industries, from retail to agriculture.

UNESCO links AI development with climate responsibility

UNESCO has renewed calls for stronger international cooperation to ensure AI supports rather than undermines climate goals, as environmental pressures linked to AI continue to grow.

The message was delivered at the Adopt AI Summit in Paris, where sustainability and ethics featured prominently in discussions on future AI development.

At a Grand Palais panel, policymakers, industry leaders, and UN officials examined AI’s growing energy, water, and computing demands. The discussion focused on balancing AI’s climate applications with the need to reduce its environmental footprint.

Public sector representatives highlighted policy tools such as funding priorities and procurement rules to encourage more resource-efficient AI.

UNESCO officials stressed that energy-efficient AI must remain accessible to lower-income regions, particularly for applications such as water management and climate resilience.

Industry voices highlighted practical steps to improve AI efficiency while supporting internal sustainability goals. Participants agreed that coordinated action among governments, businesses, international organisations, and academia is essential for meaningful environmental impact.

OECD says generative AI reshapes education with mixed results

Generative AI has rapidly entered classrooms worldwide, with students using chatbots for assignments and teachers adopting AI tools for lesson planning. Uptake has been driven by easy access, intuitive design, and minimal technical barriers.

The OECD’s new Digital Education Outlook 2026 highlights both opportunities and risks linked to this shift. AI can support learning when aligned with clear goals, but replacing productive struggle may weaken deep understanding and student focus.

Research cited in the report suggests that general-purpose AI tools may improve the quality of written work without boosting exam performance. Education-specific AI grounded in learning science appears more effective as a collaborative partner or research assistant.

Early trials also indicate that GenAI-powered tutoring tools can enhance teacher capacity and improve student outcomes, particularly in mathematics. Policymakers are urged to prioritise pedagogically sound AI that is rigorously evaluated to strengthen learning.

European Parliament moves to force AI companies to pay news publishers

Lawmakers in the EU are moving closer to forcing technology companies to pay news publishers for the use of journalistic material in model training, according to a draft copyright report circulating in the European Parliament.

The text forms part of a broader effort to update copyright enforcement as automated content systems expand across media and information markets.

Compromise amendments also widen the scope beyond payment obligations, bringing AI-generated deepfakes and synthetic manipulation into sharper focus.

MEPs argue that existing legal tools fail to offer sufficient protection for publishers, journalists and citizens when automated systems reproduce or distort original reporting.

The report reflects growing concern that platform-driven content extraction undermines the sustainability of professional journalism. Lawmakers are increasingly framing compensation mechanisms as a corrective measure rather than as voluntary licensing or opaque commercial arrangements.

If adopted, the Parliament’s position would add further regulatory pressure on large technology firms already facing tighter scrutiny under the Digital Markets Act and related digital legislation. It would also reinforce Europe’s push to assert control over data use, content value and democratic safeguards.

AI firms fall short of EU transparency rules on training data

Several major AI companies appear slow to meet EU transparency obligations, raising concerns over compliance with the AI Act.

Under the regulation, developers of large foundation models must disclose information about training data sources, allowing creators to assess whether copyrighted material has been used.

Such disclosures are intended to offer a minimal baseline of transparency, covering the use of public datasets, licensed material and scraped websites.

While open-source providers such as Hugging Face have already published detailed templates, leading commercial developers have so far provided only broad descriptions of data usage instead of specific sources.

Formal enforcement of the rules will not begin until later in the year, extending a grace period for companies that released models after August 2025.

The European Commission has indicated willingness to impose fines if necessary, although it continues to assess whether newer models fall under immediate obligations.

The issue is likely to become politically sensitive, as stricter enforcement could affect US-based technology firms and intensify transatlantic tensions over digital regulation.

Transparency under the AI Act may therefore test both regulatory resolve and international relations as implementation moves closer.

Anthropic report shows AI is reshaping work instead of replacing jobs

A new report by Anthropic suggests that fears of AI replacing jobs are overstated, with current use showing AI supporting workers rather than eliminating roles.

Analysis of millions of anonymised conversations with the Claude assistant indicates the technology is mainly used to assist with specific tasks rather than for full job automation.

The research shows AI affects occupations unevenly, reshaping work depending on role and skill level. Higher-skilled tasks, particularly in software development, dominate use, while some roles automate simpler activities rather than core responsibilities.

Productivity gains remain limited when tasks grow more complex, as reliability declines and human correction becomes necessary.

Geographic differences also shape adoption. Wealthier countries tend to use AI more frequently for work and personal activities, while lower-income economies rely more heavily on AI for education. Such patterns reflect different stages of adoption instead of a uniform global transformation.

Anthropic argues that understanding how AI is used matters as much as measuring adoption rates. The report suggests future economic impact will depend on experimentation, regulation and the balance between automation and collaboration, rather than widespread job displacement.

South Korea faces mounting pressure from US AI chip tariffs

New US tariffs on advanced AI chips are drawing scrutiny over their impact on global supply chains, with South Korea monitoring potential effects on its semiconductor industry.

The US administration has approved a 25 percent tariff on advanced chips that are imported into the US and then re-exported to third countries. The measure is widely seen as aimed at restricting the flow of AI accelerators to China.

The tariff thresholds are expected to cover processors such as Nvidia’s H200 and AMD’s MI325X, which rely on high-bandwidth memory supplied by Samsung Electronics and SK hynix.

Industry officials say most memory exports from South Korea to the US are used in domestic data centres, which are exempt under the proclamation, reducing direct exposure for suppliers.

South Korea’s trade ministry has launched consultations with industry leaders and US counterparts to assess risks and ensure Korean firms receive equal treatment to competitors in Taiwan, Japan and the EU.
