EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Italy’s Piracy Shield sparks EU scrutiny over digital rights

Italy’s new anti-piracy system, Piracy Shield, has come under scrutiny from the European Commission over potential breaches of the Digital Services Act.

The tool, launched by the Italian communications regulator AGCOM, allows authorities to block websites suspected of piracy within 30 minutes, a feature praised by sports rights holders for minimising losses from illegal streaming.

However, its speed and lack of judicial oversight have raised legal concerns. Critics argue that those affected are denied the right to defend themselves before action is taken.

A recent glitch linked to Google’s CDN disrupted access to platforms like YouTube and Google Drive, deepening public unease.

Another point of contention is Piracy Shield’s governance. The system is managed by SP Tech, a company owned by Lega Serie A, an organisation that benefits directly from anti-piracy enforcement.

The Computer & Communications Industry Association was prompted to file a complaint, citing a conflict of interest and calling for greater transparency.

While AGCOM Commissioner Massimiliano Capitanio insists the tool places Italy at the forefront of the fight against illegal streaming, growing pressure from digital rights groups and EU regulators suggests a clash between national enforcement and European law.

Capgemini invests in AI-driven operations with WNS

Capgemini has announced it will acquire Indian IT firm WNS for $3.3 billion to accelerate its leadership in agentic AI. The acquisition will significantly enhance Capgemini’s business process services (BPS) by integrating advanced AI capabilities into core operations.

The boards of both companies have approved the deal, which offers WNS shareholders a 28% premium over the 90-day average share price. Completion is expected by the end of 2025, pending regulatory approvals.

Capgemini sees strong potential in embedding AI into enterprise operations, with BPS set to become a key showcase. The integration will strengthen the group’s US presence and unlock cross-selling opportunities across the combined client networks.

Both firms emphasised a shared vision of intelligent operations powered by agentic AI, aiming to help clients shift from automation to AI-driven autonomy. Capgemini’s existing partnerships with tech giants like Microsoft, Google and NVIDIA will support this vision.

EU rejects delay for AI Act rollout

The EU has confirmed it will enforce the AI Act on its original schedule, despite growing calls from American and European tech firms to delay the rollout.

Major companies, including Alphabet, Meta, ASML and Mistral, have urged the European Commission to push back the timeline by several years, citing concerns over compliance costs.

Rejecting the pressure, a Commission spokesperson clarified that there would be no pause or grace period. The legislation’s deadlines remain in place, with general-purpose AI rules taking effect this August and stricter requirements for high-risk systems following in August 2026.

The AI Act represents the EU’s effort to regulate AI across various sectors, aiming to balance innovation and public safety. While tech giants argue that the rules are too demanding, the EU insists legal certainty is vital and the framework must move forward as planned.

The Commission intends to simplify parts of the implementation later in the year, for example by easing reporting demands for smaller businesses. Yet the core structure and deadlines of the AI Act will not be altered.

BRICS calls for AI data regulations amid challenges with de-dollarisation

BRICS leaders in Rio de Janeiro have called for stricter global rules on how AI uses data, demanding fair compensation for content used without permission.

The group’s draft statement highlights growing frustration with tech giants using vast amounts of unlicensed content to train AI models.

Despite making progress on digital policy, BRICS once again stalled on a long-standing ambition to reduce reliance on the US dollar.

After a decade of talks, the bloc’s cross-border payments system remains in limbo. Member nations continue to debate infrastructure, governance and how to work around non-convertible currencies and sanctions.

China is moving independently, expanding the yuan’s international use and launching domestic currency futures.

Meanwhile, the rest of the bloc struggles with legal, financial and technical hurdles, leaving the dream of a unified alternative to the dollar on hold. Even a proposed New Investment Platform remains mired in internal disagreements.

In response to rising global debt concerns, BRICS introduced a Multilateral Guarantees Initiative within the New Development Bank. It aims to improve credit access across the Global South without needing new capital, especially for countries struggling to borrow in dollar-dominated markets.

Council of Europe picks Jylo to power AI platform

The Council of Europe has chosen Jylo, a European enterprise AI provider, to support over 3,000 users across its organisation.

The decision followed a competitive selection process involving multiple AI vendors, with Jylo standing out for its regulatory compliance and platform adaptability.

As Europe’s leading human rights body, the Council aims to use AI responsibly to support its legal and policy work. Jylo’s platform will streamline document-based workflows and reduce administrative burdens, helping staff focus on critical democratic and legal missions.

Leaders from both Jylo and the Council praised the collaboration. Jylo CEO Shawn Curran said the partnership reflects shared values around regulatory compliance and innovation.

The Council’s CIO, John Hunter, described Jylo’s commitment to secure AI as a perfect fit for the institution’s evolving digital strategy.

Jylo’s AI Assistant and automation features are designed specifically for knowledge-driven organisations. The rollout is expected to strengthen the Council’s internal efficiency and reinforce Jylo’s standing as a trusted AI partner across the European public and legal sectors.

Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using the AI tool Suno, only for the outlet to reveal later that the spokesperson himself was fake.

The band denies any connection to the individual, stating on Spotify that the X account impersonating them is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music is not banned from the platform, he has expressed concern about tracks that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

xAI gets Memphis approval to run 15 gas turbines

xAI, Elon Musk’s AI company, has secured permits to operate 15 natural gas turbines at its Memphis data centre, despite facing legal threats over alleged Clean Air Act violations.

The Shelby County Health Department approved the generators, which can produce up to 247 megawatts, provided specific emissions controls are in place.

Environmental lawyers say xAI had already been running as many as 35 generators without permits. The Southern Environmental Law Center (SELC), acting on behalf of the NAACP, has accused the company of serious pollution and is preparing to sue.

Even under the new permit, xAI is allowed to emit substantial pollutants annually, including nearly 10 tons of formaldehyde — a known carcinogen.

Community concerns about the health impact remain strong. A local group pledged $250,000 for an independent air quality study, and although the City of Memphis carried out its own tests, the SELC questioned their validity.

The tests reportedly omitted ozone measurements, were conducted in favourable wind conditions and relied on equipment placed too close to buildings.

Officials previously argued that the turbines were exempt from regulation due to their ‘mobile’ status, a claim the SELC rejected as legally flawed. Meanwhile, xAI has raised $10 billion, split between debt and equity, underscoring its rapid expansion even as regulatory scrutiny grows.

Europe must break free from US tech giants

For years, a few US tech giants have dominated Europe’s digital infrastructure, threatening both its economy and democracy. Despite talk of ‘tech sovereignty’, leaked reports suggest that EU enforcement may be weakened in trade talks, putting public backing at risk.

Surveys show strong support across the EU for tougher regulation of Big Tech, even at the cost of US tensions. The Digital Markets Act provides tools to challenge monopolies like Google, but enforcement remains slow and under-resourced.

Europe must take coordinated action: break up monopolies harming local media and jobs, strengthen enforcement, and invest in homegrown digital platforms. Redirecting funds from tech giants could empower startups and businesses dependent on these platforms.

Decisive political will is essential to turn tech sovereignty from rhetoric into reality. Effective regulation and strategic investment can restore Europe’s control over its digital future.

Canada’s telecoms face a key choice between competition and investment

Canada is preparing to finalise a critical policy decision regarding internet affordability and competition. The core policy, reaffirmed by the Canadian Radio-television and Telecommunications Commission (CRTC), mandates that the country’s three major telecom providers, Bell, Telus, and Rogers, must grant wholesale access to their fibre optic networks to smaller internet service providers (ISPs).

The ruling aims to increase consumer choice and stimulate competition by allowing smaller players to use existing infrastructure rather than building their own. The policy also notably expands Telus’s ability to enter new markets, such as Ontario and Quebec, without additional infrastructure investment.

Following concerns raised by major telecom companies, the federal government has been asked to review and potentially overturn the decision. The CRTC warns that reversing the policy could undo competition gains and limit future ISP options.

Meanwhile, Telus and other supporters argue that upholding the ruling protects regulatory independence and encourages investment by creating market certainty. Opponents among the major carriers counter that the policy discourages investment and creates unfair competition, with Bell reporting significant cuts to planned infrastructure spending.

Smaller providers worry about losing market share as big players expand using shared networks. The decision will strongly influence Canada’s future internet competition and investment landscape.
