EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also compete with OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership program to address concerns and build collaborative relationships with news organisations like Forbes and Dow Jones.

X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Musk’s controversial leadership and ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had recently been integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

Nvidia nears $4 trillion milestone as AI boom continues

Nvidia has made financial history by nearly reaching a $4 trillion market valuation, a milestone highlighting investor confidence in AI as a powerful economic force.

Shares briefly peaked at $164.42 before closing slightly lower at $162.88, just under the record threshold. The rise underscores Nvidia’s position as the leading supplier of AI chips amid soaring demand from major tech firms.

Led by CEO Jensen Huang, the company now holds a market value larger than the economies of Britain, France, or India.

Nvidia’s growth has helped lift the Nasdaq to new highs, aided in part by improved market sentiment following Donald Trump’s softened stance on tariffs.

However, trade barriers with China continue to pose risks, including export restrictions that cost Nvidia $4.5 billion in the first quarter of 2025.

Despite those challenges, Nvidia secured a major AI infrastructure deal in Saudi Arabia during Trump’s visit in May. Innovations such as the next-generation Blackwell GPUs and ‘real-time digital twins’ have helped maintain investor confidence.

The company’s stock has risen over 21% in 2025, far outpacing the Nasdaq’s 6.7% gain. Nvidia chips are also being used by the US administration as leverage in global tech diplomacy.

While competition from Chinese AI firms like DeepSeek briefly knocked $600 billion off Nvidia’s valuation, Huang views rivalry as essential to progress. With the growing demand for complex reasoning models and AI agents, Nvidia remains at the forefront.

Still, the fast pace of AI adoption raises concerns about job displacement, with firms like Ford and JPMorgan already reporting workforce impacts.

xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.

AI scam targets donors with fake orphan images

Cambodian authorities have warned the public about increasing online scams using AI-generated images to deceive donors. The scams often show fabricated scenes of orphaned children or grieving families, with QR codes attached to collect money.

One Facebook account, ‘Khmer Khmer’, was named in an investigation by the Anti-Cyber Crime Department for spreading false stories and deepfake images to solicit charity donations. These included claims of a wife unable to afford a coffin and false fundraising efforts near the Thai border.

The department confirmed that AI-generated realistic visuals are designed to manipulate emotions and lure donations. Cambodian officials continue investigations and have promised legal action if evidence of criminal activity is confirmed.

Authorities reminded the public to remain cautious and to only contribute to verified and officially recognised campaigns. While AI’s ability to create realistic content has many uses, it also opens the door to dangerous forms of fraud and misinformation when abused.

Activision pulls game after PC hacking reports

Activision has removed Call of Duty: WWII from the Microsoft Store and PC Game Pass following reports that hackers exploited a serious vulnerability in the game. Only the PC versions from Microsoft’s platforms are affected, while the game remains accessible via Steam and consoles.

The decision came after several players reported their computers being hijacked during gameplay. Streamed footage showed remote code execution attacks, where malicious code was deployed through the game to seize control of victims’ devices.

An outdated and insecure build of the game, which had previously been patched elsewhere, was uploaded to the Microsoft platforms. Activision has yet to restore access and continues to investigate the issue.

Call of Duty: WWII was only added to Game Pass in June. The vulnerability highlights the dangers of pushing old game builds without sufficient review, exposing users to significant cybersecurity risks.

Kurbalija: Digital tools are reshaping diplomacy

Once the global stage for peace negotiations and humanitarian accords, Geneva finds itself at the heart of a new kind of diplomacy shaped by algorithms, data flows, and AI. Jovan Kurbalija, Executive Director of Diplo and Head of the Geneva Internet Platform, believes this transformation reflects Geneva’s long tradition of engaging with science, technology, and global governance. He explained this in an interview with Léman Bleu.

Diplo, a Swiss-Maltese foundation, supports diplomats and international professionals as they navigate the increasingly complex landscape of digital governance.

‘Where we once trained them to understand the internet,’ Kurbalija explains, ‘we now help them grasp and negotiate issues around AI and digital tools.’

The foundation not only aids diplomats in addressing cyber threats and data privacy but also equips them with AI-enhanced tools for negotiation, public communication, and consular protection.

According to Kurbalija, digital governance touches everyone. From how our phones are built to how data moves across borders, nearly 50 distinct issues—from cybersecurity and e-commerce to data protection and digital standards—are debated in the corridors of International Geneva. These debates are no longer reserved for specialists because they affect the everyday lives of billions.

Kurbalija draws a fascinating connection between Geneva’s philosophical heritage and today’s technological dilemmas. Writers like Mary Shelley, Voltaire, and Borges, each with ties to Geneva, grappled with themes eerily relevant today: unchecked scientific ambition, the tension between freedom and control, and the challenge of processing vast amounts of knowledge. He dubs this tradition ‘EspriTech de Genève,’ a spirit of intellectual inquiry that still echoes in debates over AI and its impact on society.

AI, Kurbalija warns, is both a marvel and a potential menace.

‘It’s not exactly Frankenstein,’ he says, ‘but without proper governance, it could become one.’

As technology evolves, so must the international mechanisms that ensure it serves humanity rather than endangering it.

Diplomacy, meanwhile, is being reshaped not just in terms of content but in method. Digital tools allow diplomats to engage more directly with the public and make negotiations more transparent. Yet, the rise of social media has its downsides. Public broadcasting of diplomatic proceedings risks undermining the very privacy and trust needed to reach a compromise.

‘Diplomacy,’ Kurbalija notes, ‘needs space to breathe—to think, negotiate, resolve.’

He also cautions against the growing concentration of AI and data power in the hands of a few corporations.

‘We risk having our collective knowledge privatised, commodified, and sold back to us,’ he says.

The antidote? A push for more inclusive, bottom-up AI development that empowers individuals, communities, and nations.

As Geneva continues its historic role in shaping the future, Kurbalija’s message is clear: managing technology wisely is not just a diplomatic challenge—it’s a global necessity.

Over 2.3 million users hit by Chrome and Edge extension malware

A stealthy browser hijacking campaign has infected over 2.3 million users through Chrome and Edge extensions that appeared safe and even displayed Google’s verified badge.

According to cybersecurity researchers at Koi Security, the campaign, dubbed RedDirection, involves 18 malicious extensions offering legitimate features like emoji keyboards and VPN tools, while secretly tracking users and backdooring their browsers.

One of the most popular extensions — a colour picker developed by ‘Geco’ — continues to be available on the Chrome and Edge stores with thousands of positive reviews.

While it works as intended, the extension also hijacks sessions, records browsing activity, and sends data to a remote server controlled by attackers.

What makes the campaign more insidious is how the malware was delivered. The extensions began as clean, valuable tools, but malicious code was quietly added during later updates.

Due to how Google and Microsoft handle automatic updates, most users receive spyware without taking action or clicking anything.

Koi Security’s Idan Dardikman describes the campaign as one of the largest documented. Users are advised to uninstall any affected extensions, clear browser data, and monitor accounts for unusual activity.

Despite the serious breach, Google and Microsoft have not responded publicly.

Grok AI chatbot suspended in Turkey following court order

A Turkish court has issued a nationwide ban on Grok, the AI chatbot developed by Elon Musk’s company xAI, following recent developments involving the platform.

The ruling, delivered on Wednesday by a criminal court in Ankara, instructed Turkey’s telecommunications authority to block access to the chatbot across the country. The decision came after public filings under Turkey’s internet law prompted a judicial review.

Grok, which is integrated into the X platform (formerly Twitter), recently rolled out an update to make the system more open and responsive. The update has sparked broader global discussions about the challenges of moderating AI-generated content in diverse regulatory environments.

In a brief statement, X acknowledged the situation and confirmed that appropriate content moderation measures had been implemented in response. The ban places Turkey among many countries examining the role of generative AI tools and the standards that govern their deployment.
