EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership program to address concerns and build collaborative relationships with news organisations like Forbes and Dow Jones.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Musk’s controversial leadership and ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44 billion acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia nears $4 trillion milestone as AI boom continues

Nvidia has made financial history by nearly reaching a $4 trillion market valuation, a milestone highlighting investor confidence in AI as a powerful economic force.

Shares briefly peaked at $164.42 before closing slightly lower at $162.88, just under the record threshold. The rise underscores Nvidia’s position as the leading supplier of AI chips amid soaring demand from major tech firms.

Led by CEO Jensen Huang, the company now holds a market value larger than the annual GDP of Britain, France, or India.

Nvidia’s growth has helped lift the Nasdaq to new highs, aided in part by improved market sentiment following Donald Trump’s softened stance on tariffs.

However, trade barriers with China continue to pose risks, including export restrictions that cost Nvidia $4.5 billion in the first quarter of 2025.

Despite those challenges, Nvidia secured a major AI infrastructure deal in Saudi Arabia during Trump’s visit in May. Innovations such as the next-generation Blackwell GPUs and ‘real-time digital twins’ have helped maintain investor confidence.

The company’s stock has risen over 21% in 2025, far outpacing the Nasdaq’s 6.7% gain. Nvidia chips are also being used by the US administration as leverage in global tech diplomacy.

While competition from Chinese AI firms like DeepSeek briefly knocked $600 billion off Nvidia’s valuation, Huang views rivalry as essential to progress. With the growing demand for complex reasoning models and AI agents, Nvidia remains at the forefront.

Still, the fast pace of AI adoption raises concerns about job displacement, with firms like Ford and JPMorgan already reporting workforce impacts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece seizes crypto tied to record Bybit hack

Greek authorities have successfully seized digital assets linked to a major international cybercrime case, marking the country’s first-ever recovery of cryptocurrency. The operation followed a months-long investigation into suspicious blockchain activity in collaboration with blockchain analytics firm Chainalysis.

The recovered funds are part of a record-breaking $1.5 billion theft from crypto exchange Bybit earlier this year. In February, hackers exploited a vulnerability in one of the platform’s Ethereum wallets, transferring the entire contents to an unknown address.

The incident, considered one of the largest crypto heists in history, has been widely attributed to North Korea’s Lazarus Group.

A suspect wallet was identified and frozen, cutting off access to the assets, and the case was handed to prosecutors for further legal proceedings.

Officials hailed the move as a significant advance in combating digital crime. Analysts say the operation shows how blockchain transparency and forensic tools, combined with international cooperation, can disrupt even the most complex laundering networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin hits new all-time high as institutional demand surges

Bitcoin has broken past its previous record, trading above $111,970 in a move that defied technical indicators and widespread scepticism. The rally, fuelled by institutional flows and growing corporate adoption, forced short sellers, who had built up $35 billion in open interest, to capitulate.

Bitcoin’s latest breakout is driven by spot ETF inflows and corporate adoption, rather than retail speculation or halving narratives. In the second quarter alone, ETF providers absorbed 245,000 BTC—around 1% of the total supply—tightening liquidity and amplifying price pressure.

Analysts now view this as a structural shift where institutional demand outpaces miner issuance by a factor of three.

Stronger-than-expected US job data and fading hopes for a July rate cut failed to dent the crypto rally. The broader equity market also gained, with the S&P 500, Nasdaq, and Dow posting solid advances.

Bitcoin’s parallel rise suggests it is no longer merely a high-risk asset but increasingly seen as a liquidity hedge in uncertain conditions.

Geopolitical risks are quietly building. The Trump administration introduced new tariffs against six countries, potentially escalating global trade tensions. Historically, such moves have weighed on risk assets, but Bitcoin has remained resilient.

Analysts warn, however, that the situation could change by August if the tariffs are implemented.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google partners with UK government on AI training

The UK government has struck a major partnership with Google Cloud aimed at modernising public services by eliminating ageing IT systems and equipping 100,000 civil servants with digital and AI skills by 2030.

Backed by DSIT, the initiative targets sectors like the NHS and local councils, seeking both operational efficiency and workforce transformation.

Replacing legacy contracts, some of which date back decades, could unlock as much as £45 billion in efficiency savings, say ministers. Google DeepMind will provide technical expertise to help departments adopt emerging AI solutions and accelerate public sector innovation.

Despite these promising aims, privacy campaigners warn that reliance on a US-based tech giant threatens national data sovereignty and may lead to long-term lock-in.

Foxglove’s Martha Dark described the deal as ‘dangerously naive’, with concerns around data access, accountability, public procurement processes and geopolitical risk.

As ministers pursue broader technological transformation, similar partnerships with Microsoft, OpenAI and Meta are underway, reflecting an industry-wide effort to bridge digital skills gaps and bring agile solutions into Whitehall.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI interviews leave job candidates in the dark

An increasing number of startups are now using AI to conduct video job interviews, often without making this clear to applicants. Senior software developers are finding themselves unknowingly engaging with automated systems instead of human recruiters.

Applicants are typically asked to submit videos responding to broad interview prompts, including examples and case studies, often without time constraints or human engagement.

These asynchronous interviews are processed by AI, which evaluates responses using natural language processing, facial cues and tone to assign scores.

Critics argue that this approach shifts the burden of labour onto job seekers, while employers remain unaware of the hidden costs and flawed metrics. There is also concern about the erosion of dignity in hiring, with candidates treated as data points rather than individuals.

Although AI offers potential efficiencies, the current implementation risks deepening dysfunctions in recruitment by prioritising speed over fairness, transparency and candidate experience. Until the technology is used more thoughtfully, experts advise job seekers to avoid such processes altogether.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fluency is the new office software skill

As tools like ChatGPT, Copilot, and other generative AI systems become embedded in daily workflows, employers increasingly prioritise a new skill: AI fluency.

Much like proficiency in office software became essential in the past, knowing how to collaborate effectively with AI is now a growing requirement across industries.

But interacting with AI isn’t always intuitive. Many users encounter generic or unhelpful responses from chatbots and assume the technology is limited. In reality, AI systems rely heavily on the context they are given, and that’s where users come in.

Rather than treating AI as a search engine, it helps to see it as a partner that needs guidance. A vague prompt like ‘write a proposal’ is unlikely to produce meaningful results. A better approach provides background, direction, and clear expectations.

One practical framework is CATS: context, angle, task, and style.

Context sets the stage. It includes your role, the situation, the audience, and constraints. For example, ‘I’m a nonprofit director writing a grant proposal for an environmental education program in urban schools’ offers much more to work with than a general request.

Angle defines the perspective. You can ask the AI to act as a peer reviewer, a mentor, or even a sceptical audience member. These roles help shape the tone and focus of the response.

Task clarifies the action you want. Instead of asking for help with a presentation, try ‘Suggest three ways to improve my opening slide for an audience of small business owners.’

Style determines the format and tone. Whether you need a formal report, a friendly email, or an outline in bullet points, specifying the style helps the AI deliver a more relevant output.
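For readers who use AI programmatically rather than through a chat window, the same four elements can be folded into a single prompt string. The sketch below is purely illustrative and assumes the OpenAI Python client; the model name, helper function and wording are placeholders, not part of any official framework.

```python
# Illustrative only: assembling a CATS-structured prompt (context, angle, task, style).
# Assumes the OpenAI Python client; the model name and wording are placeholders.
from openai import OpenAI


def build_cats_prompt(context: str, angle: str, task: str, style: str) -> str:
    """Combine the four CATS elements into one prompt string."""
    return (
        f"Context: {context}\n"
        f"Angle: {angle}\n"
        f"Task: {task}\n"
        f"Style: {style}"
    )


prompt = build_cats_prompt(
    context=(
        "I'm a nonprofit director writing a grant proposal for an "
        "environmental education programme in urban schools."
    ),
    angle="Act as a sceptical grant reviewer.",
    task="Suggest three ways to strengthen my opening paragraph.",
    style="A short bulleted list in plain, formal English.",
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same prompt works just as well pasted into a chat interface; the point is the structure, not the code.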

Beyond individual prompts, users can also practise context engineering: managing the environment around the prompt. This includes uploading relevant documents, building on previous chats, or setting standing instructions. These steps help tailor responses more closely to your needs.
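As a rough sketch of the same idea in code, context can be supplied alongside the prompt rather than inside it, for instance as a standing system instruction plus a supporting document. Again this assumes the OpenAI Python client; the model name and file name are hypothetical.

```python
# Illustrative sketch of context engineering: a persistent instruction plus
# supporting material passed alongside the prompt. Assumes the OpenAI Python
# client; the model name and file name are placeholders.
from openai import OpenAI

client = OpenAI()

# A standing instruction that shapes every response in the session.
system_instruction = (
    "You are an assistant for a nonprofit communications team. "
    "Prefer plain British English and keep answers under 200 words."
)

# Supporting material the model should draw on (e.g. an earlier draft).
with open("draft_proposal.txt", encoding="utf-8") as f:
    background = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {"role": "system", "content": system_instruction},
        {"role": "user", "content": f"Background document:\n{background}"},
        {"role": "user", "content": "Summarise the three weakest sections and suggest fixes."},
    ],
)
print(response.choices[0].message.content)
```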

Think of prompting as a conversation, not a one-shot command. If the initial response isn’t ideal, clarify, refine, or build on it. Ask follow-up questions, adjust your instructions, or extract functional elements to develop further in a new thread.

That said, it’s essential to stay critical. AI systems can mimic natural conversation, but don’t truly understand the information they provide. Human oversight remains crucial. Always verify outputs, especially in professional or high-stakes contexts.

Ultimately, AI tools are powerful collaborators—but only when paired with clear guidance and human judgment. Provide the correct input, and you’ll often find the output exceeds expectations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!