EU urged to pause AI Act rollout

The digital sector is urging EU leaders to delay the AI Act, citing missing guidance and legal uncertainty. Industry group CCIA Europe warns that pressing ahead could damage AI innovation and stall the bloc’s economic ambitions.

The AI Act’s rules for general-purpose AI models are set to apply in August, but key frameworks are incomplete. Concerns have grown as the European Commission risks missing deadlines while the region seeks a €3.4 trillion AI-driven economic boost by 2030.

CCIA Europe is calling on EU heads of state to order a pause on implementation so that companies have time to comply. Such a delay would allow final standards to be completed, giving developers clarity and supporting AI competitiveness.

Failure to adjust the timeline could leave Europe struggling to lead in AI, according to CCIA Europe’s leadership. A rushed approach, they argue, risks harming the very innovation the AI Act aims to promote.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Infosys chairman warns of global risks from tariffs and AI

Infosys chairman Nandan Nilekani has warned of mounting global uncertainty driven by tariff wars, AI and the ongoing energy transition.

At the company’s 44th annual general meeting, he urged businesses to de-risk sourcing and diversify supply chains as geopolitical trade tensions reshape global commerce.

He described a ‘perfect storm’ of converging challenges pushing the world away from a single global market and towards fragmented trade blocs. As firms navigate the shift, they must choose between regions and adopt more strategic, resilient supply networks.

Addressing AI, Nilekani acknowledged the disruption it may bring to the workforce but framed it as an opportunity for digital transformation. He said Infosys is investing in both ‘AI foundries’ for innovation and ‘AI factories’ for scale, with over 275,000 employees already trained in AI technologies.

The energy transition was also flagged as a major uncertainty, since the future depends on breakthroughs in renewable sources such as solar, wind and hydrogen. Nilekani stressed that every business must now navigate rapid technological and operational change before it can move confidently into an unpredictable future.

Google releases free Gemini CLI tool for developers

Google has introduced Gemini CLI, a free, open-source AI tool that connects developers directly to its Gemini AI models. The new agentic utility allows developers to request debugging, generate code, and run commands using natural language within their terminal environment.

Built as a lightweight interface, Gemini CLI provides a streamlined way to interact with Gemini. While its coding features stand out, Google says the tool handles content creation, deep research, and complex task management across various workflows.

Gemini CLI uses Gemini 2.5 Pro by default for coding and reasoning tasks, but it can also connect to other AI models, such as Imagen and Veo, for image and video generation. It supports the Model Context Protocol (MCP) and integrates with Gemini Code Assist.

The tool is available on Windows, macOS, and Linux, with a free usage tier for developers. For advanced setups involving multiple agents or custom models, access through Vertex AI or AI Studio is available on a pay-as-you-go basis.

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Díaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

WhatsApp launches AI feature to summarise unread messages

WhatsApp has introduced a new feature that uses Meta AI to help users manage unread messages more easily. Named ‘Message Summaries’, the tool provides quick overviews of missed messages in individual and group chats, helping users catch up without scrolling through long threads.

The summaries are generated using Meta’s Private Processing technology, which operates inside a Trusted Execution Environment. The secure cloud-based system ensures that neither Meta nor WhatsApp — nor anyone else in the conversation — can access your messages or the AI-generated summaries.

According to WhatsApp, Message Summaries are entirely private: no one else in the chat can see the summary created for you. If anyone attempts to tamper with the secure system, processing stops immediately or the tampering is exposed by a built-in transparency check.

Meta has designed the system around three principles: secure data handling during processing and transmission, strict enforcement of protections against tampering, and provable transparency to track any breach attempt.

Nvidia becomes world’s most valuable company after stock surge

Nvidia shares hit an all-time high on 25 June, rising 4.3 percent to US$154.31. The stock has surged 63 percent since April, adding another US$1.5 trillion to its market value.

With a total market capitalisation of about US$3.77 trillion, Nvidia has overtaken Microsoft to become the world’s most valuable listed company.

Strong earnings and growing AI infrastructure spending by major clients — including Microsoft, Meta, Alphabet and Amazon — have reinforced investor confidence.

Nvidia’s CEO, Jensen Huang, told shareholders that demand remains strong and that the computer industry is still in the early stages of a major AI upgrade cycle.

Despite gaining 15 percent in 2025, following a 170 percent rise in 2024 and a 240 percent surge in 2023, Nvidia still appears reasonably valued. It trades at 31.5 times forward earnings, below its 10-year average and close to the Nasdaq 100 multiple, even though its projected growth rate is higher.

Analyst sentiment remains firmly bullish. Nearly 90 percent of analysts tracked by Bloomberg recommend buying the stock, which trades below their average price target.

Yet, Nvidia is less widely held among institutional investors than peers like Microsoft and Apple, indicating further room for buying as AI momentum continues into 2026.

Fake video claims Nigeria is sending troops to Israel

A video circulating on TikTok falsely claims that Nigeria has announced the deployment of troops to Israel. Since 17 June, the video has been shared more than 6,100 times and presents a fabricated news segment constructed from artificial intelligence-generated visuals and outdated footage.

No official Nigerian authority has made any such announcement regarding military involvement in the ongoing Middle East crisis.

The video, attributed to a fictitious media outlet called ‘TBC News’, combines visuals of soldiers and aircraft with simulated newsroom graphics. However, no broadcaster by that name exists, and the logo and branding do not correspond to any known or legitimate media source.

Upon closer inspection, several anomalies suggest the use of generative AI. The news presenter’s appearance subtly shifts throughout the segment — with clothing changes, facial inconsistencies, and robotic voiceovers indicating non-authentic production.

Similarly, the footage of military activity lacks credible visual markers. For example, a purported official briefing displays a coat of arms inconsistent with Nigeria’s national emblems, and the flags and insignia typically present at such events are absent.

While two brief aircraft clips appear authentic — originally filmed during a May airshow in Lagos — the remainder seems digitally altered or artificially generated.

In reality, Nigerian officials have sharply criticised Israel’s recent military actions in Iran and have given no indication of any intent to provide military support to Israel.

The video in question, therefore, significantly distorts Nigeria’s diplomatic position and risks exacerbating tensions during an already sensitive period in international affairs.

WSIS prepares for Geneva as momentum builds for impactful digital governance

As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11 July in Geneva, stakeholders from across sectors gathered at the Internet Governance Forum in Norway to reflect on WSIS’s evolution and map a shared path forward.

The session, moderated by Gitanjali Sah of ITU, brought together over a dozen speakers from governments, UN agencies, civil society, and the technical and business communities.

The event is significant, marking two decades since the WSIS process began. Over that time, WSIS has grown into a multistakeholder framework involving more than 50 UN entities. While the action lines offer a structured and inclusive approach to digital cooperation, participants acknowledged that measurement and implementation remain the weakest links.

Ambassador Thomas Schneider of Switzerland—co-host of the upcoming high-level event—called for a shift from discussion to decision-making. “Dialogue is necessary but not sufficient,” he stated. “We must ensure these voices translate into outcomes.” Echoing this, South Africa’s representative, Cynthia, reaffirmed her country’s leadership as chair-designate of the event and its commitment to inclusive governance via its G20 presidency focus on AI, digital public infrastructure, and small business support.

UNDP’s Yu Ping Chan shared insights from the field: “Capacity building remains the number one request from governments. It’s not a new principle—it has been central since WSIS began.” She cited UNDP’s work on the Hamburg Declaration on responsible AI and AI ecosystem development in Africa as examples of translating global dialogue into national action.

Tatevik Grigoryan from UNESCO emphasised the enduring value of WSIS’s human rights-based foundations. “We continue to facilitate action lines on access to information, e-learning, and media ethics,” she said, encouraging engagement with UNESCO’s ROAM-X framework as a tool for ethical, inclusive digital societies.

Veni from ICANN reinforced the technical community’s role, expressing hope that the WSIS Forum would be formally recognised in the UN’s review documents. “We must not overlook the forum’s contributions. Multistakeholder governance remains essential,” he insisted.

Representing the FAO, Dejan Jakovljević reminded participants that 700 million people remain undernourished. “Digital transformation in agriculture is vital. But farmers without connectivity are left behind,” he said, highlighting the WSIS framework’s role in fostering collaboration across sectors.

Anriette Esterhuysen of APC called on civil society to embrace WSIS as a complementary forum to the IGF. “WSIS gives us a policy and implementation framework. It’s not just about talk—it’s about tools we can use at the national level.”

The Inter-Parliamentary Union’s Andy Richardson underscored parliaments’ dual role: advancing innovation while protecting citizens. Meli from the International Chamber of Commerce pointed to business engagement through AI-related workshops and discussions on strengthening the multistakeholder model.

Gitanjali Sah acknowledged past successes but urged continued ambition. “We were very ambitious in 1998—and we must be again,” she said. Still, she noted a persistent challenge: “We lack clear indicators to measure WSIS action line progress. That’s a gap we must close.”

The upcoming Geneva event will feature 67 ministers, 72 WSIS champions, and a youth programme alongside the AI for Good summit. Delegates were encouraged to submit input to the UN review process by 15 July and to participate in shaping a WSIS future that is more measurable, inclusive, and action-oriented.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI sandboxes pave path for responsible innovation in developing countries

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandboxes—safe, controlled environments for testing new technologies under regulatory oversight—can help ensure that innovation remains responsible and inclusive, especially in developing countries. Moderated by Sophie Tomlinson of the DataSphere Initiative, the session spotlighted the growing global appeal of sandboxes, initially developed for fintech, and now extending into healthcare, transportation, and data governance.

Speakers emphasised that sandboxes provide a much-needed collaborative space for regulators, companies, and civil society to test AI solutions before launching them into the real world. Mariana Rozo-Paz from the DataSphere Initiative likened them to childhood spaces for building and experimentation, underscoring their agility and potential for creative governance.

From the European AI Office, Alex Moltzau described how the EU AI Act integrates sandboxes to support safe innovation and cross-border collaboration. On the African continent, where 25 sandboxes already exist (mainly in finance), countries like Nigeria are using them to implement data protection laws and shape national AI strategies. However, funding and legal authority remain hurdles.

The workshop laid bare several shared challenges: limited resources, a lack of clear legal frameworks, and insufficient civil society participation. Natalie Cohen of the OECD pointed out that just 41% of people trust governments to regulate new technologies effectively—a gap that sandboxes can help bridge. By enabling evidence-based experimentation and promoting transparency, they serve as trust-building tools among governments, businesses, and communities.

Despite regional differences, there was consensus that AI sandboxes—when well-designed and inclusive—can drive equitable digital innovation. With initiatives like the Global Sandboxes Forum and OECD toolkits in progress, stakeholders signalled a readiness to move from theory to practice, viewing sandboxes as more than just regulatory experiments—they are, increasingly, catalysts for international cooperation and responsible AI deployment.

Anthropic AI training upheld as fair use; pirated book storage heads to trial

A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, marking a pivotal decision for the generative AI industry.

The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.

The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.

The proposed class action is among several suits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.

Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.

‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.

However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.

The judge ordered a trial scheduled for December to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.

Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.

The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.

Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.

The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.
