Nvidia becomes world’s most valuable company after stock surge

Nvidia shares hit an all-time high on 25 June, rising 4.3 percent to US$154.31. The stock has surged 63 percent since April, adding another US$1.5 trillion to its market value.

With a total market capitalisation of about US$3.77 trillion, Nvidia has overtaken Microsoft to become the world’s most valuable listed company.

Strong earnings and growing AI infrastructure spending by major clients — including Microsoft, Meta, Alphabet and Amazon — have reinforced investor confidence.

Nvidia’s CEO, Jensen Huang, told shareholders that demand remains strong and that the computer industry is still in the early stages of a major AI upgrade cycle.

Despite gaining 15 percent in 2025, following a 170 percent rise in 2024 and a 240 percent surge in 2023, Nvidia still appears reasonably valued. It trades at 31.5 times forward earnings, below its 10-year average and close to the Nasdaq 100 multiple, even though its projected growth rate is higher.
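As a rough, back-of-the-envelope illustration of what that multiple implies, here is a minimal Python sketch using only the figures quoted above; the implied earnings-per-share number is purely illustrative, not reported guidance.

```python
# Back-of-the-envelope sketch using only the figures quoted in this article;
# the implied EPS is illustrative, not company guidance.
share_price = 154.31   # US$, 25 June closing high
forward_pe = 31.5      # price divided by expected next-12-month earnings per share

implied_forward_eps = share_price / forward_pe
print(f"Implied forward EPS: ${implied_forward_eps:.2f}")  # roughly $4.90
```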

Analyst sentiment remains firmly bullish. Nearly 90 percent of analysts tracked by Bloomberg recommend buying the stock, which trades below their average price target.

Yet, Nvidia is less widely held among institutional investors than peers like Microsoft and Apple, indicating further room for buying as AI momentum continues into 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and ICANN lead push for multilingual and inclusive internet governance

At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts gathered to discuss how to better involve diverse communities, especially indigenous and underrepresented groups, in the technical governance of the internet. The session, led by Niger’s Anne Rachel Inne, emphasised that meaningful participation requires more than token inclusion; it demands structural reforms and practical engagement tools.

Central to the dialogue was the role of multilingualism, which UNESCO’s Guilherme Canela de Souza described as both a right and a necessity for true digital inclusion. ICANN’s Theresa Swinehart spotlighted ‘Universal Acceptance’ as a tangible step toward digital equality, ensuring that domain names and email addresses work in all languages and scripts.
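To make the idea concrete, the sketch below (a minimal illustration, not an ICANN tool) uses Python’s built-in idna codec to convert an internationalised domain name to its ASCII-compatible ‘Punycode’ form and back; Universal Acceptance asks that software handle names and addresses in either form, in any script, without rejecting them.

```python
# Minimal illustration of internationalised domain name (IDN) handling,
# using Python's built-in idna codec. Not an ICANN tool.
unicode_domain = "münchen.de"  # example domain containing a non-ASCII label

# Convert the Unicode name to its ASCII-compatible (Punycode) form.
ascii_form = unicode_domain.encode("idna").decode("ascii")
print(ascii_form)        # xn--mnchen-3ya.de

# Convert the ASCII form back to the original Unicode name.
round_tripped = ascii_form.encode("ascii").decode("idna")
print(round_tripped)     # münchen.de
```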

Real-world examples, like hackathons with university students in Bahrain, showcased how digital cooperation can bridge technical skills and community needs. Meanwhile, Valts Ernstreits from Latvia shared how international engagement helped elevate the status of the Livonian language at home, proving that global advocacy can yield local policy wins.

The workshop addressed persistent challenges to inclusion: from bureaucratic hurdles that exclude indigenous communities to the lack of connections between technical and policy realms. Panellists agreed that real change hinges on collaboration, mentorship, and tools that meet people where they are, like WhatsApp groups and local capacity-building networks.

Participants also highlighted UNESCO’s roadmap for multilingualism and ICANN’s upcoming domain name support program as critical opportunities for further action. In a solution-oriented close, speakers urged continued efforts to make digital spaces more representative.

They underscored the need for long-term investment in community-driven infrastructure and policies that reflect the internet’s global diversity. The message was clear: equitable internet governance can only be achieved when all voices—across languages, regions, and technical backgrounds—are heard and empowered.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Anthropic AI training upheld as fair use; pirated book storage heads to trial

A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, marking a pivotal decision for the generative AI industry.

The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.

The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.

The proposed class action is among several lawsuits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.

Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.

‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.

However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.

The judge ordered a trial, scheduled for December, to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.

Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.

The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.

Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.

The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI drives fall in graduate jobs

According to new figures from Indeed, AI adoption across industries has contributed to a steep drop in graduate job listings. The jobs platform reported a one-third fall in advertised roles for recent graduates, the lowest level seen in almost a decade.

Major professional services firms have significantly scaled back their graduate intakes in response to shifting labour demands. KPMG, Deloitte, EY and PwC all reported reductions, with KPMG cutting its graduate cohort by a third.

The UK government has pledged to improve the nation’s AI skills through partnerships to upskill 7.5 million workers. Prime Minister Keir Starmer announced the plan during London Tech Week as part of efforts to prepare for an AI-driven economy.

Concerns over AI replacing human roles were highlighted in a controversial ad campaign by Californian firm Artisan, which sparked complaints to the UK’s Advertising Standards Authority. The campaign’s slogan urged companies to stop hiring humans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Verizon and Nokia secure UK contract

Verizon and Nokia have partnered to deliver private 5G networks at Thames Freeport in the UK. The networks will support industrial operations with high-speed, reliable connectivity, enabling AI, automation, and real-time data processing.

The UK contract is part of a broader multibillion-dollar transformation of the region. Nokia will provide all hardware and software, powering major sites, including DP World London Gateway and Ford’s Dagenham plant.

Preparations for 6G are already underway, with Nokia expecting commercial rollout by late 2029. The technology promises enhanced AI capabilities, improved device battery life, and efficient spectrum sharing with 5G.

Thanks to advanced spectrum management features, the transition between 5G and 6G is expected to be smooth. Both networks will operate simultaneously without interference, supporting the next generation of industrial and consumer technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft Family Safety blocks Google Chrome on Windows 11

Windows 11 users have reported that Google Chrome crashes and fails to reopen when Microsoft Family Safety parental controls are active.

The issue appears to be linked to Chrome’s recent update, version 137.0.7151.68, and does not affect users of Microsoft Edge under the same settings.

Google acknowledged the problem and provided a workaround involving changes to family safety settings, such as unblocking Chrome or adjusting content filters.

Microsoft has not issued a formal statement, but its Family Safety FAQ confirms that browsers other than Edge are blocked when web filtering is enabled.

Users are encouraged to update Google Chrome to version 138.0.7204.50 to address other security concerns recently disclosed by Google.

The update aims to patch vulnerabilities that could let attackers bypass security policies and run malicious code.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Top 7 AI agents transforming business in 2025

AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors.

From customer service to data analysis, AI-powered agents transform workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input.

Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function.

These systems interpret goals and act on behalf of employees, boosting productivity without needing constant prompts.

Other leading platforms include Amelia, known for its enterprise-grade capabilities in finance and telecom; Claude by Anthropic, focused on safe and transparent reasoning; and North by Cohere, which delivers sector-specific AI for clients like Oracle and SAP.

Many of these tools offer no-code or low-code setups, enabling faster adoption across HR, finance, customer support, and more.

While most agents aren’t entirely autonomous, they’re designed to perform meaningful work and evolve with feedback.

The rise of agentic AI marks a significant shift in workplace automation as businesses move beyond experimentation toward real-world implementation, one workflow at a time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would be able to learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clarity in its privacy practices, even though it lost some points on transparency.

ChatGPT followed in second place, earning praise for providing clear privacy policies and offering users tools to limit data use despite concerns about handling training data. Grok, xAI’s chatbot, took the third position, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

Speakers noted that AI can address systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon-Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on choices made today, and that those choices must centre people, fairness, and global solidarity.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.