China faces Nvidia chip shortages

Chinese server manufacturer H3C has warned of potential shortages of Nvidia’s H20 chip, the most advanced AI processor still legally available in the country under US export controls.

In a notice to clients, the company revealed that its stock of H20 chips was nearly depleted, citing geopolitical tensions as a major factor affecting global supply chains.

New shipments are expected by mid-April, but future availability remains uncertain due to ongoing trade restrictions and supply disruptions.

The demand for H20 chips has surged, particularly as companies race to integrate AI models developed by Chinese startup DeepSeek.

Major tech firms such as Tencent, Alibaba, and ByteDance have significantly increased their orders, leading to further strain on supply.

H3C stated that future chip distribution will prioritise long-term, high-margin customers under a profit-first approach, raising concerns among smaller buyers about access to the critical technology.

The H20 was introduced after the US tightened export controls on high-performance AI chips in October 2023, blocking Nvidia’s most advanced processors from the Chinese market.

Washington has restricted such exports since 2022, citing national security concerns over China’s potential military applications of AI technology.

Despite these measures, Nvidia has reportedly shipped around one million H20 units in 2024, generating more than $12 billion in revenue. Meanwhile, domestic alternatives from Huawei and Cambricon are emerging as potential substitutes amid the ongoing supply crunch.

For more information on these topics, visit diplomacy.edu.

Judge rejects UMG’s bid to block Anthropic

A US federal judge has denied a request by Universal Music Group and other publishers to block AI firm Anthropic from using copyrighted song lyrics to train its chatbot, Claude.

Judge Eumi Lee ruled that the publishers failed to prove Anthropic’s actions caused them ‘irreparable harm’ and said their request was too broad. The lawsuit, filed in 2023, accuses Anthropic of infringing on lyrics from at least 500 songs by artists such as Beyoncé and the Rolling Stones without permission.

The case is part of a wider debate over AI training and copyright law, with companies like OpenAI and Meta arguing that their use of copyrighted material falls under ‘fair use.’

Publishers claim that Anthropic’s actions threaten the licensing market for lyrics, but the court ruled that defining such a market is premature while fair use remains unresolved.

Lee’s decision did not address whether AI training with copyrighted works constitutes fair use, leaving that question open for future legal battles.

Anthropic welcomed the ruling, calling the publishers’ request ‘disruptive and amorphous,’ while the publishers remain confident in their broader case against the AI company.

The lawsuit highlights the growing tension between content creators and AI firms as courts and lawmakers grapple with the legal and ethical implications of training AI on copyrighted material.

Tech investors shift focus to AI adopters over hardware makers

European companies investing heavily in generative AI must start delivering returns by next year or risk losing investor confidence. While AI stocks have seen significant interest, recent market volatility and recession fears have led to a shift in investment focus.

Many investors are now favouring companies that integrate AI, such as SAP and RELX, over those supplying AI hardware. However, adopters must also prove that AI investments translate into profitability, or they too could fall out of favour.

The launch of DeepSeek in January sparked a tech selloff by offering a low-cost AI model that threatened demand for expensive chips. Hardware firms like ASM International and BE Semiconductor have since suffered sharp declines, while AI adopters have fared better.

SAP, for example, saw only a minor dip in its stock during the selloff and recently overtook Novo Nordisk as Europe’s most valuable company. Analysts warn that companies promising AI-driven growth must soon demonstrate tangible financial benefits, or investors will reassess their high valuations.

Market patience is wearing thin, with some analysts suggesting 2025 as a key deadline for AI firms to show meaningful impact. An internal Fidelity survey revealed that 72% of analysts expect AI to have no major profitability impact next year, though optimism grows over a five-year horizon.

Investors like Lazard and Schroders stress the need for viable AI applications that generate revenue, while asset managers at Amundi warn that AI firms trading at high multiples could see their valuations adjusted if returns fail to materialise.

US report highlights China’s growing military capabilities

A US intelligence report has identified China as the top military and cyber threat, warning of Beijing’s growing capabilities in AI, cyber warfare, and conventional weaponry.

The report highlights China’s ambitions to surpass the US as the leading AI power by 2030 and its steady progress towards military capabilities that could be used to capture Taiwan.

It also warns that China could target US infrastructure through cyberattacks and space-based assets.

The findings, presented to the Senate Intelligence Committee, sparked tensions between Washington and Beijing. Chinese officials rejected the report, accusing the US of using outdated Cold War thinking and hyping the ‘China threat’ to maintain military dominance.

China’s foreign ministry also criticised US support for Taiwan, urging Washington to stop backing separatist movements.

Meanwhile, Beijing dismissed accusations that it has failed to curb fentanyl shipments, a key source of US overdose deaths.

The report also notes that Russia, Iran, and North Korea are working to challenge US influence through military and cyber tactics.

While China continues to expand its global footprint, particularly in Greenland and the Arctic, the report points to internal struggles, including economic slowdowns and demographic challenges, that could weaken the Chinese government’s stability.

The intelligence report underscores ongoing concerns in Washington about Beijing’s long-term ambitions and its potential impact on global security.

China’s AI industry is transforming with open-source models, challenging OpenAI’s proprietary approach

China’s AI landscape is witnessing a profound transformation as it embraces open-source large language models (LLMs), largely propelled by the innovative efforts of DeepSeek. The startup’s R1 model, released under the highly permissive ‘MIT License,’ has sparked a significant shift away from proprietary approaches dominated by major American tech firms, paving the way for increased accessibility, collaboration, and innovation.

That transition is likened to an ‘Android moment’ for China’s AI industry, highlighting the sector’s move towards more open and flexible AI development. The ripple effects of this open-source movement are evident across China’s tech giants. Baidu, long a proponent of proprietary models, has announced a shift away from its proprietary stance, making its AI models Ernie 4.5 and Ernie X1 freely available, with plans to release them as open source.

This strategic pivot reflects the competitive pressure exerted by disruptors like DeepSeek, prompting companies to revise their business models to maintain market relevance. Alibaba and Tencent are also joining the trend by open-sourcing their AI offerings, while smaller firms like ManusAI are following suit, embracing the open-source ethos to drive innovation and market presence.

The shift towards open-source models in China contrasts starkly with OpenAI’s continued focus on proprietary strategies, bolstered by hefty investments. The open-source trend underscores a growing discourse on the future of AI development, investment, and competitive dynamics, with open-source frameworks emerging as potential harbingers of sustainable growth and inclusive technological advancement.

Re-evaluating the scaling hypothesis: The AI industry’s shift towards innovative strategies

In recent years, the AI industry has invested heavily in the ‘scaling hypothesis’, which posits that expanding data sets, model sizes, and computational power will eventually yield artificial general intelligence (AGI). That belief, championed by industry leaders like OpenAI and advocated by figures such as Nando de Freitas, led to ventures like Stargate, the OpenAI/Oracle/SoftBank joint project, and fuelled a half-trillion-dollar quest for AI breakthroughs.

Yet, scepticism has grown, as critics have pointed out that scaling often falls short of fostering genuine comprehension. Models continue to produce errors, hallucinations, and unreliable reasoning, raising doubts about fulfilling AGI’s promises with scaling alone.

As the AI landscape evolves, voices like industry investor Marc Andreessen and Microsoft CEO Satya Nadella have increasingly criticised scaling’s limitations. Nadella, at a Microsoft event, highlighted that scaling laws are more like predictable but non-permanent trends, akin to the once-reliable Moore’s Law, which has slowed over time.

Once hailed as the future path, scaling is being re-evaluated in light of these emerging limitations, suggesting a need for a more nuanced approach. To address this, the industry has pivoted towards ‘test-time compute,’ allowing AI systems more time to deliberate on tasks.

While promising, its effectiveness is limited to fields like maths and coding, leaving broader AI functions grappling with fundamental issues. Products like Grok 3 have underscored this problem, as significant computational investments failed to overcome persistent errors, triggering customer dissatisfaction and financial reconsiderations.

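The test-time-compute idea can be illustrated with a toy best-of-n sampling loop: rather than training a larger model, the system spends extra inference compute generating several candidate answers and keeps the one a verifier scores highest. This is a minimal sketch of the general technique, not any vendor’s actual implementation; `fake_model` and `verify` are hypothetical stand-ins for a real language model and answer checker.

```python
import random

def fake_model(prompt: str, rng: random.Random) -> int:
    """Hypothetical stand-in for an LLM sampled at non-zero temperature:
    returns a noisy guess at the answer to '2 + 2'."""
    return 4 + rng.choice([-1, 0, 0, 1])

def verify(prompt: str, answer: int) -> float:
    """Hypothetical verifier: scores 1.0 when the arithmetic checks out."""
    return 1.0 if answer == 4 else 0.0

def best_of_n(prompt: str, n: int, seed: int = 0) -> int:
    """Spend more inference compute (n samples) and keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [fake_model(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: verify(prompt, a))

# More samples (more test-time compute) raise the odds a correct candidate appears.
print(best_of_n("What is 2 + 2?", n=32))
```

With a reliable verifier, accuracy improves as n grows even though the underlying model never changes; as the article notes, this works best in domains where answers are mechanically checkable, such as maths and coding.
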
Why does it matter?

With the scaling premise failing to meet expectations, the industry faces a potential financial correction and recognises the need for innovative approaches that transcend mere data and power expansion. For substantial AI progress, investors and nations should shift focus from scaling to nurturing bold research and novel solutions that address the complex challenges AI faces. Long-term investments in inventive strategies could pave the way for achieving reliable, intelligent AI systems that reach beyond the allure of simple scaling.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles regarding sourcing a large volume of high-quality text required for AI training. The company evaluated legal licensing for acquiring books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

AI startups in Silicon Valley rethink VC funding with leaner teams and strategic growth

In Silicon Valley, a notable trend is emerging as AI startups achieve significant revenue with leaner teams, challenging traditional venture capital (VC) funding models. Companies, sometimes with as few as 20 employees, are reporting revenues in the tens of millions of dollars, highlighted by their participation in the accelerator Y Combinator (YC).

That shift signifies a transformation in startup dynamics, as many founders now want to scale without relying heavily on VC funding. Some liken reducing VC dependency to summiting Mount Everest with minimal oxygen, turning down capital even in oversubscribed rounds. Raising less allows founders to retain greater ownership and flexibility for future business decisions.

This strategy is partly informed by past experiences in which inflated valuations forced companies to endure ‘down rounds’. Terrence Rohan of Otherwise Fund notes that it is becoming more common for YC startups to accept less capital than is offered, reflecting a more nuanced understanding of the implications of equity dilution.

However, not everyone endorses this strategy. Parker Conrad, CEO of Rippling, argues that lower funding could hinder a startup’s ability to invest in crucial growth areas like R&D and marketing, which are vital for product development and competitive advantage.

Conrad stresses the importance of substantial funding to accelerate growth, suggesting that it plays a crucial role in market expansion. Despite differing viewpoints, the examples of AI startups like Anysphere and ElevenLabs, which achieved high revenue with minimal staff yet secured significant funding, illustrate the ongoing allure of venture capital.

Overall, a changing perception is taking hold among YC founders, who are now more aware of both the advantages and pitfalls of VC funding. Pursuing capital from elite VC firms is no longer the sole indicator of success.

Instead, these startups favour strategic fundraising, weighing the risks of overvaluation and excessive dilution. That shift reflects a broader evolution in the startup ecosystem, strategically balancing lean operations with the potential benefits of venture capital to shape growth and maintain control.

Google launches advanced Gemini 2.5 AI

Google has unveiled its new Gemini 2.5 AI models, starting with the experimental Gemini 2.5 Pro version.

Described as ‘thinking models’, these AI systems are designed to demonstrate advanced reasoning abilities, including the capacity to analyse information, draw logical conclusions, and handle complex problems with context and nuance.

The models aim to support more intelligent, context-aware AI agents in the future.

The Gemini 2.5 models improve on the Gemini 2.0 Flash Thinking model released in December, offering an enhanced base model and better post-training capabilities.

The Gemini 2.5 Pro model, which has already been rolled out for Gemini Advanced subscribers and is available in Google AI Studio, stands out for its strong reasoning and coding skills. It excels in maths and science benchmarks and can generate fully functional video games from simple prompts.

It is also expected to handle sophisticated tasks, from coding web apps to transforming and editing code. Google’s future plans involve incorporating these ‘thinking’ capabilities into all of its AI models, aiming to enhance their ability to tackle more complex challenges in various fields.

AI physiotherapy service helps UK patients manage back pain

Lower back pain, one of the world’s leading causes of disability, has left hundreds of thousands of people in the UK stuck on long waiting lists for treatment. To address the crisis, the NHS is trialling a new solution: Flok Health, the first AI-powered physiotherapy clinic approved by the Care Quality Commission.

The app offers patients immediate access to personalised treatment plans through pre-recorded videos driven by artificial intelligence.

Created by former Olympic rower Finn Stevenson and tech expert Ric da Silva, Flok aims to treat straightforward cases that don’t require scans or hands-on intervention.

Patients interact with an AI-powered virtual physio, responding to questions that tailor the treatment pathway, with over a billion potential combinations. Unlike generative AI, Flok uses a more controlled system, eliminating the risk of fabricated medical advice.

The service has already launched in Scotland and is expanding across England, with ambitions to cover half the UK within a year. Flok is also adding treatment for conditions like hip and knee osteoarthritis, and women’s pelvic health.

While promising, the system depends on patients correctly following instructions, as the AI cannot monitor physical movements. Real physiotherapists are available to answer questions, but they do not provide live feedback during exercises.

Though effective for some, not all users find AI a perfect fit. Some, like the article’s author, prefer the hands-on guidance and posture corrections of human therapists.

Experts agree AI has potential to make healthcare more accessible and efficient, but caution that these tools must be rigorously evaluated, continuously monitored, and designed to support – not replace – clinical care.
