Tech investors shift focus to AI adopters over hardware makers

European companies investing heavily in generative AI must start delivering returns by next year or risk losing investor confidence. While AI stocks have seen significant interest, recent market volatility and recession fears have led to a shift in investment focus.

Many investors are now favouring companies that integrate AI, such as SAP and RELX, over those supplying AI hardware. However, adopters must also prove that AI investments translate into profitability, or they too could fall out of favour.

The January launch of DeepSeek’s low-cost AI model sparked a tech selloff by threatening demand for expensive chips. Hardware firms like ASM International and BE Semiconductor have since suffered sharp declines, while AI adopters have fared better.

SAP, for example, suffered only a minor stock dip during the selloff and recently overtook Novo Nordisk as Europe’s most valuable company. Analysts warn that companies promising AI-driven growth must soon demonstrate tangible financial benefits, or investors will reassess their high valuations.

Market patience is wearing thin, with some analysts suggesting 2025 as a key deadline for AI firms to show meaningful impact. An internal Fidelity survey revealed that 72% of analysts expect AI to have no major profitability impact next year, though optimism grows over a five-year horizon.

Investors like Lazard and Schroders stress the need for viable AI applications that generate revenue, while asset managers at Amundi warn that AI firms trading at high multiples could see their valuations adjusted if returns fail to materialise.

US report highlights China’s growing military capabilities

A US intelligence report has identified China as the top military and cyber threat, warning of Beijing’s growing capabilities in AI, cyber warfare, and conventional weaponry.

The report highlights China’s ambitions to surpass the US as the leading AI power by 2030 and its steady progress towards military capabilities that could be used to capture Taiwan.

It also warns that China could target US infrastructure through cyberattacks and space-based assets.

The findings, presented to the Senate Intelligence Committee, sparked tensions between Washington and Beijing. Chinese officials rejected the report, accusing the US of using outdated Cold War thinking and hyping the ‘China threat’ to maintain military dominance.

China’s foreign ministry also criticised US support for Taiwan, urging Washington to stop backing separatist movements.

Meanwhile, Beijing dismissed accusations that it has failed to curb fentanyl shipments, a key source of US overdose deaths.

The report also notes that Russia, Iran, and North Korea are working to challenge US influence through military and cyber tactics.

While China continues to expand its global footprint, particularly in Greenland and the Arctic, the report points to internal struggles, including economic slowdowns and demographic challenges, that could weaken the Chinese government’s stability.

The intelligence report underscores ongoing concerns in Washington about Beijing’s long-term ambitions and its potential impact on global security.

China’s AI industry is transforming with open-source models, challenging OpenAI’s proprietary approach

China’s AI landscape is witnessing a profound transformation as it embraces open-source large language models (LLMs), largely propelled by the innovative efforts of DeepSeek. The startup’s R1 model, released under the highly permissive ‘MIT License,’ has sparked a significant shift away from proprietary approaches dominated by major American tech firms, paving the way for increased accessibility, collaboration, and innovation.
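
The licensing detail matters in practice: a model’s terms of use can be inspected programmatically before anyone builds on it. Below is a minimal sketch of such a check, assuming the huggingface_hub Python library and the public repo id deepseek-ai/DeepSeek-R1; neither is named in the original piece.

```python
# A minimal sketch: programmatically confirm the licence under which an
# open-weight model is published on the Hugging Face Hub.
# Assumes the huggingface_hub package and the repo id below.
from huggingface_hub import model_info

info = model_info("deepseek-ai/DeepSeek-R1")

# The licence appears both in the model card metadata and as a repo tag.
if info.card_data is not None:
    print("card licence:", info.card_data.license)  # expected: "mit"
print("licence tags:", [t for t in info.tags if t.startswith("license:")])
```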

The transition is likened to an ‘Android moment’ for China’s AI industry, highlighting the sector’s move towards more available and flexible AI development. The ripple effects of this open-source movement are evident across China’s tech giants. Baidu, long a proponent of proprietary models, has made its Ernie 4.5 and Ernie X1 models freely available and plans to release them as open source.

This strategic pivot reflects the competitive pressure from disruptors like DeepSeek, prompting companies to revise their business models to maintain market relevance. Alibaba and Tencent are also joining the trend by open-sourcing their AI offerings, while smaller firms like ManusAI are following suit, embracing the open-source ethos to drive innovation and market presence.

The shift towards open-source models in China contrasts starkly with OpenAI’s continued focus on proprietary strategies bolstered by hefty investments. The open-source trend underscores a growing debate about the future of AI development, investment, and competitive dynamics, with open-source frameworks emerging as a potential path to sustainable growth and more inclusive technological advancement.

Re-evaluating the scaling hypothesis: The AI industry’s shift towards innovative strategies

In recent years, the AI industry has invested heavily in the ‘scaling hypothesis’, which posited that artificial general intelligence (AGI) could be achieved by expanding data sets, model sizes, and computational power. That belief, championed by industry leaders like OpenAI and advocated by figures such as Nando de Freitas, led to ventures like Stargate, the OpenAI/Oracle/SoftBank joint project, and fuelled a half-trillion-dollar quest for AI breakthroughs.

Yet, scepticism has grown, as critics have pointed out that scaling often falls short of fostering genuine comprehension. Models continue to produce errors, hallucinations, and unreliable reasoning, raising doubts about fulfilling AGI’s promises with scaling alone.

As the AI landscape evolves, voices like industry investor Marc Andreessen and Microsoft CEO Satya Nadella have increasingly criticised scaling’s limitations. Nadella, at a Microsoft event, highlighted that scaling laws are more like predictable but non-permanent trends, akin to the once-reliable Moore’s Law, which has slowed over time.

Once hailed as the future path, scaling is being re-evaluated in light of these emerging limitations, suggesting a need for a more nuanced approach. To address this, the industry has pivoted towards ‘test-time compute,’ allowing AI systems more time to deliberate on tasks.
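
To make the idea concrete, here is a minimal sketch of one widely used test-time compute strategy, self-consistency: sample the model several times and keep the majority answer. The sample_answer stub is hypothetical; a real system would call an LLM with non-zero sampling temperature.

```python
# A minimal sketch of self-consistency, a common test-time compute
# strategy: spend more inference compute by drawing several candidate
# answers, then keep the majority vote.
from collections import Counter
import random

def sample_answer(question: str) -> str:
    # Placeholder: a real implementation would query a model here,
    # with temperature > 0 so the samples differ.
    return random.choice(["42", "42", "41"])

def self_consistent_answer(question: str, n_samples: int = 16) -> str:
    # More samples means more compute at inference time; the gain
    # comes from aggregating over the model's varied attempts.
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("What is 6 * 7?"))
```

The extra samples buy reliability only when answers can be compared or checked, which is one reason the gains concentrate in verifiable domains.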

While promising, the approach’s effectiveness has so far been largely confined to domains like maths and coding, leaving broader AI functions grappling with fundamental issues. Products like Grok 3 have underscored this problem: significant computational investment failed to overcome persistent errors, triggering customer dissatisfaction and renewed financial scrutiny.

Why does it matter?

With the scaling premise failing to meet expectations, the industry faces a potential financial correction and recognises the need for innovative approaches that transcend mere data and power expansion. For substantial AI progress, investors and nations should shift focus from scaling to nurturing bold research and novel solutions that address the complex challenges AI faces. Long-term investments in inventive strategies could pave the way for achieving reliable, intelligent AI systems that reach beyond the allure of simple scaling.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated legally licensing books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While these platforms provide wider access, they threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

AI startups in Silicon Valley rethink VC funding with leaner teams and strategic growth

In Silicon Valley, a notable trend is emerging as AI startups achieve significant revenue with leaner teams, challenging traditional venture capital (VC) funding models. Companies with as few as 20 employees, many of them alumni of the accelerator Y Combinator (YC), are reporting revenues in the tens of millions of dollars.

That shift signifies a transformation in startup dynamics, as many founders now want to scale without relying heavily on VC funding. Some liken reducing VC dependency, even in oversubscribed rounds, to summiting Mount Everest with minimal oxygen. Raising less capital allows founders to retain greater ownership and flexibility for future business decisions.

This strategy is partly informed by past experiences in which inflated valuations forced companies to endure ‘down rounds’. Terrence Rohan of Otherwise Fund notes that it is becoming more common for YC startups to accept less capital than is offered, reflecting a more nuanced understanding of the implications of equity dilution.

However, not everyone endorses this strategy. Parker Conrad, CEO of Rippling, argues that lower funding could hinder a startup’s ability to invest in crucial growth areas like R&D and marketing, which are vital for product development and competitive advantage.

Conrad stresses the importance of substantial funding to accelerate growth, suggesting that it plays a crucial role in market expansion. Despite differing viewpoints, the examples of AI startups like Anysphere and ElevenLabs, which achieved high revenue with minimal staff yet secured significant funding, illustrate the ongoing allure of venture capital.

Overall, a changing perception is taking hold among YC founders, who are now more aware of both the advantages and pitfalls of VC funding. Pursuing capital from elite VC firms is no longer the sole indicator of success.

Instead, these startups favour strategic fundraising, considering the risks of overvaluation and excessive dilution. That shift reflects a broader evolution in the startup ecosystem, balancing lean operations against the potential benefits of venture capital to shape growth while maintaining control.

Google launches advanced Gemini 2.5 AI

Google has unveiled its new Gemini 2.5 AI models, starting with the experimental Gemini 2.5 Pro version.

Described as ‘thinking models’, these AI systems are designed to demonstrate advanced reasoning abilities, including the capacity to analyse information, draw logical conclusions, and handle complex problems with context and nuance.

The models aim to support more intelligent, context-aware AI agents in the future.

The Gemini 2.5 models improve on the Gemini 2.0 Flash Thinking model released in December, offering an enhanced base model and better post-training capabilities.

The Gemini 2.5 Pro model, which has already been rolled out for Gemini Advanced subscribers and is available in Google AI Studio, stands out for its strong reasoning and coding skills. It excels in maths and science benchmarks and can generate fully functional video games from simple prompts.

It is also expected to handle sophisticated tasks, from coding web apps to transforming and editing code. Google’s future plans involve incorporating these ‘thinking’ capabilities into all of its AI models, aiming to enhance their ability to tackle more complex challenges in various fields.
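
For developers, access is through Google AI Studio or the Gemini API. The sketch below assumes the google-generativeai Python SDK and an experimental model id of the form used at launch (‘gemini-2.5-pro-exp-03-25’); check AI Studio for the identifier currently exposed to your account.

```python
# A minimal sketch of calling an experimental Gemini 2.5 Pro model.
# Assumes the google-generativeai SDK and the model id below, which
# may differ from what your account currently exposes.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# Ask for a step-by-step answer to exercise the model's reasoning.
response = model.generate_content(
    "Explain, step by step, why the sum of two odd numbers is even."
)
print(response.text)
```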

AI physiotherapy service helps UK patients manage back pain

Lower back pain, one of the world’s leading causes of disability, has left hundreds of thousands of people in the UK stuck on long waiting lists for treatment. To address the crisis, the NHS is trialling a new solution: Flok Health, the first AI-powered physiotherapy clinic approved by the Care Quality Commission.

The app offers patients immediate access to personalised treatment plans through pre-recorded videos driven by artificial intelligence.

Created by former Olympic rower Finn Stevenson and tech expert Ric da Silva, Flok aims to treat straightforward cases that don’t require scans or hands-on intervention.

Patients interact with an AI-powered virtual physio, responding to questions that tailor the treatment pathway, with over a billion potential combinations. Unlike generative AI, Flok uses a more controlled system, eliminating the risk of fabricated medical advice.
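
The distinction from generative AI is essentially architectural: instead of free-form text generation, every answer selects a path through a pre-authored decision tree. The sketch below is purely illustrative; none of the questions, thresholds, or programme names come from Flok.

```python
# Purely illustrative sketch of a controlled, non-generative triage
# flow: a fixed, hand-authored question tree whose every path leads to
# pre-approved content, so the system cannot fabricate advice.
# The questions, thresholds, and video names below are invented.
def pick_programme(answers: dict) -> str:
    # Each branch is authored and reviewed in advance, so the output
    # is always something a clinician has signed off.
    if answers["red_flags"]:                  # e.g. numbness, trauma
        return "refer_to_clinician"
    if answers["pain_score"] >= 7:
        return "gentle_mobility_video_A"
    if answers["desk_based_job"]:
        return "posture_and_core_video_B"
    return "general_strength_video_C"

print(pick_programme(
    {"red_flags": False, "pain_score": 5, "desk_based_job": True}
))
```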

The service has already launched in Scotland and is expanding across England, with ambitions to cover half the UK within a year. Flok is also adding treatment for conditions like hip and knee osteoarthritis, and women’s pelvic health.

While promising, the system depends on patients correctly following instructions, as the AI cannot monitor physical movements. Real physiotherapists are available to answer questions, but they do not provide live feedback during exercises.

Though effective for some, not all users find AI a perfect fit. Some, including the author of the original article, prefer the hands-on guidance and posture corrections of human therapists.

Experts agree AI has potential to make healthcare more accessible and efficient, but caution that these tools must be rigorously evaluated, continuously monitored, and designed to support – not replace – clinical care.

DeepSeek launches V3 to challenge OpenAI

Chinese AI startup DeepSeek has unveiled a major upgrade to its V3 large language model, intensifying the competition with US tech giants like OpenAI and Anthropic.

The new model, DeepSeek-V3-0324, is available via the AI development platform Hugging Face, showcasing significant advancements in reasoning and coding capabilities.
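
Because the weights are public, the model can in principle be loaded with standard tooling. A minimal sketch follows, assuming the transformers library and the repo id deepseek-ai/DeepSeek-V3-0324; note that the full model is far too large for consumer hardware, so this is illustrative rather than practical.

```python
# A minimal sketch of loading DeepSeek-V3-0324 from the Hugging Face
# Hub with transformers. Assumes the repo id below; the full model
# needs multi-GPU server hardware, so treat this as illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-V3-0324"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, device_map="auto"
)

inputs = tokenizer("Write a Python function that reverses a string.",
                   return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```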

Benchmark tests have highlighted notable improvements in technical performance over its predecessor. DeepSeek, which has quickly gained recognition in the AI industry, continues to release competitive models at lower operational costs than many Western counterparts.

Following the V3 launch in December, DeepSeek also introduced its R1 model in January, further establishing its presence in the global AI market.

Instagram users react to Meta’s new AI experiment

Meta has come under fire once again, this time over a new AI experiment on Instagram that suggests comments for users. Some users accused the company of using AI to inflate engagement metrics, potentially misleading advertisers and diminishing authentic user interaction.

The feature, spotted by test users, involves a pencil icon next to the comment bar on Instagram posts. Tapping it generates suggested replies based on the image’s content.

Meta has confirmed the feature is in testing but did not reveal plans for a broader launch. The company stated that it is exploring ways to incorporate Meta AI across different parts of its apps, including feeds, comments, groups, and search.

Public reaction has been largely negative, with concerns that AI-generated comments could flood the platform with inauthentic conversations. Social media users voiced fears of fake interactions replacing genuine ones, and some accused Meta of deceiving advertisers through inflated statistics.

Comparisons to dystopian scenarios were common, as users questioned the future of online social spaces.

This isn’t the first time Meta has faced backlash for its AI ventures. Previous attempts included AI personas modelled on celebrities and diverse identities, which were criticised as disingenuous and as the product of largely homogeneous development teams.

The future of AI-generated comments on Instagram remains uncertain as scrutiny continues to mount.

For more information on these topics, visit diplomacy.edu.