Judge rejects UMG’s bid to block Anthropic

A US federal judge has denied a request by Universal Music Group and other publishers to block AI firm Anthropic from using copyrighted song lyrics to train its chatbot, Claude.

Judge Eumi Lee ruled that the publishers failed to prove Anthropic’s actions caused them ‘irreparable harm’ and said their request was too broad. The lawsuit, filed in 2023, accuses Anthropic of infringing on lyrics from at least 500 songs by artists such as Beyoncé and the Rolling Stones without permission.

The case is part of a wider debate over AI training and copyright law, with companies like OpenAI and Meta arguing that their use of copyrighted material falls under ‘fair use.’

Publishers claim that Anthropic’s actions threaten the licensing market for lyrics, but the court ruled that defining such a market is premature while fair use remains unresolved.

Lee’s decision did not address whether AI training with copyrighted works constitutes fair use, leaving that question open for future legal battles.

Anthropic welcomed the ruling, calling the publishers’ request ‘disruptive and amorphous,’ while the publishers remain confident in their broader case against the AI company.

The lawsuit highlights the growing tension between content creators and AI firms as courts and lawmakers grapple with the legal and ethical implications of training AI on copyrighted material.

For more information on these topics, visit diplomacy.edu.

French tech giant bets on US expansion

Schneider Electric has announced plans to invest more than $700 million into its US operations over the next two years to support the rising energy demands driven by AI technology.

The French firm aims to boost its manufacturing capacity and strengthen US energy resilience.

The expansion includes new and upgraded facilities across states like Texas, Ohio, and the Carolinas, with over 1,000 new jobs expected. Combined with previous spending, Schneider’s total US investment this decade will exceed $1 billion.

The move also comes amid ongoing trade tensions and tariff threats, which have prompted many global firms to shift production back to US soil.

Schneider says the investment marks a turning point for American industry, driven by AI’s rapid growth.

Tech investors shift focus to AI adopters over hardware makers

European companies investing heavily in generative AI must start delivering returns by next year or risk losing investor confidence. While AI stocks have seen significant interest, recent market volatility and recession fears have led to a shift in investment focus.

Many investors are now favouring companies that integrate AI, such as SAP and RELX, over those supplying AI hardware. However, adopters must also prove that AI investments translate into profitability, or they too could fall out of favour.

The launch of DeepSeek in January sparked a tech selloff by offering a low-cost AI model that threatened demand for expensive chips. Hardware firms like ASM International and BE Semiconductor have since suffered sharp declines, while AI adopters have fared better.

SAP, for example, recently overtook Novo Nordisk as Europe’s most valuable company after suffering only a minor stock dip. Analysts warn that companies promising AI-driven growth must soon demonstrate tangible financial benefits, or investors will reassess their high valuations.

Market patience is wearing thin, with some analysts suggesting 2025 as a key deadline for AI firms to show meaningful impact. An internal Fidelity survey revealed that 72% of analysts expect AI to have no major profitability impact next year, though optimism grows over a five-year horizon.

Investors like Lazard and Schroders stress the need for viable AI applications that generate revenue, while asset managers at Amundi warn that AI firms trading at high multiples could see their valuations adjusted if returns fail to materialise.

US report highlights China’s growing military capabilities

A US intelligence report has identified China as the top military and cyber threat, warning of Beijing’s growing capabilities in AI, cyber warfare, and conventional weaponry.

The report highlights China’s ambitions to surpass the US as the leading AI power by 2030 and its steady progress towards military capabilities that could be used to capture Taiwan.

It also warns that China could target US infrastructure through cyberattacks and space-based assets.

The findings, presented to the Senate Intelligence Committee, sparked tensions between Washington and Beijing. Chinese officials rejected the report, accusing the US of using outdated Cold War thinking and hyping the ‘China threat’ to maintain military dominance.

China’s foreign ministry also criticised US support for Taiwan, urging Washington to stop backing separatist movements.

Meanwhile, Beijing dismissed accusations that it has failed to curb fentanyl shipments, a key source of US overdose deaths.

The report also notes that Russia, Iran, and North Korea are working to challenge US influence through military and cyber tactics.

While China continues to expand its global footprint, particularly in Greenland and the Arctic, the report points to internal struggles, including economic slowdowns and demographic challenges, that could weaken the Chinese government’s stability.

The intelligence report underscores ongoing concerns in Washington about Beijing’s long-term ambitions and its potential impact on global security.

China’s AI industry is transforming with open-source models, challenging OpenAI’s proprietary approach

China’s AI landscape is witnessing a profound transformation as it embraces open-source large language models (LLMs), largely propelled by the innovative efforts of DeepSeek. The startup’s R1 model, released under the highly permissive ‘MIT License,’ has sparked a significant shift away from proprietary approaches dominated by major American tech firms, paving the way for increased accessibility, collaboration, and innovation.

That transition is likened to an ‘Android moment’ for China’s AI industry, highlighting the sector’s move towards more open and flexible AI development. The ripple effects of this open-source movement are evident across China’s tech giants. Baidu, long a proponent of proprietary models, has made its AI models Ernie 4.5 and Ernie X1 freely available and plans to release them as open source.

This strategic pivot reflects competitive pressure from disruptors like DeepSeek, prompting companies to revise their business models to maintain market relevance. Alibaba and Tencent are joining the trend by open-sourcing their AI offerings, while smaller firms like ManusAI are following suit, embracing the open-source ethos to drive innovation and market presence.

The shift towards open-source models in China contrasts starkly with OpenAI’s continued focus on proprietary strategies bolstered by hefty investments. The open-source trend underscores a growing debate about the future of AI development, investment, and competitive dynamics, with open-source frameworks emerging as potential drivers of sustainable growth and inclusive technological advancement.

Re-evaluating the scaling hypothesis: The AI industry’s shift towards innovative strategies

In recent years, the AI industry has heavily invested in the ‘scaling hypothesis,’ which posited that artificial general intelligence (AGI) could be achieved by expanding data sets, model sizes, and computational power. That belief, championed by industry leaders like OpenAI and advocated by figures such as Nando de Freitas, led to ventures like the OpenAI/Oracle/SoftBank joint project Stargate and fuelled a half-trillion-dollar quest for AI breakthroughs.

Yet, scepticism has grown, as critics have pointed out that scaling often falls short of fostering genuine comprehension. Models continue to produce errors, hallucinations, and unreliable reasoning, raising doubts about fulfilling AGI’s promises with scaling alone.

As the AI landscape evolves, figures such as investor Marc Andreessen and Microsoft CEO Satya Nadella have increasingly criticised scaling’s limitations. Speaking at a Microsoft event, Nadella described scaling laws as predictable but impermanent trends, akin to the once-reliable Moore’s Law, which has slowed over time.

Once hailed as the future path, scaling is being re-evaluated in light of these emerging limitations, suggesting a need for a more nuanced approach. To address this, the industry has pivoted towards ‘test-time compute,’ allowing AI systems more time to deliberate on tasks.

While promising, its effectiveness is limited to fields like maths and coding, leaving broader AI functions grappling with fundamental issues. Products like Grok 3 have underscored this problem, as significant computational investments failed to overcome persistent errors, triggering customer dissatisfaction and financial reconsiderations.

Why does it matter?

With the scaling premise failing to meet expectations, the industry faces a potential financial correction and recognises the need for innovative approaches that transcend mere data and power expansion. For substantial AI progress, investors and nations should shift focus from scaling to nurturing bold research and novel solutions that address the complex challenges AI faces. Long-term investments in inventive strategies could pave the way for achieving reliable, intelligent AI systems that reach beyond the allure of simple scaling.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles regarding sourcing a large volume of high-quality text required for AI training. The company evaluated legal licensing for acquiring books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

Anduril confident in Trump-era defence priorities

Anduril, the AI-powered defence start-up founded by Palmer Luckey, is optimistic about the Trump administration’s approach to defence reform.

Company president Christian Brose said the administration’s focus on innovation aligns with Anduril’s work in low-cost autonomous military systems. The firm recently partnered with OpenAI to integrate advanced artificial intelligence into national security missions.

Brose, a former adviser to Senator John McCain, has long criticised traditional defence procurement processes and believes the administration’s willingness to do things differently presents a major opportunity.

The company is expanding its global footprint, with plans to build manufacturing facilities outside the United States. Australia has emerged as a key market, with Anduril’s AI intrusion detection software being trialled at RAAF Base Darwin, where US Marines rotate annually.

The firm is also bidding to produce solid rocket motors for Australia’s Guided Weapons and Explosive Ordnance Enterprise.

Its Ghost Shark autonomous underwater system, developed in collaboration with the Australian Defence Force, is moving towards large-scale production, with a dedicated facility planned in New South Wales.

Autonomous military technology is a growing focus under the AUKUS pact, under which Australia will invest heavily in nuclear-powered submarines with support from the United States and the United Kingdom.

Brose emphasised that both crewed and autonomous systems will play a role in modern defence strategies, with the advantage of autonomous platforms being their faster production, larger deployment scale, and lower cost.

Anduril’s continued expansion highlights the increasing demand for AI-driven defence solutions in a rapidly evolving global security landscape.

OpenAI unveils new image generator in ChatGPT

OpenAI has rolled out an image generator feature within ChatGPT, enabling users to create realistic images with improved accuracy. The new feature, available for all Plus, Pro, Team, and Free users, is powered by GPT-4o, which now offers distortion-free images and more accurate text generation.

OpenAI shared a sample image of a boarding pass, showcasing the advanced capabilities of the new tool.

Previously, image generation was available through DALL-E, but its results often contained errors and were easily identifiable as AI-generated. Now integrated into ChatGPT, the new tool allows users to describe images with specific details such as colours, aspect ratios, and transparent backgrounds.

The update aims to enhance creative freedom while maintaining a higher standard of image quality.

CEO Sam Altman praised the feature as a ‘new high-water mark’ for creative control, although he acknowledged the potential for some users to create offensive content.

OpenAI plans to monitor how users interact with this tool and adjust as needed, especially as the technology moves closer to artificial general intelligence (AGI).

AI startups in Silicon Valley rethink VC funding with leaner teams and strategic growth

In Silicon Valley, a notable trend is emerging as AI startups achieve significant revenue with leaner teams, challenging traditional venture capital (VC) funding models. Companies, sometimes with as few as 20 employees, are reporting revenues reaching tens of millions, highlighted by their participation in the accelerator Y Combinator (YC).

That shift signifies a transformation in startup dynamics, as many founders now want to scale without relying heavily on VC funding. Some liken reducing VC dependency to summiting Mount Everest with minimal oxygen, turning down capital even in oversubscribed rounds. Raising less allows founders to retain greater ownership and flexibility for future business decisions.

This strategic restraint is partly informed by past experiences in which inflated valuations forced companies to endure ‘down rounds’. Terrence Rohan of Otherwise Fund notes that it is becoming more common for YC startups to accept less capital than is offered, reflecting a more nuanced understanding of the implications of equity dilution.

However, not everyone endorses this strategy. Parker Conrad, CEO of Rippling, argues that lower funding could hinder a startup’s ability to invest in crucial growth areas like R&D and marketing, which are vital for product development and competitive advantage.

Conrad stresses the importance of substantial funding to accelerate growth, suggesting that it plays a crucial role in market expansion. Despite differing viewpoints, the examples of AI startups like Anysphere and ElevenLabs, which achieved high revenue with minimal staff yet secured significant funding, illustrate the ongoing allure of venture capital.

Overall, a changing perception is taking hold among YC founders, who are now more aware of both the advantages and pitfalls of VC funding. Pursuing capital from elite VC firms is no longer the sole indicator of success.

Instead, these startups favour strategic fundraising, weighing the risks of overvaluation and excessive dilution. That shift reflects a broader evolution in the startup ecosystem, as founders balance lean operations with the potential benefits of venture capital to shape growth while maintaining control.
