
Digital Watch Quarterly Newsletter – Issue 1

22 April 2025

AI trends in the first three months of 2025

Dear readers,

You are not alone if you are overwhelmed with AI news. Many of us have hundreds of open browser tabs waiting to be read. It’s frustrating that we cannot keep up with AI developments that shape our reality. This feeling inspired the Quarterly newsletter as a way to step back and identify trends.

The first quarterly newsletter of 2025 looks back at a packed start to the year. This year is anything but typical. With Trump’s return to the presidency, history is accelerating, and already rapid AI developments are gaining even more momentum.

Yet, within this fast-moving environment, slower, deeper shifts will shape our reality in the longer term. Since January, three key developments have set the tone for AI in 2025:

  1. The scaling limits of AI models are challenging how future models will be developed.
  2. AI governance is shifting away from the existential risk narrative toward more immediate, concrete concerns.
  3. Copyright and data protection requirements are constraining future AI development.

These three trends are unpacked in the first part of this Quarterly newsletter, followed by an analysis of the position of the leading actors in the AI race. You can also revisit quarterly developments in the context of our predictions for 2025.


Limits of AI scaling: from ‘bigger is better’ to ‘smaller can be smarter’ AI

One of the most profound transformations this year has been the move from a ‘bigger is better’ mindset to recognising that ‘smaller can be smarter’. Since the launch of ChatGPT in November 2022, the prevailing wisdom was simple: the more Nvidia chips you had, the better your AI model. Large, centralised systems—often described as mysterious or opaque—fuelled concerns about existential risks and inspired governance frameworks modelled on nuclear safety.

However, a quiet counter-narrative has been building. At Diplo, we began challenging the ‘bigger is better’ paradigm as early as June 2022, advocating instead for bottom-up, context-aware AI.

In 2025, this alternative vision gained traction through three developments:

  • AI agents: These tools organise thinking and data in a subject-specific way. For example, our AI agents for diplomacy, while running on smaller models, outperform larger generic LLMs in domain-specific tasks (see the sketch after this list).
  • Reinforcement learning: AI systems are increasingly trained in real-world environments, anchoring their performance in local experience and expertise.
  • DeepSeek’s breakthrough: DeepSeek’s model, developed with significantly fewer resources, matched or surpassed the performance of much larger models—shattering the ‘scaling law’ myth.
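
To make the ‘smaller can be smarter’ idea concrete, here is a minimal sketch of a domain-specific agent: a small model grounded in a curated knowledge base. The class name, retrieval logic, and example data are illustrative assumptions, not Diplo’s actual implementation.

```python
# Minimal sketch of a domain-specific AI agent (illustrative, not Diplo's code).
# The idea: a small model grounded in curated domain knowledge can beat a
# larger generic model on in-domain questions.
from dataclasses import dataclass


@dataclass
class DomainAgent:
    knowledge_base: dict  # topic -> curated, subject-specific notes

    def retrieve(self, query: str) -> str:
        """Naive keyword retrieval; a real system would use an embedding index."""
        hits = [text for topic, text in self.knowledge_base.items()
                if topic in query.lower()]
        return "\n".join(hits) or "No domain notes found."

    def build_prompt(self, query: str) -> str:
        # Grounding the model in retrieved context is what organises
        # 'thinking and data in a subject-specific way'.
        return f"Context:\n{self.retrieve(query)}\n\nQuestion: {query}"


agent = DomainAgent(knowledge_base={
    "consular": "Consular relations are governed by the 1963 Vienna Convention.",
})
print(agent.build_prompt("What rules govern consular access?"))
```

The assembled prompt would then be sent to a small local model; the domain grounding, not model size, does the heavy lifting.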

Despite this shift, inertia remains. Investment continues in AI megaprojects—AI factories, massive data centres—but the rethinking has begun. Nvidia saw a 37% drop in its share price in 2025, notably following DeepSeek’s rise.

Samsung reported a slowdown in AI chip sales. Microsoft is slowing its expansion of AI data centres.

In parallel, companies and governments are embracing smaller, open-source models like Llama, Mistral, and DeepSeek. They now face a critical strategic choice:

  • Invest in large-scale, top-down systems that face physical and economic scaling limits, or
  • Embrace smarter, leaner AI built bottom-up through training, fine-tuning, and local deployment (a minimal fine-tuning sketch follows this list).
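
As a rough illustration of the bottom-up path, the sketch below fine-tunes a small open-source model with LoRA adapters using the Hugging Face transformers, peft, and datasets libraries. The model name, data file, and hyperparameters are assumptions chosen for illustration, not a prescribed recipe.

```python
# Sketch: local fine-tuning of a small open model with LoRA adapters.
# Model name, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # any small open model (Llama, DeepSeek, ...)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of billions of base
# weights -- the step that makes local, low-cost adaptation feasible.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```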

From existential to existing risks in AI governance

AI’s technological shift is reshaping AI governance. The focus is moving from the long-term existential threats that dominated the 2023 agenda to tangible, near-term risks such as job displacement, educational disruption, and privacy threats.

This shift was clearly reflected at the AI Action Summit in Paris, which broke from the safety-heavy narrative of the 2023 Bletchley Park summit. US Vice President J.D. Vance underscored the change, criticising the preoccupation with safety, and AI safety experts were excluded from the US delegation.

South Korea, once a key player in the AI safety camp, is now under pressure to broaden its focus. The EU, although still anchored in the AI safety framework via the EU AI Act, is also adjusting. President Macron has called for a simplification of AI rules. The AI liability directive has been delayed, and copyright rules have been clarified, indicating a shift toward a more balanced approach.

This evolution raises a key question: Where should AI be regulated?

To answer, we can use the AI regulation pyramid:

[Figure: the AI regulation pyramid]
  • The lower you regulate (hardware/algorithms), the more specific and complex new rules are required.
  • The higher you go (use and impact), the more existing laws (IP, human rights, trade) can apply.

Whenever there is a call for new AI regulation, we should ask: Can this be addressed through existing legal frameworks?


Copyright and data protection challenges in AI developments

As AI becomes more widespread and commodified, it is shedding its aura of mystery and entering domains governed by law, ethics, and policy. Two major limitations are emerging that will increasingly shape the pace and direction of AI development: copyright and data protection.

Copyright

The training of AI models on large corpora of online content—often scraped without clear licensing—has led to mounting legal scrutiny and ethical debates.

As court cases multiply, the AI industry faces increasing pressure to establish clearer licensing practices, potentially shifting toward licensed datasets or synthetic data to mitigate legal exposure.
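
One practical form such licensing practices could take is licence-aware dataset curation, sketched below. The ‘licence’ metadata field and the set of accepted licences are assumptions for illustration.

```python
# Sketch: licence-aware filtering of a training corpus.
# The 'licence' metadata field and the allow-list are illustrative assumptions.
ALLOWED_LICENCES = {"cc0", "cc-by", "mit", "explicitly-licensed"}

def filter_by_licence(documents):
    """Keep only documents whose metadata declares an accepted licence;
    scraped content of unknown provenance is excluded from training."""
    for doc in documents:
        if doc.get("licence", "").lower() in ALLOWED_LICENCES:
            yield doc

corpus = [
    {"text": "An openly licensed article.", "licence": "CC-BY"},
    {"text": "A scraped page of unknown status.", "licence": "unknown"},
]
print([d["text"] for d in filter_by_licence(corpus)])
# -> ['An openly licensed article.']
```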

Data protection

Alongside copyright, personal data used in training AI models has triggered regulatory concern, especially under robust privacy regimes like the EU’s GDPR and emerging global standards.

  • AI developers must now address whether personally identifiable information (PII) was inadvertently used in training sets. The lack of transparency in data pipelines raises questions about consent, anonymisation, and data deletion rights.
  • Governments and data protection authorities are beginning to investigate whether generative AI models comply with data minimisation, purpose limitation, and user rights to access or delete personal data.
  • The challenge is further amplified by model opacity—it’s often impossible to identify exactly what data a model has ‘memorised’ or retained, making compliance with deletion requests technically and legally ambiguous.

These issues will push AI developers to rethink data governance strategies, adopt more transparent training protocols, and possibly move toward federated learning and other privacy-preserving techniques.
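
As one example of a more transparent training protocol, a pre-training PII scrub could look like the sketch below. The regex patterns are simplistic, illustrative assumptions; production systems rely on far more robust detectors.

```python
# Sketch: scrubbing obvious PII from text before it enters a training set.
# The regex patterns are simplistic, illustrative assumptions.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub("Contact Jane at jane.doe@example.org or +41 22 730 0000."))
# -> 'Contact Jane at [EMAIL] or [PHONE].'
```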

Position of the main actors: Q1 2025

USA: Recalibrating AI strategy

Under President Trump, the USA has begun a significant shift in its AI strategy—from a Biden-era focus on AI safety toward a more development-centric agenda.

On his first day in office, President Trump revoked the Biden administration’s Executive Order 14110, widely regarded as the cornerstone of the US AI safety framework. Three days later, he issued a new executive order initiating a public consultation to inform a new AI Action Plan, with a mandate to support US leadership in AI. The plan is expected to be finalised within 180 days.

By 15 March, 8,755 public comments had been submitted, including from major tech companies, offering a glimpse into the evolving industry-government dynamics. Three notable corporate positions stand out:

OpenAI adopts a highly ideological stance, framing its submission around the global contest between freedom and authoritarianism. It proposes a three-tiered international AI framework:

  • Deep cooperation with democratic allies;
  • Exclusion of adversaries, especially China; and
  • Voluntary federal regulation, flexible copyright rules that permit fair use of training data, and strict export controls to secure the global dominance of ‘democratic AI’.

Google takes a more pragmatic, pro-business approach, emphasising competitiveness and regulatory harmonisation. It supports international cooperation through forums like the OECD and ISO while warning against the dangers of rule fragmentation across countries and US states. Without invoking ideological divides, Google stresses the need for open data access, interoperable regulation, and balanced policies that protect privacy and IP without stifling innovation.

Anthropic focuses on technical safety and national security. It urges the US government to treat advanced AI as a national strategic asset. While vague on global governance, Anthropic calls for tighter collaboration with key allies like the UK—particularly through institutions like the AI Safety Institute—and emphasises the need to prevent adversarial misuse of AI.

China: A new wave of AI platforms

The beginning of 2025 saw a visible surge of new Chinese AI platforms, led by DeepSeek. Although their emergence appeared sudden, China’s LLM ecosystem has been growing rapidly over the past few years, with new models emerging almost daily. Since January, platforms from Baidu, Alibaba, and Manus, among others, have gained global visibility.

This new wave marks three important trends:

  • Necessity driving innovation: Due to US export controls on advanced Nvidia chips, Chinese developers have innovated with smaller, more efficient systems. DeepSeek’s efficiency-focused engineering, built around a mixture-of-experts design, is one such breakthrough, delivering powerful performance with limited resources.
  • Open-source strategy: In a departure from previous practice, many leading Chinese AI models are now released as fully open-source, allowing local adaptation and customisation. This shift may reflect both a technical and geopolitical strategy to broaden adoption globally.
  • Diversification beyond LLMs: In addition to LLMs, Chinese developers are advancing in image, video, and multi-modal AI, reflecting a broader ambition across the generative AI landscape.

EU: In search of a strategy

The EU lags behind the US and China in the AI race and actively seeks a more coherent strategy. Thus far, its efforts have largely mirrored the dominant narrative: scaling up computing infrastructure and building big models.

On the regulatory front, the EU is showing signs of adaptation: the AI liability directive has been delayed, copyright rules have been clarified, and President Macron has called for simplifying AI rules.

Despite these steps, the EU is still grappling with the challenge of balancing regulatory leadership with technological competitiveness, a tension that will define its AI strategy for the remainder of 2025.

For a detailed survey of quarterly developments, please consult the Monthly Newsletters for January, February, and March.

For more information on these topics, visit diplomacy.edu.