27 June – 4 July 2025
Dear readers,
Over the past week, Meta has dramatically escalated its push toward artificial general intelligence (AGI), launching a new AI Superintelligence Lab and aggressively hiring top talent from rival firms, most notably OpenAI. This hiring spree includes at least eight high-profile researchers who worked on OpenAI’s o3 and GPT-4o teams. The move comes with eye-popping compensation offers: reports suggest Meta dangled multi-year packages worth up to $300 million, although Meta has denied that specific figure. Even so, offers in the range of $100 million in equity are reportedly becoming the new norm in high-stakes AI recruitment.
This recruitment drive is part of a broader reorganisation under Meta’s newly created Superintelligence Labs (MSL), combining existing AI divisions—including FAIR and the LLaMA development team—into one streamlined unit aimed squarely at AGI.
OpenAI, for its part, appears to be reeling. The high-profile departures have sparked internal concerns about the organisation’s culture and long-term vision. OpenAI executives likened Meta’s tactics to ‘breaking into our home’, while others within the company point to growing discontent over the shift to a capped-profit structure and Microsoft’s increasing influence. To contain the fallout, OpenAI is now reportedly revising its compensation and equity structures to retain remaining staff.
At the same time, tensions between OpenAI and Microsoft—its largest investor and infrastructure partner—are also coming to a head. A central point of contention is a clause in their 2019 partnership agreement that terminates Microsoft’s access to OpenAI’s technology if OpenAI achieves AGI. Microsoft is pushing to remove this clause as part of a renegotiation tied to OpenAI’s shift toward a for-profit model. Microsoft reportedly wants a more substantial equity stake—potentially up to 35%—and clearer influence over strategic direction.
These developments highlight an AI ecosystem undergoing deep transformation. Meta is betting on vertical integration, elite hiring, and open-source models as the path to AI dominance. OpenAI is grappling with internal coherence, external partnerships, and its public mission. Microsoft, meanwhile, is recalibrating its relationship with OpenAI even as competitors like Google DeepMind and Anthropic continue their own ambitious pushes.
Looking ahead, the implications are far-reaching. The extraordinary compensation packages now on offer are reshaping global talent flows, drawing researchers out of academia and away from startups toward Big Tech’s AGI arms race. As Meta centralises its superintelligence efforts and OpenAI charts its future course, governance questions loom large: Who decides when AGI is achieved? Who controls access? In the coming months, expect not only fierce competition for technical breakthroughs but also intensifying negotiations over power, accountability, and the rules of the game in an increasingly AGI-centric tech world.
Last week in Geneva
The Global Digital Collaboration Conference, held on July 1–2, 2025, in Geneva, Switzerland, was a significant event focused on advancing digital identity, credentials, and trusted infrastructure. Hosted by Federal Councillor Beat Jans and co-organised with 46 organisations, including UN agencies, international bodies, standardisation organisations, and open-source groups, the conference attracted over 1,000 experts from around the world. Discussions over 22 overview sessions and almost 100 collaborative sessions highlighted the importance of aligning governance frameworks with technical design to build trustworthy, open digital systems.
Diplo (the organisation that operates the Digital Watch Observatory), C4DT-EPFL, and the Swiss Federal Department of Foreign Affairs (FDFA) co-hosted the session ‘Understanding cyber norms and the rules-based order: What are the stakeholders’ roles in protecting critical infrastructure?’ during the conference.
The UN CSTD multi-stakeholder working group on data governance at all levels kicked off its second meeting yesterday, 3 July, in Geneva. On the first day, the group started discussing (a) fundamental principles of data governance at all levels as relevant for development; (b) proposals to support interoperability between national, regional and international data systems; (c) considerations of sharing the benefits of data; and (d) options to facilitate safe, secure and trusted data flows, including cross-border data flows as relevant for development. The group continued these discussions today and began discussing the structure of its report. The modalities of the group’s work, as well as agreement on dates and the provisional agenda for future sessions, are on the agenda for this afternoon.
DW team
For the main updates, reflections and events, consult the RADAR, UPCOMING EVENTS and READING CORNER sections below.
Join us as we connect the dots, from daily updates to main weekly developments, to bring you a clear, engaging monthly snapshot of worldwide digital trends.
RADAR
Highlights from the week of 27 June – 4 July 2025
Beijing narrows tech gap with CPU and quantum launches.
Sarcoma hackers leaked 1.3TB of sensitive files after breaching Swiss contractor Radix.
Oxford researchers reveal only 32 countries have the infrastructure to build advanced AI, leaving most of Africa sidelined in the race.
Platforms could face fines for failing to remove illegal deepfake content under proposed Danish law.
Sophos says 49% of ransomware victims paid in 2025, but average ransom payments and backup use have declined.
As geopolitical tensions mount beneath the waves, the UK is racing to future-proof its defence laws against unseen threats lurking in the deep.
Experts say geopolitical hacktivism now poses serious risks to national infrastructure, calling for coordinated strategic cyber defences.
US and global firms adopt DeepSeek’s models due to cost savings, even as public sector bans remain in place.
UPCOMING EVENTS
The BRICS partnership will use the annual summit, to be held 6–7 July in Rio de Janeiro, Brazil, to address issues such as the environment, energy, science and technology, health and the inclusion of more civil society actors.
The Open-Ended Working Group (OEWG) on ICT security will hold its eleventh substantive session on 7–11 July 2025 in New York, USA. This will be the final session of the group’s work.
WSIS+20 High-Level Event 2025 will take place on 7–11 July 2025 in Geneva, Switzerland. The event will facilitate multistakeholder dialogue on achievements, key trends, and challenges since the two phases of WSIS in 2003 and 2005. In the lead-up to the WSIS+20 review by the UN General Assembly, the event will also feature discussions on progress made in the implementation of the WSIS outcomes.
The AI for Good Global Summit 2025 will be held from 8 to 11 July in Geneva, Switzerland, and will feature three events: the AI for Good Global Summit from 8 to 9 July, AI Governance Day on 10 July, and International AI Standards Day on 11 July.
A discussion between young leaders and experts will explore how AI can be governed in a way that reflects shared human values and ensures inclusive, sustainable, and ethical development.
Big week ahead—don’t miss a beat!
With WSIS+20 High-Level Event 2025, AI for Good 2025, and the final session of the OEWG all taking place next week, it’s impossible to follow everything live. That’s where we come in.
Our coverage combines expert analysis with AI-generated session reports and insights to help you stay informed, even when you can’t attend it all.
Bookmark our event pages to get real-time updates, highlights, and key takeaways as they happen.
READING CORNER
The interplay between the magical and the real is at the heart of AI governance. How do we regulate something that feels both wondrous and mundane? How do we balance its promises with its perils? To navigate this, we might turn to the literary tradition of magical realism, where the impossible coexists seamlessly with the ordinary.
At June’s G7 meeting, leaders agreed that reactive defence is no longer enough to meet AI and cybersecurity challenges. Clear rules are needed to prevent conflict, but questions remain about who will set them and what role the Global South will play in this new technological era.
Revisit the key discussions of IGF 2025 with expert reporting supported by AI-powered tools, including session reports, a visual map of discussions, and an assistant that answers your policy-related questions.
What if the very tool designed to boost your productivity is quietly dulling your mind each time you use it?
Can Wikipedia teach diplomacy a lesson? Aldo Matteucci contrasts rigid hierarchies with messy, adaptive self-organising systems, and asks which one truly gets more done.