Tech giants race to remake social media with AI

Tech firms are racing to integrate AI into social media, reshaping online interaction while raising fresh concerns over privacy, misinformation, and copyright. Platforms like OpenAI’s Sora and Meta’s Vibes are at the centre of the push, blending generative AI tools with short-form video features similar to TikTok.

OpenAI’s Sora allows users to create lifelike videos from text prompts, but film studios say copyrighted material is appearing without permission. OpenAI has promised tighter controls and a revenue-sharing model for rights holders, while Meta has introduced invisible watermarks to identify AI content.

Safety concerns are mounting as well. Lawsuits allege that AI chatbots such as Character.AI have contributed to mental health issues among teenagers. OpenAI and Meta have added stronger restrictions for young users, including limits on mature content and tighter communication controls for minors.

Critics question whether users truly want AI-generated content dominating their feeds, describing the influx as overwhelming and confusing. Yet industry analysts say the shift could define the next era of social media, as companies compete to turn AI creativity into engagement and profit.

Tariffs and AI top the agenda for US CEOs over the next three years

US CEOs prioritise cost reduction and AI integration amid global economic uncertainty. According to KPMG’s 2025 CEO Outlook, leaders are reshaping supply chains while preparing for rapid AI transformation over the next three years.

Tariffs are a key factor influencing business strategies, with 89% of US CEOs expecting significant operational impacts. Many are adjusting sourcing models, while 86% say they will increase prices where needed. Supply chain resilience remains the top short-term pressure for decision-making.

AI agents are seen as major game-changers. Eighty-four percent of CEOs expect an AI-native company to become a leading industry player within three years, displacing incumbents. Companies are also counting on quick returns, with most expecting AI investments to pay off within one to three years.

Cybersecurity is a significant concern alongside AI integration. Forty-six percent have increased spending on digital risk resilience, focusing on fraud prevention and data privacy. CEOs recognise that AI and quantum computing introduce both opportunities and new vulnerabilities.

Workforce transformation is a clear priority. Eighty-six percent plan to embed AI agents into teams next year, while 73% focus on retaining and retraining high-potential talent. Upskilling, governance, and organisational redesign are emerging as essential strategies.

Grok to get new AI video detection tools, Musk says

Musk said Grok will analyse bitstreams for AI signatures and scan the web to verify the origins of videos. Grok added that it will detect subtle AI artefacts in compression and generation patterns that humans cannot see.
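
xAI has not said how such detection would work under the hood, but one common family of techniques looks for statistical fingerprints in the frequency spectrum of individual frames. The Python sketch below is a minimal, hypothetical illustration of that idea only; the high-frequency-energy heuristic and the threshold are assumptions for demonstration, not xAI’s method, and production detectors would combine many such signals with learned classifiers and provenance metadata.

```python
# Hypothetical illustration only: flags frames whose high-frequency spectrum
# looks unusually sparse, a pattern some generative models are reported to leave.
import numpy as np

def high_freq_ratio(frame: np.ndarray) -> float:
    """Share of spectral energy outside the low-frequency centre of the 2D FFT."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame.astype(float))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = h // 8, w // 8                      # central low-frequency window
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return float(1.0 - low / spectrum.sum())

def looks_generated(frame: np.ndarray, threshold: float = 0.15) -> bool:
    """Assumed heuristic: very little high-frequency detail -> possibly synthetic."""
    return high_freq_ratio(frame) < threshold

# Toy test frame: an exactly periodic, very smooth 256x256 pattern.
x = np.linspace(0, 8 * np.pi, 256, endpoint=False)
frame = np.add.outer(np.sin(x), np.sin(x))
print(looks_generated(frame))  # True for this overly smooth synthetic frame
```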

AI tools such as Grok Imagine and Sora are reshaping the internet by making realistic video generation accessible to anyone. The rise of deepfakes has alarmed users, who warn that high-quality fake videos could soon be indistinguishable from real footage.

A user on X expressed concern that leaders are not addressing the growing risks. Elon Musk responded, revealing that his AI company xAI is developing Grok’s ability to detect AI-generated videos and trace their origins online.

The detection features aim to rebuild trust in digital media as AI-generated content spreads. Commentators have dubbed the flood of such content ‘AI slop’, raising concerns about misinformation and consent.

Concerns about deepfakes have grown since OpenAI launched the Sora app. A surge in deepfake content prompted OpenAI to tighten restrictions on cameo mode, allowing users to opt out of specific scenarios.

Weekly #233 New rules for the digital playground

3 – 10 October 2025


HIGHLIGHT OF THE WEEK

New rules for the digital playground

A new wave of digital protectionism is taking shape around the world — this time in the name of children’s safety.

Denmark is preparing to ban social media for users under 15, joining a small but growing club of countries seeking to push minors off major platforms. The government has yet to release full details, but the move reflects a growing recognition across many countries that the costs of children’s unrestricted access to social media — from mental health issues to exposure to harmful content — are no longer acceptable.

For inspiration, Copenhagen does not have to look far. Australia has already outlined one of the most detailed blueprints for a nationwide ban on under-16s, set to take effect on 10 December 2025. The law requires platforms to verify users’ ages, remove underage accounts, and block re-registrations. Platforms will also need to communicate clearly with affected users, although questions remain, including whether deleted content will be restored when a user turns 16.

In Italy, families have launched legal action against Facebook, Instagram, and TikTok, claiming that the platforms failed to protect minors from exploitative algorithms and inappropriate content. Across the Atlantic, New York City has filed a sweeping lawsuit against major social media platforms, accusing them of deliberately designing features that addict children and harm their mental health. 

In the EU, the debate over how to protect children online is entangled with a parallel fight over privacy and surveillance. Within the EU Council, a meeting of home affairs ministers taking place next week was expected to include a vote on the long-discussed ‘Chat Control’ regulation proposal, which aims to combat the distribution of child sexual abuse material (CSAM). The proposal is no longer on the agenda, as member states do not appear to agree on the current text; the vote is reportedly postponed until December.

According to the most recent version of the draft regulation, a chat service can be required to screen users’ messages before they are sent and encrypted, but only after a decision from a judicial authority. The system would then search for images of child sexual abuse that are already in databases, while text messages themselves would not be reviewed. Although these provisions were presented as safeguards, not everyone is convinced, and concerns remain over the implications for privacy and encryption, among other issues.
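
Conceptually, the screening the draft describes is a database lookup performed before a message is encrypted: an attachment is hashed and compared against a list of already verified material. The Python sketch below illustrates only that abstract idea; real proposals rely on perceptual hashes (robust to resizing and re-encoding), judicial authorisation and other safeguards, and nothing here reflects the actual architecture under negotiation.

```python
# Conceptual sketch only: hash-database matching in the abstract.
# Real deployments use perceptual hashing and many safeguards not modelled here.
import hashlib

KNOWN_HASHES = {
    # Placeholder entry standing in for a database of known, verified material.
    "3f79bb7b435b05321651daefd374cdc681dc06faa65e374e38337b88ca046dea",
}

def sha256_hex(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def flag_before_encryption(attachment: bytes) -> bool:
    """Return True if the attachment matches an entry in the known-material database."""
    return sha256_hex(attachment) in KNOWN_HASHES

print(flag_before_encryption(b"example attachment bytes"))  # False: no match
```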

Why it matters: Together, these developments suggest that the era of self-regulation for social media may be drawing to a close. The global debate is not about whether the digital playground needs guardians, but about the final design of its safety features. As governments weigh bans, lawsuits, and surveillance mandates, they struggle to balance two imperatives: protecting children from harm while safeguarding fundamental rights to privacy and free expression.

IN OTHER NEWS THIS WEEK

Decisive actions in AI governance

The world is incessantly debating the future and governance of AI. Here are some of the latest moves in the space.

Italy has made history as the first EU member state to pass its own national AI law, going beyond the framework of the EU’s Artificial Intelligence Act. The law comes into effect on 10 October, introducing sector-specific rules across health, justice, work, and public administration. Among its provisions: transparency obligations, criminal penalties for misuse of AI (such as harmful deepfakes), new oversight bodies, and protections for minors (e.g. parental consent for users under 14).

In Brussels, the European Commission is simultaneously strategising for digital sovereignty – trying to break the EU’s dependence on foreign AI infrastructure. Its new ‘Apply AI’ strategy aims to channel €1 billion into deploying European AI platforms, integrating them into public services (health, defence, industry), and supporting local tech innovation. The Commission also launched an ‘AI in Science’ initiative to solidify Europe’s position at the forefront of AI research, through a network called RAISE. 

Meanwhile, across the Atlantic, California has signed into law a bold transparency and whistleblower regime aimed at frontier AI developers – those deploying large, compute-intensive models. Under SB 53 (the Transparency in Frontier Artificial Intelligence Act), companies must publish safety protocols, monitor risks, and disclose ‘critical safety incidents.’ Crucially, employees who believe there is a catastrophic risk (even without full proof) are shielded from retaliation. 

The bigger picture: These moves from Italy, the EU and California are part of a broader trend in which debates on AI governance are giving way to decisive action. 


Beijing tightens rare earth grip

China has tightened its grip on the global tech supply chain by significantly expanding its restrictions on rare earth exports. The new rules no longer focus solely on raw minerals — they now encompass processed materials, manufacturing equipment, and even the expertise used to refine and recycle rare earths. Exporters must seek government approval not only to ship these elements, but also for any product that contains them at a level exceeding 0.1%. Licences will be denied if the end users are involved in weapons production or military applications. Semiconductors won’t be spared either — chipmakers will now face intrusive case-by-case scrutiny, with Beijing demanding full visibility into tech specifications and end users before granting approval.

China is also sealing off human expertise. Engineers and companies in China are prohibited from participating in rare earth projects abroad unless the government explicitly permits it.

A critical moment: The timing of this development is no accident. With US-China tensions escalating and high-level talks between Presidents Trump and Xi on the horizon, Beijing is brandishing what could be described as a powerful economic weapon: monopoly over the minerals that power advanced electronics.


New COMESA platform enables instant, affordable cross-border payments

The Common Market for Eastern and Southern Africa (COMESA) Clearing House (CCH) has announced the advancement of its Digital Retail Payments Platform (DRPP) into user trials across the Malawi–Zambia corridor. This initiative aims to facilitate cross-border payments using local currencies, enhancing financial inclusion and supporting micro, small, and medium enterprises (MSMEs), women, and underserved communities. 

The trials, supported by two digital financial service providers and one foreign exchange provider, mark a significant step toward a secure and inclusive regional payment system. CCH encourages active participation from partners and stakeholders to refine and validate the platform, ensuring it delivers reliable, immediate, and affordable payments that empower individuals and businesses across the region.

The DRPP is part of CCH’s broader mission to promote economic growth and prosperity through intra-regional trade and integration. By bridging national markets and reducing barriers to trade, the platform seeks to create a financially integrated COMESA region where secure, affordable, and inclusive cross-border payments power trade, investment, and prosperity.


Superconducting breakthrough wins 2025 Nobel Prize in physics

The 2025 Nobel Prize in Physics was awarded to John Clarke, Michel H. Devoret, and John M. Martinis for demonstrating that quantum mechanical effects can occur in systems large enough to be held in the hand.

Their pioneering experiments in the mid-1980s used superconducting circuits – specifically Josephson junctions, where superconducting components are separated by an ultra-thin insulating layer. By carefully controlling these circuits, the laureates showed that they could exhibit two hallmark quantum phenomena: tunnelling, where a system escapes a trapped state by passing through an energy barrier, and energy quantisation, where it absorbs or emits only specific amounts of energy.
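
As a back-of-the-envelope illustration of that quantisation (a textbook harmonic approximation, not a description of the laureates’ actual devices): the circuit’s allowed energies are E_n = ħω(n + ½), so it can only absorb or emit energy in fixed steps of ΔE = E_{n+1} − E_n = ħω, the ‘specific amounts’ referred to above.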

Their work revealed that quantum behaviour, once thought to apply only to atomic particles, can manifest at the macroscopic scale. The discovery not only deepens understanding of fundamental physics but also underpins emerging quantum technologies, from computing to cryptography.

As Nobel Committee Chair Olle Eriksson noted, the award celebrates how century-old quantum mechanics continues to yield new insights and practical innovations shaping the digital age.

LOOKING AHEAD

80th session of the UNGA First Committee 

The 80th session of the UN General Assembly First Committee on Disarmament and International Security is taking place in New York from 8 October to 7 November 2025. The general debate on all disarmament and international security agenda items will run from Wednesday, 8 October, to Friday, 17 October. Among the topics expected to be discussed is the Global Mechanism, which is set to succeed the work of the OEWG. A briefing by the Chairperson of the Open-ended Working Group on security of and in the use of information and communications technologies 2021-2025 is scheduled for 27 October.

In parallel, UNIDIR is hosting a hybrid event on the UN Global Intergovernmental Points of Contact (POC) Directory, established following the OEWG 2021–2025, to support international cooperation on disarmament and security.

WSIS+20 review process

The written contributions to the WSIS+20 zero draft have now been published, providing a foundation for the upcoming discussions. 

UN DESA will host two days of virtual consultations to review the ‘Zero Draft’ of the WSIS+20 process. Member states and stakeholders from civil society, academia, technical communities, and international organisations will discuss digital governance, bridging digital divides, human rights, and the digital economy. Sessions are designed for inclusive, global participation, offering a platform to share experiences, provide feedback, and refine the draft ahead of the second Preparatory Meeting on 15 October. 

Informal negotiations on the draft are set to begin next week, taking place on 16–17 and 20–21 October 2025. 

Geneva Peace Week 2025

The 2025 edition of Geneva Peace Week will bring together peacebuilders, policymakers, academics, and civil society to discuss and advance peacebuilding initiatives. The programme covers a wide range of topics, including conflict prevention, humanitarian response, environmental peacebuilding, and social cohesion. Sessions this year will explore new technologies, cybersecurity, and AI, including AI-fuelled polarisation, AI for decision-making in fragile contexts, responsible AI use in peacebuilding, and digital approaches to supporting the voluntary and dignified return of displaced communities.

GESDA 2025 Summit

The GESDA 2025 Summit brings together scientists, diplomats, policymakers, and thought leaders to explore the intersection of science, technology, and diplomacy. Held at CERN in Geneva with hybrid participation, the three-day programme features sessions on emerging scientific breakthroughs, dual-use technologies, and equitable access to innovation. Participants will engage in interactive discussions, workshops, and demonstrations to examine how frontier science can inform global decision-making, support diplomacy, and address challenges such as climate change and sustainable development.



READING CORNER

Researchers question whether autonomous AI scientists are possible or even desirable.

Cities and tech

They don’t talk treaties; they talk APIs. Discover how tech ambassadors are navigating the complex relationship between cities and big tech.

Bioscience with AI

Discover how generative AI is designing synthetic proteins that outperform nature, revolutionising gene therapy and accelerating the search for new medical cures.

UPU report

The report takes stock of the current role the postal sector is playing in enabling inclusive digital transformation and provides recommendations on how to further scale its contribution.

OpenAI joins dialogue with the EU on fair and transparent AI development

The US AI company OpenAI has met with the European Commission to discuss competition in the rapidly expanding AI sector.

The meeting focused on how large technology firms such as Apple, Microsoft and Google shape access to digital markets through their operating systems, app stores and search engines.

During the discussion, OpenAI highlighted that such platforms significantly influence how users and developers engage with AI services.

The company encouraged regulators to ensure that innovation and consumer choice remain priorities as the industry grows, noting that collaboration between larger and smaller players can help maintain a balanced ecosystem.

A complicating factor is that OpenAI itself partners with several leading technology companies. Microsoft, a key investor, has integrated ChatGPT into Windows 11’s Copilot, while Apple recently added ChatGPT support to Siri as part of its Apple Intelligence features.

Therefore, OpenAI’s engagement with regulators is part of a broader dialogue about maintaining open and competitive markets while fostering cooperation across the industry.

Although the European Commission has not announced any new investigations, the meeting reflects ongoing efforts to understand how AI platforms interact within the broader digital economy.

OpenAI and other stakeholders are expected to continue contributing to discussions to ensure transparency, fairness and sustainable growth in the AI ecosystem.

Microsoft attracts tech pioneers to build the next era of AI

Some of the world’s most influential technologists (the creators of Python, Kubernetes, Google Docs, Google Lens, RSS feeds and ONNX) are now helping Microsoft shape the next era of AI.

Drawn by the company’s scale, openness to collaboration, and long-term investment in AI, they are leading projects that span infrastructure, productivity, responsible innovation and reasoning systems.

R.V. Guha, who invented RSS feeds, is developing NLWeb, a project that lets users converse directly with websites.

Brendan Burns, co-creator of Kubernetes, focuses on improving AI tools that simplify developers’ work. At the same time, Aparna Chennapragada, the mind behind Google Lens, now leads efforts to build intelligent AI agents and enhance productivity through Microsoft 365 Copilot.

Sarah Bird, who helped create the ONNX framework, leads Microsoft’s responsible AI division, ensuring that emerging systems are safe, secure and reliable.

Meanwhile, Sam Schillace, co-creator of Google Docs, explores ways AI can collaborate with people more naturally. Python’s creator, Guido van Rossum, works on systems to strengthen AI’s long-term memory across conversations.

Together, these innovators illustrate how Microsoft has become a magnet for the pioneers who defined modern computing, and they are now united in advancing the next stage of AI’s evolution.

Retailers face new pressure under California privacy law

California has entered a new era of privacy and AI enforcement after the state’s privacy regulator fined Tractor Supply USD 1.35 million for failing to honour opt-outs and ignoring Global Privacy Control signals. The case marks the largest penalty yet from the California Privacy Protection Agency.

In California, there is a widening focus on how companies manage consumer data, verification processes and third-party vendors. Regulators are now demanding that privacy signals be enforced at the technology layer, not just displayed through website banners or webforms.
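
In practice, honouring a privacy signal ‘at the technology layer’ means reading the Global Privacy Control (GPC) header on every request and suppressing sale/share processing server-side, rather than relying on a cookie banner. Below is a minimal sketch, assuming a generic Python request handler; the Sec-GPC header comes from the GPC specification, while the downstream functions are hypothetical placeholders rather than any vendor’s API.

```python
# Minimal sketch: honour the Global Privacy Control signal server-side.
from typing import Mapping

def gpc_enabled(headers: Mapping[str, str]) -> bool:
    """True when the browser sent Sec-GPC: 1, i.e. the user opted out of sale/share."""
    return headers.get("Sec-GPC", "").strip() == "1"

def record_opt_out(user_id: str) -> None:
    # Placeholder: a real system would write an auditable opt-out record here.
    print(f"opt-out recorded for {user_id}")

def load_trackers(user_id: str) -> None:
    # Placeholder: third-party advertising/analytics integrations would load here.
    print(f"loading sale/share integrations for {user_id}")

def handle_request(headers: Mapping[str, str], user_id: str) -> None:
    if gpc_enabled(headers):
        record_opt_out(user_id)   # enforce the signal at the technology layer
        return                    # and skip all sale/share processing
    load_trackers(user_id)

handle_request({"Sec-GPC": "1"}, user_id="visitor-42")  # -> opt-out recorded for visitor-42
```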

Retailers must now show active, auditable compliance, with clear privacy notices, automated data controls and stronger vendor agreements. Regulators have also warned that businesses will be held responsible for partner failures and poor oversight of cookies and tracking tools.

At the same time, California’s new AI law, SB 53, extends governance obligations to frontier AI developers, requiring transparency around safety benchmarks and misuse prevention. The measure connects AI accountability to broader data governance, reinforcing that privacy and AI oversight are now inseparable.

Executives across retail and technology are being urged to embed compliance and governance into daily operations. California’s regulators are shifting from punishing visible lapses to demanding continuous, verifiable proof of compliance across both data and AI systems.

Ant Group launches trillion-parameter AI model Ling-1T

Ant Group has unveiled its Ling AI model family, introducing Ling-1T, a trillion-parameter large language model that has been open-sourced for public use.

The Ling family now includes three main series: the Ling non-thinking models, the Ring thinking models, and the multimodal Ming models.

Ling-1T delivers state-of-the-art performance in code generation, mathematical reasoning, and logical problem-solving, achieving 70.42% accuracy on the 2025 AIME benchmark.

The model combines efficient inference with strong reasoning capabilities, marking a major advance in AI development for complex cognitive tasks.

Ant Group’s Chief Technology Officer, He Zhengyu, said the company views AGI as a public good that should benefit society.

The release of Ling-1T and the earlier Ring-1T-preview underscores Ant Group’s commitment to open, collaborative AI innovation and the development of inclusive AGI technologies.

MIT creates AI tool to build virtual worlds for robots

Researchers at MIT’s Computer Science and AI Laboratory have developed a new AI system that can build realistic virtual environments for training robots. The tool, called steerable scene generation, creates kitchens, restaurants and living rooms filled with 3D objects, in which robots can learn how to interact with the physical world.

The system uses a diffusion model guided by Monte Carlo tree search to produce scenes that follow real-world physics. Unlike traditional simulations, it can accurately position objects and avoid visual errors such as items overlapping or floating unrealistically.
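
The underlying code is not reproduced here, but the general idea of steering a diffusion sampler with a tree search can be sketched as: branch the denoising process into several candidate scenes, score each for physical plausibility, and continue from the most promising one. The toy Python skeleton below makes those assumptions explicit; denoise_step and physics_score are hypothetical stand-ins for the real model and physics checks, and a full Monte Carlo tree search would additionally back up scores through a search tree rather than greedily keeping the best branch.

```python
# Conceptual skeleton of search-guided sampling for a diffusion scene generator.
# denoise_step() and physics_score() are stand-ins, not MIT's actual code.
import random

def denoise_step(scene: list[float]) -> list[float]:
    """Stand-in for one stochastic denoising step of a diffusion model."""
    return [x + random.gauss(0, 0.1) for x in scene]

def physics_score(scene: list[float]) -> float:
    """Stand-in reward: penalise implausible layouts (here, simply large values)."""
    return -sum(abs(x) for x in scene)

def guided_generate(steps: int = 10, branches: int = 4) -> list[float]:
    scene = [0.0] * 8                      # toy latent scene representation
    for _ in range(steps):
        # Expand several candidate continuations, keep the most physically plausible.
        candidates = [denoise_step(scene) for _ in range(branches)]
        scene = max(candidates, key=physics_score)
    return scene

print(physics_score(guided_generate()))
```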

By generating millions of unique, lifelike environments, the system can dramatically increase the training data available for robotic foundation models. Robots trained in these AI settings can practise everyday actions like stacking plates or placing cutlery with greater precision.

The researchers say the technique allows robots to learn more efficiently without the cost or limits of real-world testing. Future work aims to include movable objects and internet-sourced assets to make the simulations even more dynamic and diverse.

Microsoft boosts AI leadership with NVIDIA GB300 NVL72 supercomputer

Microsoft Azure has launched the world’s first NVIDIA GB300 NVL72 supercomputing cluster, explicitly designed for OpenAI’s large-scale AI workloads.

The new NDv6 GB300 VM series integrates over 4,600 NVIDIA Blackwell Ultra GPUs, representing a significant step forward in US AI infrastructure and innovation leadership.

Each rack-scale system combines 72 GPUs and 36 Grace CPUs, offering 37 terabytes of fast memory and 1.44 exaflops of FP4 performance.

The configuration supports complex reasoning and multimodal AI systems, achieving up to five times the throughput of the previous NVIDIA Hopper architecture in MLPerf benchmarks.
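
A rough back-of-envelope reading of those figures (assuming the deployment is built entirely from identical NVL72 racks, an inference rather than a stated fact) puts the cluster at roughly 64 racks, a couple of petabytes of fast memory and around 90 FP4 exaflops in aggregate:

```python
# Back-of-envelope arithmetic from the figures quoted above; the rack count is
# an assumption (total GPUs divided by 72 per NVL72 rack), not a Microsoft figure.
total_gpus = 4_600                      # "over 4,600 NVIDIA Blackwell Ultra GPUs"
gpus_per_rack = 72
racks = total_gpus / gpus_per_rack      # ~64 racks
fast_memory_pb = racks * 37 / 1_000     # 37 TB per rack -> ~2.4 PB in total
fp4_exaflops = racks * 1.44             # 1.44 exaflops per rack -> ~92 exaflops
print(f"{racks:.0f} racks, ~{fast_memory_pb:.1f} PB fast memory, ~{fp4_exaflops:.0f} FP4 exaflops")
```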

The cluster is built on NVIDIA’s Quantum-X800 InfiniBand network, delivering 800 Gb/s of bandwidth per GPU for unified, high-speed performance.

Microsoft and NVIDIA’s long-standing collaboration has enabled a system capable of powering trillion-parameter models, positioning Azure at the forefront of the next generation of AI training and deployment.
