French President Emmanuel Macron told the AI Impact Summit in New Delhi that Europe would remain a safe space for AI innovation and investment, and that the European Union would continue shaping global AI rules alongside partners such as India.
Macron pointed to the EU AI Act, adopted in 2024, as evidence that Europe can regulate emerging technologies such as AI while encouraging growth. He claimed that oversight would not stifle innovation but ensure responsible development, though he offered little evidence to support this.
The French leader said that France is doubling the number of AI scientists and engineers it trains, with startups creating tens of thousands of jobs. He added that Europe aims to combine competitiveness with strong guardrails.
Macron also highlighted child protection as a G7 priority, arguing that children must be shielded from AI-driven digital abuse. Europe, he said, intends to protect society while remaining open to investment and cooperation with India.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK government has announced new measures to protect children online, giving parents clearer guidance and support. PM Keir Starmer said no platform will get a free pass, with illegal AI chatbot content targeted immediately.
New powers, to be introduced through upcoming legislation, will allow swift action following a consultation on children’s digital well-being.
Proposed measures include enforcing social media age limits, restricting harmful features like infinite scrolling, and strengthening safeguards against sharing non-consensual intimate images.
Ministers are already consulting parents, children, and civil society groups. The Department for Science, Innovation and Technology launched ‘You Won’t Know until You Ask’ to advise on safety settings, talking to children, and handling harmful content.
Charities such as NSPCC and the Molly Rose Foundation welcomed the announcement, emphasising swift action on age limits, addictive design, and AI content regulation. Children’s feedback will help shape the new rules, aiming to make the UK a global leader in online safety.
Gabon’s media regulator, the High Authority for Communication (HAC), has announced a nationwide open-ended suspension of social media, citing online content that it says is fuelling tensions and undermining social cohesion. In a statement, the HAC framed the move as a response to material it described as defamatory or hateful and, in some cases, a threat to national security, telling telecom operators and internet service providers to block access to major platforms.
The regulator pointed to what it called a rise in coordinated cyberbullying and the unauthorised sharing of personal data, saying existing moderation measures were not working and that the shutdown was necessary to stop violations of Gabon’s 2016 Communications Code.
The announcement arrives amid mounting labour pressure. Teachers began a high-profile strike in December 2025 over pay, status and working conditions, and the dispute has become one of the most visible signs of broader public-sector discontent. At the same time, the economic stakes are significant: Gabon had an estimated 850,000 active social media users in late 2025 (around a third of the population), and platforms are widely used for marketing and small-business sales.
Why does it matter?
Governments increasingly treat social media suspensions as a rapid-response tool for ‘public order’, but they also reshape information access, civic debate and commerce, especially in countries where mobile apps are a primary channel for news and income. The current announcement comes at a politically sensitive moment, since Gabon has a precedent here: during the 2023 election period, authorities shut down internet access, citing the need to counter calls for violence and misinformation. Gabon is still in transition after the August 2023 coup, and President Brice Oligui Nguema, who led the takeover, won the subsequent presidential election by a landslide in 2025, consolidating power while facing rising expectations for reform and stability.
A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.
The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.
The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.
The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.
Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, rather than by social media alone.
Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.
Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while Oakland will hear cases representing school districts.
More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.
Spain is preparing legislation to ban social media access for users under 16, with the proposal expected to be introduced within days. Prime Minister Pedro Sánchez framed the move as a child-protection measure aimed at reducing exposure to harmful online environments.
Government plans include mandatory age-verification systems for platforms, designed to serve as practical barriers rather than symbolic safeguards. Officials argue that minors face escalating risks online, including addiction, exploitation, violent content, and manipulation.
Additional provisions could hold technology executives legally accountable for unlawful or hateful content that remains online. The proposal reflects a broader regulatory shift toward platform responsibility and stricter enforcement standards.
Momentum for youth restrictions is building across Europe. France and Denmark are pursuing similar controls, while the EU Digital Services Act guidelines allow member states to define a national ‘digital majority age’.
The European Commission is also testing an age verification app, with wider deployment expected next year.
The Geneva Engage initiative, launched in 2016 by the Geneva Internet Platform under DiploFoundation with the support of the Republic and Canton of Geneva, continues to track how International Geneva connects with audiences worldwide. Through research and annual awards, it assesses how Geneva-based actors communicate on global policy issues ranging from development and human rights to health, the environment, and digital governance.
The 11th edition of the Geneva Engage Awards was held on 3 February 2026 at the World Meteorological Organization building, and it came at a moment of significant change in how people access information. Under the theme ‘Back to basics in the AI era’, the event explored how International Geneva can remain a trusted source as users increasingly rely on AI assistants rather than traditional searches, websites, and reports.
Each year, the Geneva Engage Awards recognise excellence in digital outreach across three main categories: international organisations, non-governmental organisations, and permanent representations. The evaluation focuses on how effectively these actors use digital tools to engage global audiences, build trust, and remain visible in an evolving information ecosystem.
The methodology combines quantitative analysis across three areas: social media outreach, web relevancy, and web accessibility. Performance is measured using engagement data from social media platforms, the visibility and relevance of web content in global search results, and accessibility standards that assess how usable and inclusive websites are for diverse audiences.
Together, this year’s results highlight how digital trust, accessibility, and relevance are becoming central to diplomacy in an AI-driven information landscape.
Swiss technology and privacy expert Anna Zeiter is leading the development of W Social, a new European-built social media network designed as an alternative to X. The project aims to reduce reliance on US tech and strengthen European digital sovereignty.
W Social will require users to verify their identity and provide a photo to ensure genuine human accounts, tackling fake profiles and bot-driven disinformation that critics link to existing platforms. Zeiter said the name ‘W’ stands for ‘We’, as well as for values and verification.
The platform’s infrastructure will be hosted in Europe under strict EU data protection laws, with decentralised storage and offices planned in Berlin and Paris. Early support comes from European political and tech figures, signalling interest beyond Silicon Valley.
W Social could launch a beta version as early as February, with broader public access planned by year-end. Backers hope the network will foster more positive dialogue and provide a European alternative to US-based social media influence.
The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.
The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.
Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.
The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.
Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.
Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.
The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.
A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.
The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.
Researchers based their estimates on a random sample of 20,000 images, extrapolating the results across the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing to be under 18.
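The estimate rests on simple proportion scaling: classify a random sample, then scale the observed rates up to the full set of generated images. A minimal sketch, assuming hypothetical per-sample tallies (the researchers’ actual sample counts are not given here; only the 20,000 sample size and the ~4.6 million total are from the reported figures):

```python
# Proportion-based extrapolation from a random sample to a full population.
# The sample tallies below are illustrative assumptions, not the study's data.

def extrapolate(sample_size: int, sample_hits: int, population: int) -> int:
    """Scale the rate observed in a random sample up to the full population.

    Integer arithmetic keeps the result exact (no float rounding).
    """
    return sample_hits * population // sample_size

TOTAL_IMAGES = 4_600_000   # estimated images generated during the study period
SAMPLE_SIZE = 20_000       # randomly sampled and reviewed images

# Hypothetical tallies chosen to roughly match the reported estimates:
sexualised_in_sample = 13_043  # ~65% of the sample
minors_in_sample = 100         # 0.5% of the sample

print(extrapolate(SAMPLE_SIZE, sexualised_in_sample, TOTAL_IMAGES))  # 2999890 (~3 million)
print(extrapolate(SAMPLE_SIZE, minors_in_sample, TOTAL_IMAGES))      # 23000
```

Such extrapolations carry sampling error, which is one reason the reported totals are framed as estimates.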
Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.
The House of Lords, the upper house of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.
Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.
Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and limit positive benefits, urging more vigorous enforcement of existing safety rules.
The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.