US lawsuits target social media platforms for deliberate child engagement designs

A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.

The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.

The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.

The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.

Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, instead of social media alone.

Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.

Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while Oakland will hear cases representing school districts.

More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance takes focus at UN security dialogue

The UN will mark the fourth International Day for the Prevention of Violent Extremism Conducive to Terrorism on 12 February 2026 with a high-level dialogue focused on AI. The event will examine how emerging technologies are reshaping both prevention strategies and extremist threats.

Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.

A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.

Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU telecom simplification at risk as Digital Networks Act adds extra admin

The EU’s ambitions to streamline telecom rules face fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators rather than easing their workload.

The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.

Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.

Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.

The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coal reserves could help Nigeria enter $650 billion AI economy

Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of the projected $650 billion AI investment by strengthening its energy supply capacity.

AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.

Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed industrial growth.

The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and community stages will be limited to adult speakers rather than shared with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singtel opens largest AI-ready data centre in Singapore

Singtel’s data centre arm Nxera has opened its largest data centre in Singapore at Tuas. The facility strengthens Singapore’s role as a regional hub for AI infrastructure.

The Tuas site offers 58MW of AI-ready capacity and is described as the country’s highest-power-density data centre. More than 90 per cent of the facility’s capacity was committed before the official launch.

Nxera said the facility is hyperconnected through direct access to international and domestic networks, and its integration with a cable landing station delivers lower latency and improved reliability.

Singtel said the Tuas development supports rising demand in Singapore for AI, cloud and high-performance computing. Nxera plans further expansion in Asia while reinforcing Singapore’s position in digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York weighs pause on data centre expansion

Lawmakers in New York have introduced a bill proposing a three-year pause on permits for new data centres. Supporters say rapid expansion linked to AI infrastructure risks straining the state’s energy systems.

Concerns focus on rising electricity demand and higher household bills as tech companies scale AI operations. Critics across the US argue that local communities bear the cost of supporting large-scale computing facilities.

The proposal has drawn backing from environmental groups and politicians who want time to set stricter rules. US Senator Bernie Sanders has also called for a nationwide halt on new data centres.

Officials in New York say the pause would allow stronger policies on grid access and fair cost sharing. The debate reflects wider US tension between economic growth driven by AI and environmental limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Shadow AI becomes a new governance challenge for European organisations

Employees are adopting generative tools at work faster than organisations can approve or secure them, giving rise to what is increasingly described as ‘shadow AI’. Unlike earlier forms of shadow IT, these tools can transform data, infer sensitive insights, and trigger automated actions beyond established controls.

For European organisations, the issue is no longer whether AI should be used, but how to regain visibility and control without undermining productivity, as shadow AI increasingly appears inside approved platforms, browser extensions, and developer tools, expanding risks beyond data leakage.

Security experts warn that blanket bans often push AI use further underground, reducing transparency and trust. Instead, guidance from EU cybersecurity bodies increasingly promotes responsible enablement through clear policies, staff awareness, and targeted technical controls.

Key mitigation measures include mapping AI use across approved and informal tools, defining safe prompt data, and offering sanctioned alternatives, with logging, least-privilege access, and approval steps becoming essential as AI acts across workflows.

With the EU AI Act introducing clearer accountability across the AI value chain, unmanaged shadow AI is also emerging as a compliance risk. As AI becomes embedded across enterprise software, organisations face growing pressure to make safe use the default rather than the exception.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens cyber defence after attack on Commission mobile systems

A cyber-attack targeting the European Commission’s central mobile infrastructure was identified on 30 January, raising concerns that staff names and mobile numbers may have been accessed.

The Commission isolated the affected system within nine hours, preventing the breach from escalating, and no compromise of mobile devices was detected.

The Commission also plans a full review of the incident to reinforce the resilience of internal systems.

Officials argue that Europe faces daily cyber and hybrid threats targeting essential services and democratic institutions, underscoring the need for stronger defensive capabilities across all levels of the EU administration.

CERT-EU continues to provide constant threat monitoring, automated alerts and rapid responses to vulnerabilities, guided by the Interinstitutional Cybersecurity Board.

These efforts support the broader legislative push to strengthen cybersecurity, including the Cybersecurity Act 2.0, which introduces a Trusted ICT Supply Chain to reduce reliance on high-risk providers.

Recent measures are complemented by the NIS2 Directive, which sets a unified legal framework for cybersecurity across 18 critical sectors, and the Cyber Solidarity Act, which enhances operational cooperation through the European Cyber Shield and the Cyber Emergency Mechanism.

Together, they aim to ensure collective readiness against large-scale cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!