US lawsuits target social media platforms for deliberate child engagement designs

A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.

The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.

The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.

The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.

Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, rather than social media alone.

Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.

Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while a court in Oakland will hear cases brought by school districts.

More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writing as thinking in the age of AI

In his article, Richard Gunderman argues that writing is not merely a way to present ideas but a core human activity through which people think, reflect and form meaning.

He contends that when AI systems generate text on behalf of users, they risk replacing this cognitive process with automated output, weakening the connection between thought and expression.

According to the piece, writing serves as a tool for reasoning, emotional processing and moral judgment. Offloading it to AI can diminish originality, flatten individual voice and encourage passive consumption of machine-produced ideas.

Gunderman warns that this shift could lead to intellectual dependency, where people rely on AI to structure arguments and articulate positions rather than developing those skills themselves.

The article also raises ethical concerns about authenticity and responsibility. If AI produces large portions of written work, it becomes unclear who is accountable for the ideas expressed. Gunderman suggests that overreliance on AI writing tools may undermine trust in communication and blur the line between human and machine authorship.

Overall, the piece calls for a balanced approach: AI may assist with editing or idea generation, but the act of writing itself should remain fundamentally human, as it is central to critical thinking, identity and social responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and only adults will be allowed to speak on community stages, a feature previously shared with teens.
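The logic described above can be sketched as a simple policy function: every account starts with the strictest settings, and only a confirmed adult age group relaxes them. This is a minimal illustration, assuming hypothetical names; it is not Discord's actual API.

```python
# Hypothetical sketch of a "teen-by-default" settings policy.
# All names here are illustrative, not Discord's real interfaces.
from dataclasses import dataclass

ADULT, TEEN, UNVERIFIED = "adult", "teen", "unverified"

@dataclass
class SafetySettings:
    blur_sensitive_media: bool    # sensitive media stays blurred
    age_gated_access: bool        # access to age-restricted communities
    separate_unknown_dms: bool    # filter message requests from strangers
    can_speak_on_stage: bool      # speaking on community stages

def settings_for(age_group: str) -> SafetySettings:
    """Teen-appropriate defaults apply unless the user is a confirmed adult."""
    is_adult = age_group == ADULT
    return SafetySettings(
        blur_sensitive_media=not is_adult,
        age_gated_access=is_adult,
        separate_unknown_dms=not is_adult,
        can_speak_on_stage=is_adult,
    )
```

The key design choice the article describes is the default direction: an unverified user is treated like a teen, so failing to verify can only tighten, never loosen, the restrictions.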

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How early internet choices shaped today’s AI

Two decisions taken on the same day in February 1996 continue to shape how the internet, and now AI, is governed today. That is the central argument of Jovan Kurbalija’s blog ‘Thirty years of Original Sin of digital and AI governance,’ which traces how early legal and ideological choices created a lasting gap between technological power and public accountability.

The first moment unfolded in Davos, where John Perry Barlow published his Declaration of the Independence of Cyberspace, portraying the internet as a realm beyond the reach of governments and existing laws. According to Kurbalija, this vision helped popularise the idea that digital space was fundamentally separate from the physical world, a powerful narrative that encouraged the belief that technology should evolve faster than, and largely outside of, politics and law.

In reality, the blog argues, there is no such thing as a stateless cyberspace. Every online action relies on physical infrastructure, data centres, and networks that exist within national jurisdictions. Treating the internet as a lawless domain, Kurbalija suggests, was less a triumph of freedom than a misconception that sidelined long-standing legal and ethical traditions.

The second event happened the same day in Washington, D.C., when the United States enacted the Communications Decency Act. Hidden within it was Section 230, a provision that granted internet platforms broad immunity from liability for the content they host. While originally designed to protect a young industry, this legal shield remains in place even as technology companies have grown into trillion-dollar corporations.

Kurbalija notes that the myth of a separate cyberspace and the legal immunity of platforms reinforced each other. The idea of a ‘new world’ helped justify why old legal principles should not apply, despite early warnings, including from US judge Frank Easterbrook, that existing laws were sufficient to regulate new technologies by focusing on human relationships rather than technical tools.

Today, this unresolved legacy has expanded into the realm of AI. AI companies, the blog argues, benefit from the same logic of non-liability, even as their systems can amplify harm at a scale comparable to, or even greater than, that of other heavily regulated industries.

Kurbalija concludes that addressing AI’s societal impact requires ending this era of legal exceptionalism and restoring a basic principle that those who create, deploy, and profit from technology must also be accountable for its consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Czechia weighs under-15 social media ban as government debate intensifies

A ban on social media use for under-15s is being weighed in Czechia, with government officials suggesting the measure could be introduced before the end of the year.

Prime Minister Andrej Babiš has voiced strong support and argues that experts point to potential harm linked to early social media exposure.

France recently enacted an under-15 restriction, and a growing number of European countries are exploring similar limits rather than relying solely on parental guidance.

The discussion is part of a broader debate about children’s digital habits, with Czech officials also considering a ban on mobile phones in schools. Slovakia has already adopted comparable rules, giving Czech ministers another model to study as they work on their own proposals.

Not all political voices agree on the direction of travel. Some warn that strict limits could undermine privacy rights or diminish online anonymity, while others argue that educational initiatives would be more effective than outright prohibition.

UNICEF has cautioned that removing access entirely may harm children who rely on online platforms for learning and social connection.

Implementing a nationwide age restriction poses practical and political challenges. Czechia's government itself relies heavily on social media to reach citizens, complicating attempts to restrict access for younger users.

Age verification, fair oversight and consistent enforcement remain open questions as ministers continue consultations with experts and service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart policing project halted by Greek data protection authority

Greece’s data protection authority has warned against activating a new ‘smart policing’ system planned by the Hellenic Police. The ruling said biometric identity checks carried out on the street would breach Greek data protection law.

The system would allow police patrols to use portable devices to scan fingerprints and facial images during spot checks. Regulators said Greek law lacks a clear legal basis for such biometric processing.

The authority said the existing rules cited by the Hellenic Police apply only to suspects or detainees and do not cover modern biometric technologies, leaving Greece at risk of unlawful processing if the system enters full operation.

The project, which received around four million euros in EU funding, has drawn criticism in the past. Regulators said deployment must wait until new legislation explicitly authorises police use of biometrics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dubai hosts launch of AI tools for university students

The UAE Ministry of Higher Education and Scientific Research has partnered with Microsoft to develop AI agents that help university students find jobs. The initiative was announced during a major policy gathering in Dubai.

The collaboration will use Microsoft Azure to build prototype AI agents supporting personalised learning and career navigation. Officials said the tools are designed to align higher education with labour market needs in the UAE.

Four AI agents are being developed, covering lifelong skills planning, personalised learning, course co-creation and research alignment. Dubai remains central to the project as a hub for higher education innovation.

Officials said the partnership reflects national priorities around innovation and a knowledge-based economy, while Microsoft said Dubai offers an ideal environment to scale AI-driven education tools across the UAE.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Slovenia plans social media ban for children under 15

Slovenia is the latest of several countries moving to ban access to social media platforms for children under the age of 15, as the government prepares draft legislation aimed at protecting minors online.

Deputy Prime Minister Matej Arčon said the proposal was initiated by the Education Ministry and would be developed with input from professionals.

The planned law would apply to major social networks where user-generated content is shared, including TikTok, Snapchat and Instagram. Arčon said the initiative reflects growing international concern over the impact of social media on children’s mental health, privacy and exposure to addictive design features.

Slovenia’s move follows similar debates and proposals across Europe and beyond. Countries such as Italy, France, Spain, the UK, Greece and Austria have considered restrictions, while Australia has already introduced a nationwide minimum age for social media use.

Spain’s prime minister recently defended proposed limits, arguing that technology companies should not influence democratic decision-making.

Critics of such bans warn of potential unintended consequences. Telegram founder Pavel Durov has argued that age-based restrictions could lead to broader data collection and increased state control over online content.

Despite these concerns, Slovenia’s government appears determined to proceed, positioning the measure as part of a broader effort to strengthen child protection in the digital space.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU split widens over ban on AI nudification apps

European lawmakers remain divided over whether AI tools that generate non-consensual sexual images should face an explicit ban in EU legislation.

The split emerged as debate intensified over the AI simplification package, which is moving through Parliament and the Council rather than remaining confined to earlier negotiations.

Concerns escalated after Grok was used to create images that digitally undressed women and children.

EU regulators responded by launching an investigation under the Digital Services Act, and the Commission described the behaviour as illegal under existing European rules. Several lawmakers argue that the AI Act should name nudification apps directly instead of relying on broader legal provisions.

Lead MEPs did not include a ban in their initial draft of the Parliament’s position, prompting other groups to consider adding amendments. Negotiations continue as parties explore how such a restriction could be framed without creating inconsistencies within the broader AI framework.

The Commission appears open to strengthening the law and has hinted that the AI omnibus could be an appropriate moment to act. Lawmakers now have a limited time to decide whether an explicit prohibition can secure political agreement before the amendment deadline passes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!