Italy lawsuit against Meta and TikTok tests child safety rules

A first hearing has taken place at the Milan Business Court in a case brought by MOIGE, the Italian Parents’ Movement, and a group of families against Meta and TikTok over the protection of minors on social media platforms.

According to MOIGE, the class-wide injunction seeks to protect around 3.5 million Italian children aged between 7 and 14 who are allegedly active on social platforms despite age restrictions. The organisation described the case as the first such action in Europe focused on protecting minors in the digital sector.

The hearing focused on preliminary objections, including challenges by lawyers for Meta and TikTok to the jurisdiction and competence of Italian courts to rule on the companies’ conduct. MOIGE said the platforms also contested documents submitted by its legal team concerning the alleged effects of recommendation algorithms on minors.

According to MOIGE, the documents refer to concerns around variable reinforcement mechanisms, infinite scrolling and behavioural profiling allegedly designed to maximise engagement among younger users. The organisation and the families’ lawyers argue that such design features raise concerns over addictive behaviour and wider risks to children’s well-being.

MOIGE’s lawyers urged the court to proceed quickly, arguing that delays could prolong potential harm affecting minors in Italy. The case will continue with further hearings, with the court expected to set the next steps in the proceedings.

Why does it matter?

The case could become an important test of how courts assess platform responsibility for children’s safety, age restrictions and recommendation systems. If the action advances, it may contribute to wider European debates on algorithmic design, age verification, addictive platform features and whether child online safety should be treated not only as a content moderation issue, but also as a consumer protection and public health concern.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

CJEU backs fair remuneration for press publishers

The Court of Justice of the European Union (CJEU) has ruled that member states may allow press publishers to claim fair remuneration when they authorise online service providers to use their publications.

The judgement came in a case involving Meta Platforms Ireland’s challenge to a decision by the Italian Communications Regulatory Authority (AGCOM) on criteria for determining fair remuneration for online use of press publications. Meta argued that the Italian framework conflicted with EU rules on publishers’ rights under the Digital Single Market copyright directive.

The CJEU found that a fair remuneration right for publishers can be compatible with EU law if the payment is consideration for authorising online service providers to use press publications. Publishers must also be able to refuse authorisation or grant it free of charge, and online service providers cannot be required to pay for it when they do not use the publications.

The ruling also says online service providers may be required to negotiate with publishers without limiting content visibility during talks and to provide data needed to calculate remuneration. The CJEU said such obligations may restrict the freedom to conduct a business, but appear justified where they help ensure fair negotiations and support EU objectives on copyright, media pluralism, and publishers’ ability to recoup investments.

The CJEU also found that powers granted to AGCOM to set criteria, determine remuneration in the event of disagreement, ensure compliance with information obligations, and impose penalties may be permissible if they support the effective implementation of publishers’ rights.

The final assessment remains for the national court, which must verify whether the Italian legislation satisfies the conditions identified by the CJEU.

Meta tests compromise plan in EU WhatsApp AI access dispute

European Commission officials are examining whether Meta’s policy on access to WhatsApp for AI providers may raise competition concerns in the European Economic Area.

Changes to the WhatsApp Business Solution terms are at the centre of the investigation, particularly as they affect how third-party AI providers can offer services on the platform. The Commission is assessing whether the policy could limit access for competing AI services and reduce choice for users and businesses.

Messaging platforms are becoming important distribution channels for AI-powered services. As chatbots and AI assistants become more integrated into everyday communication tools, access to widely used platforms such as WhatsApp may become an important factor in competition between providers.

Commission officials have said they will examine whether Meta’s conduct complies with EU competition rules. Opening an investigation does not mean that the Commission has reached a conclusion or found an infringement.

The broader EU scrutiny of large digital platforms is increasingly focused on how access to infrastructure, services and user ecosystems is managed as AI tools become more widely adopted.

Why does it matter?

Competition questions are expanding into AI distribution channels. Messaging platforms can shape which AI services reach users and businesses at scale, making access rules an important part of the emerging AI market. The outcome could influence how major platforms design access policies for third-party AI providers while regulators seek to preserve competition and user choice.

Meta gives parents deeper insight into teen algorithms

Meta has introduced new supervision features designed to give parents greater visibility into the content shaping teenagers’ experiences on Instagram.

The updated tools allow parents and guardians to view the general topics their teens engage with through Instagram’s ‘Your Algorithm’ feature, which helps shape recommendations on Reels and Explore. Meta said parents in selected markets will soon receive notifications when teens add new interests, such as basketball, photography or musicals, helping explain why recommended content may change over time.

The company said the feature remains subject to existing teen safety protections and content restrictions already applied to Teen Accounts, including limits on certain content for users aged 13 and above and enforcement of Meta’s Community Standards.

Meta has also consolidated supervision tools for Instagram, Facebook, Messenger and Meta Horizon into a single Family Centre hub. Parents can now manage supervised accounts, safety settings and invitations across multiple apps without switching between separate platforms.

Meta said the number of US teens enrolled in supervision on Instagram has more than doubled over the past year. Additional updates planned for the coming months include aggregated activity insights, such as total time spent across Meta’s apps, to give families broader visibility into teen online habits.

Why does it matter?

The update shows how major platforms are responding to pressure for greater transparency around their recommendation systems, particularly regarding teenagers. While the tools do not reveal the full logic of Instagram’s algorithm, they give parents more visibility into the interest categories shaping teen content feeds and create another layer of oversight around personalised recommendations, screen time and online safety.

Ireland and the EU intensify DSA pressure on Meta

Coimisiún na Meán, Ireland’s media regulator, has launched two formal investigations into Meta over the design of recommender systems on Facebook and Instagram under the Digital Services Act. The investigations focus on whether users are prevented from choosing recommendation feeds that are not based on the profiling of their personal data.

Coimisiún na Meán said concerns emerged following platform supervision reviews and complaints linked to potential ‘dark patterns’ and deceptive interface designs. Regulators are examining whether users can easily access and modify non-profiled recommendation feeds as required under Article 27 of the DSA, alongside whether interface designs may improperly influence user choices under Article 25.

John Evans, Digital Services Commissioner at Coimisiún na Meán, said recommender systems can repeatedly push harmful material into user feeds, particularly affecting children and younger users. The regulator also warned that Very Large Online Platforms (VLOPs) must ensure users can exercise their rights under the DSA without manipulation or unnecessary barriers.

EU investigates Meta over under-13 access on Instagram and Facebook

At the same time, the European Commission has preliminarily found Meta in potential breach of the DSA over failures to adequately prevent children under 13 from accessing Instagram and Facebook. Regulators said Meta’s age verification and reporting systems may be ineffective, while the company’s risk assessments allegedly failed to properly address harms faced by underage users.

Why does it matter?

These investigations are critical because they could shape how the DSA is enforced across Europe, particularly in cases involving children and algorithmic recommendation systems. If regulators conclude that Meta failed to properly protect minors or used manipulative interface designs that discouraged users from choosing non-profiled feeds, the case may set a wider precedent for how large online platforms handle age assurance, user consent, privacy protections, and recommender system transparency under EU law.

New Meta age assurance system aims to prevent underage access

Meta has expanded its use of AI to strengthen age assurance and improve enforcement of underage account policies across its platforms. The systems are designed to detect users under 13 for removal and to place suspected teens into protected Teen Account settings on Instagram and Facebook in regions including the EU, Brazil, and the US.

The technology analyses a range of signals, including profile information, user activity, and other contextual indicators, to estimate age more accurately. Automated systems are also being used to support faster and more consistent review of reports related to underage use.

Visual analysis has also become part of Meta’s broader detection approach, with the company saying its systems look for general age-related indicators rather than attempting to identify specific individuals. Reporting tools have been simplified, and AI-assisted moderation is being used to improve the speed and reliability of enforcement decisions.

Alongside these enforcement measures, Meta is increasing parental engagement through notifications and guidance to encourage more accurate age reporting and safer online behaviour. The wider effort reflects growing pressure on platforms to move beyond self-declared age checks and to build stronger systems to protect younger users.

Why does it matter?

The significance of the move lies in the fact that age assurance is becoming a core platform governance issue rather than a secondary moderation tool. Meta is trying to show that large social platforms can use AI not only to recommend or personalise content, but also to enforce minimum age rules at scale. That matters because regulators are increasingly questioning whether self-declared age data is enough to protect minors online. It also points to a broader shift in which platforms are expected to combine safety obligations, automated detection, and parental tools into a more active system of child protection.

Meta explores agentic AI assistants

Meta is developing an advanced ‘agentic’ AI assistant designed to perform complex, multi-step tasks for consumers. The initiative reflects the company’s broader push to expand its AI capabilities beyond basic chat functions.

The planned assistant is intended to act more autonomously, helping users complete actions such as organising activities or managing digital tasks. Powered by a new internal model called Muse Spark, the assistant is still under development, and its rollout timeline depends on internal testing.

Meta’s strategy focuses on embedding these tools across its platforms, aiming to deepen user engagement and create more personalised digital experiences.

This marks a shift towards AI systems that can anticipate needs rather than simply respond to prompts. The move also signals intensifying competition among major technology companies in consumer AI.

Reports indicate that Meta is positioning AI as central to its future growth, with a focus on making assistants more proactive and capable within everyday digital environments in the US.

Meta faces EU Digital Services Act breach finding over under-13 access

The European Commission has preliminarily found Meta’s Instagram and Facebook in breach of the Digital Services Act over failures to adequately prevent children under 13 from accessing the platforms. The finding remains provisional and does not prejudge the outcome of the investigation.

According to the Commission, Meta’s existing measures do not effectively enforce its own minimum age requirement of 13. The preliminary findings say children below that age can still create accounts by entering false birth dates, while the company’s reporting tool for underage users is difficult to use and often does not result in effective follow-up.

The Commission also considers Meta’s risk assessment to be incomplete and arbitrary. It says the company failed to properly identify and assess the risks posed to children under 13 who access Instagram and Facebook, despite evidence from across the EU suggesting that a significant share of children in that age group use one or both services.

At this stage, the Commission says Meta must revise its risk assessment methodology and strengthen its measures to prevent, detect, and remove accounts belonging to children under 13. It also says the company must better counter and mitigate the risks those children may face and ensure a high level of privacy, safety, and security for minors.

The preliminary findings form part of formal proceedings opened against Meta in May 2024 under the DSA. The Commission says the investigation has included analysis of Meta’s risk assessment reports, internal data and documents, and the company’s responses to requests for information, with support from civil society organisations and child protection experts across the EU.

If the Commission’s preliminary view is confirmed, it may adopt a non-compliance decision and impose a fine of up to 6% of the provider’s total worldwide annual turnover, as well as periodic penalty payments. Meta now has the opportunity to reply before any final decision is taken.

Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security and Democracy, said Meta’s own terms and conditions already state that its services are not intended for children under 13, but that the company appears to be doing too little in practice to prevent them from gaining access.

Why does it matter?

The case matters because it goes to the heart of how the Digital Services Act is expected to work in practice: not only by requiring large platforms to set rules for child safety, but by obliging them to enforce those rules effectively. If the Commission’s preliminary view is confirmed, the Meta case could become an important benchmark for how the EU treats age assurance, risk assessments, and platform accountability in cases involving minors, with wider implications for other services that rely on self-declared age checks and weak reporting tools.

Meta partners with Overview and Noon Energy to power AI data centres

Meta has announced two energy partnerships to support its AI infrastructure, teaming up with Overview Energy for space solar power and Noon Energy for ultra-long-duration storage, with up to 1 GW reserved under each agreement.

Overview Energy operates satellites in geosynchronous orbit, roughly 22,000 miles above Earth, where sunlight is constant. The satellites collect solar energy and beam it to existing ground-based solar farms as low-intensity, near-infrared light, enabling around-the-clock electricity generation without requiring additional land or grid infrastructure.

Noon Energy’s technology relies on modular, reversible solid-oxide fuel cells and carbon-based storage, offering over 100 hours of energy storage. Meta has reserved up to 1 GW/100 GWh, with an initial 25 MW/2.5 GWh pilot demonstration expected by 2028. The company describes this as among the largest commitments to ultra-long-duration storage in the industry.

Both partnerships build on Meta’s existing energy portfolio, which includes more than 30 GW of contracted clean and renewable energy. The company is also one of the largest corporate purchasers of nuclear energy in the US, with 7.7 GW secured across agreements with Vistra, TerraPower, Oklo and Constellation Energy.

Overview Energy’s orbital demonstration is planned for 2028, with commercial delivery to the US grid potentially starting as early as 2030. Noon Energy’s demonstration project targets the same year, with its modular design allowing capacity to scale alongside Meta’s growing data centre footprint.

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Centre, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.
