Brazil study maps age assurance practices across 25 digital services

A new study by CGI.br and NIC.br examines how digital services in Brazil implement age assurance measures. Presented in Brasília during an event on the Digital Child and Adolescent Statute (ECA Digital), the study reviewed 25 popular online services used by children and adolescents.

The study found that most of the services analysed do not apply age checks at the point of registration, including some platforms aimed at adults. According to the release, age assurance usually appears later, when users try to access specific features such as livestreaming or monetisation.

Titled ‘Age assurance practices in 25 digital services used by children in Brazil’, the study analysed governance documents published before the ECA Digital entered into force. From 18 March, the law requires information-society services aimed at children and adolescents in Brazil, or likely to be accessed by them, to adopt effective age-assurance measures and parental supervision.

The study found that 11 of the 25 platforms relied on third-party age-assurance services, particularly social media and generative AI platforms. Official identity document submission was the most common verification method, while selfie-based checks were the most common age-estimation tool. Differences were also found between the minimum ages stated by services and those listed in app stores, and some adult-oriented platforms could still be accessed by younger users with parental consent.

Parental supervision tools were available in 15 of the 25 services, but activation was usually optional and depended on parents or guardians. Transparency also emerged as a weakness: only six services published Brazil-specific reports, and only one explained how its minimum-age policy was applied. Policies were often spread across multiple pages, averaging 22 pages per service, and around 40% of the services provided related information in other languages.

Fábio Senne, General Research Coordinator at Cetic.br | NIC.br, said: ‘One of the study’s central aims was to verify the integrity of the information made available by digital services in Brazil. It is essential that data on age protection be communicated clearly and accessibly, allowing more informed and effective parental supervision.’

Juliana Cunha, manager of the Digital Public Policy Advisory Office at CGI.br | NIC.br, said: ‘This survey was developed to support the debate on implementation of the ECA Digital and to offer a clear understanding of the current landscape. This initiative forms part of a broader set of actions by CGI.br and NIC.br aimed at providing technical evidence to support effective enforcement of the law. Our commitment is to foster a safer and more responsible digital ecosystem for children and adolescents in Brazil.’

The release says the study used as a methodological reference the OECD technical paper ‘Age assurance practices of 50 online services used by children’, published in 2025. Information was collected between 10 and 30 January 2026 from public documents made available by the services in Brazil, totalling 550 pages analysed. The event also marked the launch of TIC Kids Online Brazil 2025, a publication on internet use by children and adolescents aged 9 to 17 in Brazil.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ILO and World Bank paper says GenAI may deepen labour-market divides

A joint working paper by the International Labour Organization (ILO) and the World Bank says generative AI is likely to reshape labour markets globally, but not in the same way across countries.

The paper finds that advanced economies face greater overall exposure, while developing economies may see disruption arrive faster than productivity gains due to weaker digital infrastructure and differences in how work is organised.

Prepared as a background study for the World Development Report 2026, the paper examines labour-market exposure to GenAI across 135 countries, covering about two-thirds of global employment. According to the study, digital infrastructure and job-task composition are among the main factors shaping the distribution of risks and opportunities between advanced and developing economies.

Exposure is highest in advanced economies, especially in clerical and professional occupations. Lower-income countries are less exposed overall, but the paper says structural constraints reduce their ability to benefit from the technology. A central concern is that workers in jobs vulnerable to automation are often already online, even in poorer settings, meaning displacement could happen relatively quickly.

The paper also says many of the jobs most exposed to automation in developing economies are relatively higher-quality roles, including clerical and administrative work that has often provided a route into decent employment, especially for women and young workers. AI-driven automation, the study warns, could narrow those pathways.

Potential gains are also uneven. Many workers in jobs that could benefit from GenAI lack reliable internet access in lower-income settings. The paper adds that the same occupation title can involve different tasks depending on the country, with workers in poorer economies often carrying out fewer non-routine analytical tasks, relying less on computers, and doing more routine or manual work. Such differences reduce the scope for productivity gains from GenAI deployment.

ILO and the World Bank conclude in the paper that GenAI’s labour-market effects will depend not only on the technology itself, but also on digital connectivity, skills, task organisation, labour-market institutions, and social protection. Expanded digital access, stronger skills policies, and better labour protections are presented as necessary if the gains from GenAI are to be shared more broadly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets ambition to become AI leader

South Korea has unveiled a national strategy to become one of the world’s top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems and next-generation connectivity.

Authorities aim to expand networks by advancing 5G capabilities and preparing for the commercial deployment of 6G by 2030. Cybersecurity and data integration are also key priorities to support a stronger digital ecosystem.

The strategy includes developing talent across education levels and investing in core technologies such as semiconductors and quantum computing. AI adoption is expected to expand across sectors, including manufacturing, healthcare and agriculture.

The South Korean officials also plan to promote digital inclusion through learning centres and assistive technologies. Coordination between ministries will be strengthened to ensure effective delivery of the long-term roadmap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshitification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Inshitification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Stanford study warns about the risks of ‘sycophantic’ AI chatbots

A new study from Stanford University has raised concerns about the growing use of AI chatbots for personal advice, highlighting risks linked to a behaviour known as ‘sycophancy’, where systems validate users’ views instead of challenging them.

Researchers argue that such responses are not merely stylistic but have broader consequences for decision-making and social behaviour.

The analysis examined multiple leading models, including ChatGPT, Claude, and Gemini, and found that chatbot responses supported user perspectives far more often than human feedback.

In scenarios involving questionable or harmful actions, systems frequently endorsed behaviour that human evaluators would criticise, raising concerns about reliability in sensitive contexts such as relationships or ethical decisions.

Further experiments involving thousands of participants showed that users tend to prefer and trust sycophantic responses, increasing the likelihood of repeated use.

However, such interactions also appeared to reinforce self-centred thinking and reduce willingness to reconsider or apologise, suggesting a deeper impact on social judgement and interpersonal skills.

Researchers warn that users’ tendency to favour agreeable responses may create incentives for developers to prioritise engagement over accuracy or ethical balance.

The findings highlight the need for oversight and caution, with experts advising against relying on AI systems as substitutes for human guidance in complex personal situations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

EU investigates cyber attack targeting Commission websites

The European Commission has confirmed a cyber-attack targeting its cloud infrastructure hosting the Europa.eu services, with authorities acting swiftly to contain the incident and prevent disruption to public access.

The attack was identified on 24 March, prompting immediate mitigation measures to secure systems and maintain service continuity.

Preliminary findings indicate that some data may have been accessed from affected websites, although the full scope of the incident remains under investigation.

The Commission has begun notifying the relevant EU entities that may be affected, while continuing efforts to assess the extent of the breach and strengthen safeguards.

Officials confirmed that internal systems were not affected, limiting the overall impact of the attack.

Monitoring efforts remain ongoing, with additional security measures being implemented to protect data and infrastructure, rather than relying solely on existing defences. The Commission has also committed to analysing the incident to improve its cybersecurity capabilities.

The attack comes amid growing cyber and hybrid threats targeting European institutions and critical services.

Existing frameworks, including the NIS2 Directive and the Cyber Solidarity Act, aim to strengthen resilience and coordination across member states, supporting a more unified response to large-scale cyber incidents across the EU.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists across sectors, including car sales, food delivery and funeral services.

CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

EU and Japan strengthen digital partnership in ICT Dialogue

The European Commission and Japan have reinforced their digital cooperation through the 31st the EU–Japan ICT Dialogue held in Tokyo, focusing on advancing shared priorities in emerging technologies instead of pursuing separate national strategies.

A meeting that forms part of the broader EU–Japan Digital Partnership, which aims to deepen collaboration in key areas of the digital economy.

Discussions covered a wide range of topics, including AI, cybersecurity, and secure connectivity infrastructure such as submarine cables and Arctic networks.

Both sides also explored developments in 5G and 6G technologies, alongside emerging solutions like quantum key distribution, highlighting the importance of secure and resilient communication systems in an evolving digital landscape.

The dialogue also emphasised cooperation between the EU AI Office and AI Safety Institute, as well as joint efforts in research, innovation, and international standardisation.

These initiatives aim to align regulatory approaches and technological development rather than create fragmented global frameworks.

By strengthening collaboration across critical digital sectors, the EU and Japan seek to enhance technological resilience and promote secure, interoperable systems.

The ongoing partnership reflects a shared commitment to shaping global digital standards while supporting innovation and economic growth in both regions.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Barnsley tests AI in healthcare and skills training through the Tech Town programme

Barnsley is advancing its Tech Town programme with new AI pilots aimed to improving healthcare services and supporting local businesses. The initiative aims to demonstrate how AI can deliver practical benefits for communities and public services.

A Healthcare Living Lab will test AI tools within hospital settings to reduce waiting times, missed appointments and administrative workload. The pilot will generate evidence on improving patient care and supporting NHS staff efficiency.

Alongside this, a £800,000 AI Upskilling Challenge Fund will provide targeted training for SMEs and residents. The programme focuses on industries such as manufacturing and aims to equip individuals with the skills needed to adopt AI in their work.

The pilots also prioritise inclusion by supporting groups with limited access to technology or digital confidence. If successful, the approach could offer a scalable model for wider AI adoption across the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

India strengthens digital economy with AI and media initiatives

India has launched three initiatives to expand AI adoption, digital content creation and access to broadcasting services. The programme focuses on building an AI-skilled workforce and strengthening the country’s digital ecosystem.

A national AI skilling initiative aims to train 15,000 creators and media professionals through partnerships with Google and YouTube. The programme covers generative AI, prompting and advanced tools, supporting future-ready skills in media and creative industries.

The government also introduced MyWAVES, a platform within WAVES OTT that enables users to create, upload and share content. Designed for user-generated content, it supports multiple formats and multilingual participation across India.

Access to broadcasting has been simplified through in-built satellite tuners and an advanced programme guide in television sets. The update removes the need for set-top boxes, improving affordability and expanding reach, particularly in remote areas.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot