Brazil study maps age assurance practices across 25 digital services

A new study by CGI.br and NIC.br examines how digital services in Brazil implement age assurance measures. Presented in Brasília during an event on the Digital Child and Adolescent Statute (ECA Digital), the study reviewed 25 popular online services used by children and adolescents.

The study found that most of the services analysed do not apply age checks at the point of registration, including some platforms aimed at adults. According to the release, age assurance usually appears later, when users try to access specific features such as livestreaming or monetisation.

Titled ‘Age assurance practices in 25 digital services used by children in Brazil’, the study analysed governance documents published before the ECA Digital entered into force. From 18 March, the law requires information-society services aimed at children and adolescents in Brazil, or likely to be accessed by them, to adopt effective age-assurance and parental-supervision measures.

The study found that 11 of the 25 platforms relied on third-party age-assurance services, particularly social media and generative AI platforms. Official identity document submission was the most common verification method, while selfie-based checks were the most common age-estimation tool. Differences were also found between the minimum ages stated by services and those listed in app stores, and some adult-oriented platforms could still be accessed by younger users with parental consent.

Parental supervision tools were available in 15 of the 25 services, but activation was usually optional and depended on parents or guardians. Transparency also emerged as a weakness: only six services published Brazil-specific reports, and only one explained how its minimum-age policy was applied. Policies were often spread across multiple pages, averaging 22 pages per service, and around 40% of the services provided related information in other languages.

Fábio Senne, General Research Coordinator at Cetic.br | NIC.br, said: ‘One of the study’s central aims was to verify the integrity of the information made available by digital services in Brazil. It is essential that data on age protection be communicated clearly and accessibly, allowing more informed and effective parental supervision.’

Juliana Cunha, manager of the Digital Public Policy Advisory Office at CGI.br | NIC.br, said: ‘This survey was developed to support the debate on implementation of the ECA Digital and to offer a clear understanding of the current landscape. This initiative forms part of a broader set of actions by CGI.br and NIC.br aimed at providing technical evidence to support effective enforcement of the law. Our commitment is to foster a safer and more responsible digital ecosystem for children and adolescents in Brazil.’

The release says the study used as a methodological reference the OECD technical paper ‘Age assurance practices of 50 online services used by children’, published in 2025. Information was collected between 10 and 30 January 2026 from public documents made available by the services in Brazil, totalling 550 pages analysed. The event also marked the launch of TIC Kids Online Brazil 2025, a publication on internet use by children and adolescents aged 9 to 17 in Brazil.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ILO and World Bank paper says GenAI may deepen labour-market divides

A joint working paper by the International Labour Organization (ILO) and the World Bank says generative AI is likely to reshape labour markets globally, but not in the same way across countries.

The paper finds that advanced economies face greater overall exposure, while developing economies may see disruption arrive faster than productivity gains due to weaker digital infrastructure and differences in how work is organised.

Prepared as a background study for the World Development Report 2026, the paper examines labour-market exposure to GenAI across 135 countries, covering about two-thirds of global employment. According to the study, digital infrastructure and job-task composition are among the main factors shaping the distribution of risks and opportunities between advanced and developing economies.

Exposure is highest in advanced economies, especially in clerical and professional occupations. Lower-income countries are less exposed overall, but the paper says structural constraints reduce their ability to benefit from the technology. A central concern is that workers in jobs vulnerable to automation are often already online, even in poorer settings, meaning displacement could happen relatively quickly.

The paper also says many of the jobs most exposed to automation in developing economies are relatively higher-quality roles, including clerical and administrative work that has often provided a route into decent employment, especially for women and young workers. AI-driven automation, the study warns, could narrow those pathways.

Potential gains are also uneven. Many workers in jobs that could benefit from GenAI lack reliable internet access in lower-income settings. The paper adds that the same occupation title can involve different tasks depending on the country, with workers in poorer economies often carrying out fewer non-routine analytical tasks, relying less on computers, and doing more routine or manual work. Such differences reduce the scope for productivity gains from GenAI deployment.

The ILO and the World Bank conclude that GenAI’s labour-market effects will depend not only on the technology itself, but also on digital connectivity, skills, task organisation, labour-market institutions, and social protection. Expanded digital access, stronger skills policies, and better labour protections are presented as necessary if the gains from GenAI are to be shared more broadly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets ambition to become AI leader

South Korea has unveiled a national strategy to become one of the world’s top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems and next-generation connectivity.

Authorities aim to expand networks by advancing 5G capabilities and preparing for the commercial deployment of 6G by 2030. Cybersecurity and data integration are also key priorities to support a stronger digital ecosystem.

The strategy includes developing talent across education levels and investing in core technologies such as semiconductors and quantum computing. AI adoption is expected to expand across sectors, including manufacturing, healthcare and agriculture.

South Korean officials also plan to promote digital inclusion through learning centres and assistive technologies. Coordination between ministries will be strengthened to ensure effective delivery of the long-term roadmap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Boston schools expand AI learning initiative

A new partnership led by the City of Boston aims to expand AI literacy across public schools, supported by funding from tech entrepreneur Paul English. The initiative brings together government, academia and industry to strengthen digital skills.

The programme will introduce AI-focused learning in high schools, alongside teacher training and the development of industry-informed curricula. Plans include creating student ambassador roles and offering access to advanced courses.

The University of Massachusetts Boston will help design educational content and provide resources through its applied AI institute. The collaboration aims to prepare students for changing job markets shaped by emerging technologies.

Officials say the effort will support responsible and ethical use of AI while opening career pathways. An advisory board of industry experts will guide the programme and connect schools with the wider technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom tightens online safety enforcement across major platforms

Enforcement of the Online Safety Act intensifies in 2026, with regulators pushing stronger age verification across social media, gaming, messaging, and adult platforms. Significant progress has been reported in the adult sector, with most major pornography services now using age assurance or restricting UK access.

Ofcom has issued new expectations for major children’s platforms, including stricter age verification, stronger protections against grooming, safer feeds, and tighter product testing. The regulator has warned that further enforcement action may follow if compliance is not met.

New obligations are also being introduced, including a requirement from April 2026 for services to report child sexual exploitation and abuse content to the National Crime Agency.

Providers are being instructed to keep risk assessments up to date and adapt to evolving regulatory guidance, including upcoming consultations and expanded reporting duties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major service disruption affects DeepSeek chatbot in China

DeepSeek’s chatbot suffered a seven-hour-plus disruption in China, prompting multiple updates as the company worked to restore full functionality. Users began reporting issues on Sunday evening, with further performance problems recorded on Monday morning.

Initial alerts appeared on monitoring platforms and DeepSeek’s own status page, which acknowledged an incident shortly after it began. Although early fixes were deployed within hours, additional disruptions followed, requiring further corrective updates before the system stabilised.

The company has not disclosed the cause of the outage, and no official comment has been provided. The extended downtime stands out for a platform known for consistent performance, having maintained a nearly 99 percent uptime record since the launch of its R1 model in 2025.

The disruption comes at a time of heightened anticipation for DeepSeek’s next major update, as speculation builds across China’s competitive AI sector, where firms continue to race to release new models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI platform from Fujitsu transforms legacy code analysis

Fujitsu has launched a generative AI service that modernises legacy IT systems by analysing source code and generating design documents. The Application Transform platform, powered by Fujitsu Kozuchi, targets complex environments such as COBOL-based enterprise systems.

The service aims to significantly reduce the time and expertise required for system documentation, cutting workloads by up to 97 percent. Fujitsu combines proprietary code analysis with Knowledge Graph-enhanced retrieval to improve accuracy and reduce missing or inconsistent outputs.

Enhanced by generative AI, the system produces structured, readable documentation while ensuring consistency across large, complex codebases. Reported improvements include higher comprehensiveness and significantly better readability compared with conventional methods.

Fujitsu plans to offer the service as SaaS in Japan from 30 March 2026, with additional capabilities such as automated code rewriting and system maintenance support expected in future updates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cryptocurrency political donations banned under new Canada bill

Canada’s Liberal government has introduced Bill C-25 to prohibit cryptocurrency and other non-cash instruments from being used as political donations. The measure covers all registered parties, candidates, leadership and nomination contests, and third-party advertisers, tightening campaign finance rules.

The proposal reverses a 2019 framework that had allowed limited crypto contributions under strict conditions, though uptake remained minimal and no major party reported receiving such donations in recent federal elections.

Authorities argue that pseudo-anonymous blockchain transactions make it difficult to verify the true source of funds, raising concerns about traceability and foreign interference risks.

Under the new rules, any prohibited donation must be returned, destroyed, or converted and forwarded to the Receiver General within 30 days. Enforcement includes fines of up to twice the illegal contribution’s value, reaching CA$25,000 for individuals and CA$100,000 for corporations.
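Read as a twice-the-value rule bounded by the statutory maximums (the assumption that CA$25,000 and CA$100,000 act as ceilings on the doubled amount is an interpretation, not stated explicitly in the release), the penalty arithmetic can be sketched as:

```python
def max_fine(contribution_cad: float, corporate: bool = False) -> float:
    """Upper bound on the fine for a prohibited donation under Bill C-25.

    Assumption (not confirmed by the bill text quoted here): the statutory
    caps of CA$25,000 (individuals) and CA$100,000 (corporations) act as
    ceilings on the fine of up to twice the contribution's value.
    """
    cap = 100_000 if corporate else 25_000
    return min(2 * contribution_cad, cap)

print(max_fine(5_000))                    # → 10000 (twice the value)
print(max_fine(20_000))                   # → 25000 (individual cap binds)
print(max_fine(60_000, corporate=True))   # → 100000 (corporate cap binds)
```

The caps bind only once a contribution exceeds half the relevant maximum; below that, the doubled value is the lower bound.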

Bill C-25 also revives provisions from the earlier Bill C-65, which collapsed in 2025 after Parliament was prorogued. The updated law aligns with UK restrictions and expands election oversight powers, including measures against deepfakes and foreign interference.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VivaCity partners with Nottingham to enhance urban transport using AI

Nottingham City Council has partnered with VivaCity to install over 200 AI-enabled transport sensors across the city. The sensors include ANPR, traffic monitoring, and Smart Signal Control capabilities.

Sensors will collect real-time, anonymous data on vehicle types, pedestrians, and cyclists to inform traffic management decisions. The first Smart Junction, where the Ring Road meets Aspley Lane, will adjust traffic lights according to current conditions.

Funding comes from the Future Transport Zones Fund, through which the Department for Transport awarded £16.7 million. Installation began in February 2023 and will finish by November 2023, with coverage across main routes.

Data from the sensors will feed into a public Data Hub alongside car park and EV charging datasets. Air quality monitors will be added near sensors to help assess correlations between road use and pollution levels.

Sensors will not function as speed cameras and will not record personal information. The technology will be upgraded over time to identify additional vehicle types such as taxis, minibuses, and mobility scooters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!