Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (Regulation 2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazil exempted from WhatsApp’s rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by Brazil’s competition authority, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day wind-down period starting in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on systems designed for business messaging rather than for serving as an open distribution platform for AI services.

Canada turns to AI to parse feedback on federal AI strategy consultation

Canada’s Innovation, Science and Economic Development (ISED) department saw an overwhelming volume of comments on its national AI strategy consultation, prompting officials to use AI tools to analyse and organise responses from citizens, organisations and stakeholders.

The consultation was part of a broader effort to shape Canada’s approach to AI governance, regulation and adoption, with the government seeking input on how to balance innovation, competitiveness and responsible AI development.

Analysts and advocates have highlighted Canadians’ demand for strong oversight and transparency, along with protections covering privacy and data, misinformation, and the ethical use of AI.

Public interest groups have urged that rights, privacy and sustainability be central pillars of the AI strategy rather than secondary considerations, and recommended risk-based, people-centred regulations similar to frameworks adopted in other jurisdictions.

The use of AI to process feedback illustrates both the scale of engagement and the government’s willingness to employ the very technology it seeks to govern in drafting its policy.

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification apps are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving legitimate creative uses.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation to monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.

Wikipedia marks 25 years with new global tech partnerships

Wikipedia marked its 25th anniversary by showcasing the rapid expansion of Wikimedia Enterprise and its growing tech partnerships. The milestone reflects Wikipedia’s evolution into one of the most trusted and widely used knowledge sources in the digital economy.

Amazon, Meta, Microsoft, Mistral AI, and Perplexity have joined the partner roster for the first time, alongside Google, Ecosia, and several other companies already working with Wikimedia Enterprise.

These organisations integrate human-curated Wikipedia content into search engines, AI models, voice assistants, and data platforms, helping deliver verified knowledge to billions of users worldwide.

Wikipedia remains one of the top ten most visited websites globally and the only one in that group operated by a non-profit organisation. With over 65 million articles in 300+ languages, the platform is a key dataset for training large language models.

Wikimedia Enterprise provides structured, high-speed access to this content through on-demand, snapshot, and real-time APIs, allowing companies to use Wikipedia data at scale while supporting its long-term sustainability.

As Wikipedia continues to expand into new languages and subject areas, its value for AI development, search, and specialised knowledge applications is expected to grow further.

New TranslateGemma models support 55 languages efficiently

A new suite of open translation models, TranslateGemma, has been launched, bringing advanced multilingual capabilities to users worldwide. Built on the Gemma 3 architecture, the models support 55 languages and come in 4B, 12B, and 27B parameter sizes.

The release aims to make high-quality translation accessible across devices without compromising efficiency.

TranslateGemma delivers impressive performance gains, with the 12B model surpassing the 27B Gemma 3 baseline on WMT24++ benchmarks. The models achieve higher accuracy while requiring fewer parameters, enabling faster translations with lower latency.

The 4B model also performs on par with larger models, making it ideal for mobile deployment.

The development combines supervised fine-tuning on diverse parallel datasets with reinforcement learning guided by advanced metrics. TranslateGemma performs well in high- and low-resource languages and supports accurate text translation within images.

Designed for flexible deployment, the models cater to mobile devices, consumer laptops, and cloud environments. Researchers and developers can use TranslateGemma to build customised translation solutions and improve coverage for low-resource languages.

Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection instead of a tool that genuinely supports learning and wellbeing.

X restricts Grok image editing after global backlash

Elon Musk’s X has limited the image editing functions of its Grok AI tool after criticism over the creation of sexualised images of real people.

The platform said technological safeguards have been introduced to block such content in regions where it is illegal, following growing concern from governments and regulators.

UK officials described the move as a positive step, although regulatory scrutiny remains ongoing.

Authorities are examining whether X complied with existing laws, while similar investigations have been launched in the US amid broader concerns over the misuse of AI-generated imagery.

International pressure has continued to build, with some countries banning Grok entirely instead of waiting for platform-led restrictions.

Policy experts have welcomed stronger controls but questioned how effectively X can identify real individuals and enforce its updated rules across different jurisdictions.

Winnipeg schools embrace AI as classroom learning tool

At General Wolfe School and other Winnipeg classrooms, students are using AI tools to help with tasks such as translating language and understanding complex terms, with teachers guiding them on how to verify AI-generated information against reliable sources.

Teachers are cautious but optimistic, developing a flexible framework that prioritises critical thinking and human judgement alongside AI use, rather than imposing rigid policies as the technology evolves.

Educators in the Winnipeg School Division are adapting teaching methods to incorporate AI while discouraging over-reliance, stressing that students should use AI as an aid rather than a substitute for learning.

This reflects broader discussions in education about how to balance innovation with foundational skills as AI becomes more commonplace in school environments.
