New AI accountability toolkit unveiled by Amnesty International

Amnesty International has introduced a toolkit to help investigators, activists, and rights defenders hold governments and corporations accountable for harms caused by AI and automated decision-making systems. The resource draws on investigations across Europe, India, and the United States and focuses on public sector uses in welfare, policing, healthcare, and education.

The toolkit offers practical guidance for researching and challenging opaque algorithmic systems that often produce bias, exclusion, and human rights violations rather than improving public services. It emphasises collaboration with impacted communities, journalists, and civil society organisations to uncover discriminatory practices.

One key case study highlights Denmark’s AI-powered welfare system, which risks discriminating against disabled individuals, migrants, and low-income groups while enabling mass surveillance. Amnesty International underlines human rights law as a vital component of AI accountability, addressing gaps left by conventional ethical audits and responsible AI frameworks.

With growing state and corporate investments in AI, Amnesty International stresses the urgent need to democratise knowledge and empower communities to demand accountability. The toolkit equips civil society, journalists, and affected individuals with the strategies and resources to challenge abusive AI systems and protect fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. At the same time, Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless, with cases of younger teenagers passing facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.

G7 ministers meet in Montreal to boost industrial cooperation

Canada has opened the G7 Industry, Digital and Technology Ministers’ Meeting in Montreal, bringing together ministers, industry leaders, and international delegates to address shared industrial and technological challenges.

The meeting is being led by Industry Minister Melanie Joly and AI and Digital Innovation Minister Evan Solomon, with discussions centred on strengthening supply chains, accelerating innovation, and boosting industrial competitiveness across advanced economies.

Talks will focus on building resilient economies, expanding trusted digital infrastructure, and supporting growth while aligning industrial policy with economic security and national security priorities shared among G7 members.

The agenda builds on outcomes from the recent G7 leaders’ summit in Kananaskis, Canada, including commitments on quantum technologies, critical minerals cooperation, and a shared statement on AI and prosperity.

Canadian officials said closer coordination among trusted partners is essential amid global uncertainty and rapid technological change, positioning innovation-driven industry as a long-term foundation for economic growth, productivity, and shared prosperity.

EU AI Act changes aim to ease high-risk compliance pressure

The European Commission has proposed a series of amendments to the EU AI Act to ensure a timely, smooth, and proportionate rollout of the bloc’s landmark AI rules.

Set out in the Digital Omnibus on AI published in November, the changes would delay some of the most demanding obligations of the AI Act, particularly for high-risk AI systems, linking compliance deadlines to the availability of supporting standards and guidance.

The proposal also introduces new grace periods for certain transparency requirements, especially for generative AI and deepfake systems, while leaving existing prohibitions on manipulative or exploitative uses of AI fully intact.

Other revisions include removing mandatory AI literacy requirements for providers and deployers and expanding the powers of the European AI Office, allowing it to directly supervise some general-purpose AI systems and AI embedded in large online platforms.

While the package includes simplification measures designed to ease burdens on smaller firms and encourage innovation, the amendments now face a complex legislative process, adding uncertainty for companies preparing to comply with the AI Act’s long-term obligations.

UNESCO strengthens Caribbean disaster reporting

UNESCO has launched a regional programme to improve disaster reporting across the Caribbean in the wake of Hurricane Melissa and amid rising misinformation.

The initiative equips journalists and emergency communicators with advanced tools such as AI, drones and geographic information systems to support accurate and ethical communication.

The 30-hour online course, funded through UNESCO’s Media Development Programme, brings together twenty-three participants from ten Caribbean countries and territories.

Delivered in partnership with GeoTechVision/Jamaica Flying Labs, the training combines practical exercises with disaster simulations to help participants map hazards, collect aerial evidence and verify information using AI-supported methods.

Participants explore geospatial mapping, drone use and ethics while completing a capstone project in realistic scenarios. The programme aims to address gaps revealed by recent disasters and strengthen the region’s ability to deliver trusted information.

UNESCO’s wider Media in Crisis Preparedness and Response programme supports resilient media institutions, ensuring that communities receive timely and reliable information before, during and after crises.

Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Researchers say nearly 70% have tried a chatbot, reflecting growing dependence on digital tools during schoolwork and leisure time. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.

Reduced ad personalisation coming to Meta platforms in the EU

Meta has agreed to introduce a less personalised ads option for Facebook and Instagram users in the EU, as part of efforts to comply with the bloc’s Digital Markets Act and address concerns over data use and user consent.

Under the revised model, users will be able to access Meta’s social media platforms without agreeing to extensive personal data processing for fully personalised ads. Instead, they can opt for an alternative experience based on significantly reduced data inputs, resulting in more limited ad targeting.

The option is set to roll out across the EU from January 2026. It marks the first time Meta has offered users a clear choice between highly personalised advertising and a reduced-data model across its core platforms.

The change follows months of engagement between Meta and Brussels after the European Commission ruled in April that the company had breached the DMA. Regulators stated that Meta’s previous approach had failed to provide users with a genuine and effective choice over how their data was used for advertising.

The Commission said that once the option is implemented, it will gather evidence and feedback from Meta, advertisers, publishers, and other stakeholders. The goal is to assess the extent to which the new option is adopted and whether it significantly reshapes competition and data practices in the EU digital advertising market.

Google faces renewed EU scrutiny over AI competition

The European Commission has opened a formal antitrust investigation into whether AI features embedded in online search are being used to unfairly squeeze competitors in newly emerging digital markets shaped by generative AI.

The probe targets Alphabet-owned Google, focusing on allegations that the company imposes restrictive conditions on publishers and content creators while giving its own AI-driven services preferential placement over rival technologies and alternative search offerings.

Regulators are examining products such as AI Overviews and AI Mode, assessing how publisher content is reused within AI-generated summaries and whether media organisations are compensated in a clear, fair, and transparent manner.

EU competition chief Teresa Ribera said the European Commission’s action reflects a broader effort to protect online media and preserve competitive balance as artificial intelligence increasingly shapes how information is produced, discovered, and monetised.

The case adds to years of scrutiny by the European Commission over Google’s search and advertising businesses, even as the company proposes changes to its ad tech operations and continues to challenge earlier antitrust rulings.

New spyware threat alerts issued by Apple and Google

Apple and Google have issued a fresh round of cyber threat notifications, warning users worldwide they may have been targeted by sophisticated surveillance operations linked to state-backed actors.

Apple said it sent alerts on 2 December, confirming it has now notified users in more than 150 countries, though it declined to disclose how many people were affected or who was responsible.

Google followed on 3 December, announcing warnings for several hundred accounts targeted by Intellexa spyware across multiple countries in Africa, Central Asia, and the Middle East.

The Alphabet-owned company said Intellexa continues to evade restrictions despite US sanctions, highlighting persistent challenges in limiting the spread of commercial surveillance tools.

Researchers say such alerts raise costs for cyber spies by exposing victims, often triggering investigations that can lead to public scrutiny and accountability over spyware misuse.

Japan weighs easing rules on personal data use

Japan is preparing to relax restrictions on personal data use to support rapid AI development. Government sources say a draft bill aims to expand third-party access to sensitive information.

Plans include allowing medical histories and criminal records to be obtained without consent for statistical purposes. Japanese officials argue such access could accelerate research while strengthening domestic competitiveness.

New administrative fines would target companies that profit from unlawfully acquired data affecting large groups. Penalties would match any gains made through misconduct, reflecting growing concern over privacy abuses.

A government panel has reviewed the law since 2023 and intends to present reforms soon. Debate is expected to intensify, with critics warning that loosening data protections to support AI development could increase risks to individual rights.
