National payments system anchors Ethiopia’s digital shift

Ethiopia has launched its National Digital Payment Strategy for 2026 to 2030 alongside a new instant payments platform, marking a significant milestone in the country’s broader push towards digital transformation.

The five-year strategy sets out plans to expand payment interoperability, strengthen public trust, and encourage innovation across the financial sector, with a focus on widening adoption and reducing barriers for underserved and rural communities.

At the centre of the initiative is a national instant payments system designed to support rapid, secure transactions, including person-to-person transfers, QR payments, bulk disbursements, and selected low-value cross-border transactions.

Government officials described the shift as central to building a more inclusive, cash-lite economy, highlighting progress in digital financial access and sustained investment in core digital and payments infrastructure.

The rollout builds on the earlier Digital Ethiopia 2025 agenda and feeds into the longer-term Digital Ethiopia 2030 vision, as authorities position the country to meet rising demand for secure digital financial services across Africa.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI job interviews raise concerns among recruiters and candidates

As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools could be pushing hiring practices towards what some describe as a ‘race to the bottom’.

The rise of AI video interviews illustrates both the efficiency gains sought by companies and the frustrations candidates experience when algorithms, rather than people, become the first point of contact.

BBC journalist MaryLou Costa found this out first-hand after her AI interviewer froze mid-question. The platform provider, TestGorilla, said the malfunction affected only a small number of users, but the episode highlights the fragility of a process that companies increasingly rely on to sift through rising volumes of applications.

With vacancies down 12% year-on-year and applications per role up 65%, firms argue that AI is now essential for managing the workload. Recruitment groups such as Talent Solutions Group say automated tools help narrow the field to the fraction of applicants who advance to human interviews.

Employers are also adopting voice-based AI interviewers such as Cera’s system, Ami, which conducts screening calls and has already processed hundreds of thousands of applications. Cera claims the tool has cut recruitment costs by two-thirds and saved significant staff time. Yet jobseekers describe a dehumanising experience.

Marketing professional Jim Herrington, who applied for over 900 roles after redundancy, argues that keyword-driven filters overlook the broader qualities that define a strong candidate. He believes companies risk damaging their reputation by replacing real conversation with automated screening and warns that AI-based interviews cannot replicate human judgement, respect or empathy.

Recruiters acknowledge that AI is also transforming candidate behaviour. Some applicants now use bots to submit thousands of applications at once, further inflating volumes and prompting companies to rely even more heavily on automated filtering.

Ivee co-founder Lydia Miller says this dynamic risks creating a loop in which both sides use AI to outpace each other, pushing humans further out of the process. She warns that candidates may soon tailor their responses to satisfy algorithmic expectations, rather than communicate genuine strengths. While AI interviews can reduce stress for some neurodivergent or introverted applicants, she says existing bias in training data remains a significant risk.

Experts argue that AI should augment, not replace, human expertise. Talent consultant Annemie Ress notes that experienced recruiters draw on subtle cues and intuition that AI cannot yet match. She warns that over-filtering risks excluding strong applicants before anyone has read their CV or heard their voice.

With debates over fairness, transparency and bias now intensifying, the challenge for employers is balancing efficiency with meaningful engagement and ensuring that automated tools do not undermine the human relationships on which good recruitment depends.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend’, ‘companion’, ‘therapist’, or occasionally even a ‘partner’.

Researchers say these emotional bonds can kick-start therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi called them ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI accountability toolkit unveiled by Amnesty International

Amnesty International has introduced a toolkit to help investigators, activists, and rights defenders hold governments and corporations accountable for harms caused by AI and automated decision-making systems. The resource draws on investigations across Europe, India, and the United States and focuses on public sector uses in welfare, policing, healthcare, and education.

The toolkit offers practical guidance for researching and challenging opaque algorithmic systems that often produce bias, exclusion, and human rights violations rather than improving public services. It emphasises collaboration with impacted communities, journalists, and civil society organisations to uncover discriminatory practices.

One key case study highlights Denmark’s AI-powered welfare system, which risks discriminating against disabled individuals, migrants, and low-income groups while enabling mass surveillance. Amnesty International underlines human rights law as a vital component of AI accountability, addressing gaps left by conventional ethical audits and responsible AI frameworks.

With growing state and corporate investments in AI, Amnesty International stresses the urgent need to democratise knowledge and empower communities to demand accountability. The toolkit equips civil society, journalists, and affected individuals with the strategies and resources to challenge abusive AI systems and protect fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. At the same time, Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless: some younger teenagers have passed facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

G7 ministers meet in Montreal to boost industrial cooperation

Canada has opened the G7 Industry, Digital and Technology Ministers’ Meeting in Montreal, bringing together ministers, industry leaders, and international delegates to address shared industrial and technological challenges.

The meeting is being led by Industry Minister Mélanie Joly and AI and Digital Innovation Minister Evan Solomon, with discussions centred on strengthening supply chains, accelerating innovation, and boosting industrial competitiveness across advanced economies.

Talks will focus on building resilient economies, expanding trusted digital infrastructure, and supporting growth while aligning industrial policy with economic security and national security priorities shared among G7 members.

The agenda builds on outcomes from the recent G7 leaders’ summit in Kananaskis, Canada, including commitments on quantum technologies, critical minerals cooperation, and a shared statement on AI and prosperity.

Canadian officials said closer coordination among trusted partners is essential amid global uncertainty and rapid technological change, positioning innovation-driven industry as a long-term foundation for economic growth, productivity, and shared prosperity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act changes aim to ease high-risk compliance pressure

The European Commission has proposed a series of amendments to the EU AI Act to ensure a timely, smooth, and proportionate rollout of the bloc’s landmark AI rules.

Set out in the Digital Omnibus on AI published in November, the changes would delay some of the most demanding obligations of the AI Act, particularly for high-risk AI systems, linking compliance deadlines to the availability of supporting standards and guidance.

The proposal also introduces new grace periods for certain transparency requirements, especially for generative AI and deepfake systems, while leaving existing prohibitions on manipulative or exploitative uses of AI fully intact.

Other revisions include removing mandatory AI literacy requirements for providers and deployers, and expanding the powers of the European AI Office, allowing it to directly supervise some general-purpose AI systems and AI embedded in large online platforms.

While the package includes simplification measures designed to ease burdens on smaller firms and encourage innovation, the amendments now face a complex legislative process, adding uncertainty for companies preparing to comply with the AI Act’s long-term obligations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO strengthens Caribbean disaster reporting

UNESCO has launched a regional programme to improve disaster reporting across the Caribbean, in the wake of Hurricane Melissa and amid rising misinformation.

The initiative equips journalists and emergency communicators with advanced tools such as AI, drones and geographic information systems to support accurate and ethical communication.

The 30-hour online course, funded through UNESCO’s Media Development Programme, brings together 23 participants from 10 Caribbean countries and territories.

Delivered in partnership with GeoTechVision/Jamaica Flying Labs, the training combines practical exercises with disaster simulations to help participants map hazards, collect aerial evidence and verify information using AI-supported methods.

Participants explore geospatial mapping, drone use and ethics while completing a capstone project set in realistic disaster scenarios. The programme aims to address gaps revealed by recent disasters and strengthen the region’s ability to deliver trusted information.

UNESCO’s wider Media in Crisis Preparedness and Response programme supports resilient media institutions, ensuring that communities receive timely and reliable information before, during and after crises.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Researchers say nearly 70% have tried a chatbot, reflecting growing reliance on digital tools for schoolwork and leisure. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Less personalised ads coming to Meta platforms in the EU

Meta has agreed to introduce a less personalised ads option for Facebook and Instagram users in the EU, as part of efforts to comply with the bloc’s Digital Markets Act and address concerns over data use and user consent.

Under the revised model, users will be able to access Meta’s social media platforms without agreeing to extensive personal data processing for fully personalised ads. Instead, they can opt for an alternative experience based on significantly reduced data inputs, resulting in more limited ad targeting.

The option is set to roll out across the EU from January 2026. It marks the first time Meta has offered users a clear choice between highly personalised advertising and a reduced-data model across its core platforms.

The change follows months of engagement between Meta and Brussels after the European Commission ruled in April that the company had breached the DMA. Regulators stated that Meta’s previous approach had failed to provide users with a genuine and effective choice over how their data was used for advertising.

Once the new option is in place, the Commission says it will gather evidence and feedback from Meta, advertisers, publishers, and other stakeholders. The goal is to assess how widely the option is adopted and whether it significantly reshapes competition and data practices in the EU digital advertising market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!