Dubai charities open doors to crypto donations

Dubai charities now accept donations in cryptocurrencies and virtual assets through a new service launched by the Islamic Affairs and Charitable Activities Department (IACAD). The move signals a shift towards modernised fundraising channels across the emirate.

The service supports Dubai’s wider digital transformation strategy and aims to improve efficiency within the charitable donation ecosystem. Donors can now use globally recognised payment options, highlighting the rising use of virtual assets as valid financial tools.

Regulation remains central to the initiative, with IACAD introducing clear policies to protect donors, enhance transparency, and ensure compliance with approved standards. Introductory workshops have also been organised to guide charities through operational and procedural requirements.

Officials stressed that charities need preliminary authorisation to ensure donations are processed securely and in accordance with regulations. The initiative further reinforces Dubai’s ambition to lead in innovative and technology-driven humanitarian work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google study shows teens embrace AI

Google’s new study, The Future Report, surveyed over 7,000 teenagers across Europe about their use of digital technologies. Most respondents describe themselves as curious, critical, and optimistic about AI in their daily lives.

Many teens use AI daily or several times a week for learning, creativity, and exploring new topics. They report benefits such as instant feedback and more engaging learning while remaining cautious about over-reliance.

Young people value personalised content recommendations and algorithmic suggestions, but emphasise checking sources and guarding against bias to ensure the trustworthiness of online content.

The report emphasises the importance of digital literacy, safety, balanced technology use, and youth engagement in shaping the digital future. Participants request guidance from educators and transparent AI design to promote the responsible and ethical use of AI.

Amazon considers $10 billion investment in OpenAI

Amazon is reportedly considering a $10 billion investment in OpenAI, highlighting its growing focus on the generative AI market. The investment follows OpenAI’s October restructuring, giving it more flexibility to raise funds and form new tech partnerships.

OpenAI has recently secured major infrastructure agreements, including a $38 billion cloud computing deal with Amazon Web Services (AWS). Deals with Nvidia, AMD, and Broadcom boost OpenAI’s access to computing power for its AI development.

Amazon has invested $8 billion in Anthropic and continues developing AI hardware through AWS’s Inferentia and Trainium chips. A stake in OpenAI would extend Amazon’s strategy of expanding its influence across the AI sector.

Microsoft’s exclusivity arrangement, tied to its prior $13 billion investment in OpenAI, has ended, enabling OpenAI to pursue new partnerships. The combination of fresh funding, cloud capacity, and hardware support positions OpenAI for continued growth in the AI industry.

PwC automates AI governance with Agent Mode

Global professional services network PwC has expanded its Model Edge platform with the launch of Agent Mode, an AI assistant designed to automate governance, compliance and documentation across enterprise AI model lifecycles.

The capability targets the growing administrative burden faced by organisations as AI model portfolios scale and regulatory expectations intensify.

Agent Mode allows users to describe governance tasks in natural language instead of manually navigating workflows. The system executes actions directly within Model Edge, generates leadership-ready documentation, and supports common document and reporting formats, significantly reducing routine compliance effort.

PwC estimates weekly time savings of between 20 and 50 percent for governance and model risk teams.

Behind the interface, a secure orchestration engine interprets user intent, verifies role-based permissions and selects appropriate large language models based on task complexity. The design ensures governance guardrails remain intact while enabling faster and more consistent oversight.
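PwC has not published Agent Mode's internals, but the pattern described above — check a user's role before acting, then route the request to a model sized for the task — can be sketched in a few lines. Everything below (the `ROLE_PERMISSIONS` table, `MODEL_TIERS`, `Task`, `orchestrate`) is a hypothetical illustration, not PwC's actual API:

```python
from dataclasses import dataclass

# Hypothetical role-to-action permission table (not PwC's real policy set)
ROLE_PERMISSIONS = {
    "model_risk_analyst": {"generate_report", "summarise_validation"},
    "governance_lead": {"generate_report", "summarise_validation", "approve_model"},
}

# Hypothetical model tiers keyed by task complexity
MODEL_TIERS = {"low": "small-llm", "medium": "mid-llm", "high": "large-llm"}

@dataclass
class Task:
    action: str       # e.g. "generate_report"
    complexity: str   # "low" | "medium" | "high"

def orchestrate(role: str, task: Task) -> str:
    """Verify role-based permissions, then select a model for the task.

    Mirrors the two guardrails described in the article: the permission
    check runs before any action, and model choice depends on complexity.
    """
    allowed = ROLE_PERMISSIONS.get(role, set())
    if task.action not in allowed:
        raise PermissionError(f"role {role!r} may not perform {task.action!r}")
    return MODEL_TIERS[task.complexity]
```

Calling `orchestrate("governance_lead", Task("generate_report", "high"))` would route to the large-model tier, while an analyst requesting `"approve_model"` would be rejected before any model is invoked — keeping the guardrail ahead of the generation step.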

PwC positions Agent Mode as a step towards fully automated, agent-driven AI governance, enabling organisations to focus expert attention on risk assessment and regulatory judgement instead of process management as enterprise AI adoption accelerates.

AI reshapes media in North Macedonia with new regulatory guidance

A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.

Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.

The research, supported by the EU and Council of Europe’s PRO-FREX initiative and in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.

It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on AI and Human Rights, and guidance on responsible AI in journalism.

AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact, protecting media diversity, journalistic standards, and public trust online.

The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.

AI and security trends shape the internet in 2025

Cloudflare released its sixth annual Year in Review, providing a comprehensive snapshot of global Internet trends in 2025. The report highlights rising digital reliance, AI progress, and evolving security threats across Cloudflare’s network and Radar data.

Global Internet traffic rose 19 percent year-on-year, reflecting increased use for personal and professional activities. A key trend was the move from large-scale AI training to continuous AI inference, alongside rapid growth in generative AI platforms.

Google and Meta remained the most popular services, while ChatGPT led in generative AI usage.

Cybersecurity remained a critical concern. Post-quantum encryption now protects 52 percent of Internet traffic, yet record-breaking DDoS attacks underscored rising cyber risks.

Civil society and non-profit organisations were the most targeted sectors for the first time, while government actions caused nearly half of the major Internet outages.

Connectivity varied by region, with Europe leading in speed and quality and Spain ranking highest globally. The report outlines 2025’s Internet challenges and progress, providing insights for governments, businesses, and users aiming for greater resilience and security.

Crypto theft soars in 2025 with fewer but bigger attacks

Cryptocurrency theft intensified in 2025, with total stolen funds exceeding $3.4 billion despite fewer large-scale incidents. Losses became increasingly concentrated, with a few major breaches driving most of the annual damage and widening the gap between typical hacks and extreme outliers.

North Korea remained the dominant threat actor, stealing at least $2.02 billion in digital assets during the year, a 51% increase compared with 2024.

Larger thefts were achieved through fewer operations, often relying on insider access, executive impersonation, and long-term infiltration of crypto firms rather than frequent attacks.

Laundering activity linked to North Korean actors followed a distinctive and disciplined pattern. Stolen funds moved in smaller tranches through Chinese-language laundering networks, bridges, and mixing services, usually following a structured 45-day cycle.

Individual wallet attacks surged, impacting tens of thousands of victims, while the total value stolen from personal wallets fell. Decentralised finance remained resilient, with hack losses low despite rising locked capital, indicating stronger security practices.

Healthcare faces growing compliance pressure from AI adoption

AI is becoming a practical tool across healthcare as providers face rising patient demand, chronic disease and limited resources.

AI systems increasingly support tasks such as clinical documentation, billing, diagnostics and personalised treatment, reducing reliance on manual processes and allowing clinicians to focus more directly on patient care.

At the same time, AI introduces significant compliance and safety risks. Algorithmic bias, opaque decision-making, and outdated training data can affect clinical outcomes, raising questions about accountability when errors occur.

Regulators are signalling that healthcare organisations cannot delegate responsibility to automated systems and must retain meaningful human oversight over AI-assisted decisions.

Regulatory exposure spans federal and state frameworks, including HIPAA privacy rules, FDA oversight of AI-enabled medical devices and enforcement under the False Claims Act.

Healthcare providers are expected to implement robust procurement checks, continuous monitoring, governance structures and patient consent practices as AI regulation evolves towards a more coordinated national approach.

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Nigeria reaches AI training milestone under Microsoft skills initiative

Microsoft, in partnership with the Federal Government of Nigeria, Data Science Nigeria and Lagos Business School, has announced that its AI National Skills Initiative (AINSI) has reached more than 350,000 Nigerians with AI training, building on a wider effort that has delivered digital education to over four million people since 2021.

The programme aims to equip individuals, including everyday tech users, business leaders and public sector officials, with AI competencies to strengthen Nigeria’s position in the digital economy.

Key components include digital literacy workshops, business leadership sessions, an AI hackathon, and targeted developer courses covering analytics, DevOps, machine learning and data science.

Microsoft and its partners are also working with government-driven initiatives such as the Developers in Government and Three Million Technical Talent programmes to build a robust pipeline of technical talent.

Leadership training for public sector executives seeks to foster evidence-driven policymaking and responsible AI adoption.

Looking ahead, the Nigeria initiative aims to train up to one million citizens over three years, helping build a future-ready workforce capable of driving innovation, economic growth and national competitiveness in the AI era.