UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem driven by organised scam networks, involving human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers, along with evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI agents test limits of EU rules

AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.

Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.

Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.

Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ECA Digital law raises pressure on Big Tech in Brazil

Brazil is set to enforce a new law aimed at strengthening protections for children online, marking a significant shift in how digital platforms are regulated in the country. The legislation, known as ECA Digital, introduces stricter rules for technology companies and will test whether stronger oversight can translate into real-world impact.

The law, which takes effect this week, allows authorities to impose warnings and fines of up to $10 million for violations. In severe cases, courts may order the suspension or banning of platforms operating in Brazil. The measure was passed rapidly following public outrage over online content involving the sexualisation of minors.

ECA Digital builds on Brazil’s existing child protection framework and adapts it to the digital environment. It introduces obligations such as age verification, stricter content moderation, and mechanisms to remove harmful material involving minors without requiring a court order.

The law also targets platform design, requiring companies to limit features that may encourage compulsive use among children. This includes restrictions on excessive notifications, profiling for targeted advertising, and design elements that prolong user engagement.

Enforcement of ECA Digital will be led by Brazil’s data protection authority, ANPD, alongside a new screening centre within the Federal Police. However, implementation challenges remain, including limited regulatory capacity and the short timeline between the law’s approval and enforcement.

Experts say the law reflects a broader global trend, with dozens of countries considering similar measures. While technology companies have introduced tools such as age verification and parental controls, critics argue that bigger changes to platform design and content moderation are still needed.

Brazil’s experience may serve as a test case for how governments balance child protection, platform responsibility, and enforcement capacity. The effectiveness of ECA Digital will depend not only on its legal framework but also on how rigorously it is applied in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI faces lawsuit over alleged misuse of AI image generation

Legal action has been filed against xAI in a US federal court, with plaintiffs alleging that its AI system Grok was used to generate harmful and explicitly manipulated images of minors.

The lawsuit claims that xAI failed to implement adequate safeguards to prevent the creation of such content, despite similar protections adopted by other AI developers.

According to the filing, the technology enabled the transformation of real images into explicit material without sufficient restrictions.

Plaintiffs seek to establish a class action, arguing that the company should be held accountable for both direct and third-party uses of its models. Legal arguments focus on whether responsibility extends to external applications built using the same underlying AI systems.

The case also highlights broader regulatory challenges surrounding AI-generated content, particularly the difficulty of preventing misuse when systems can modify real images. Questions around platform liability, safety standards, and enforcement are likely to shape future policy discussions.

Growing scrutiny of AI developers reflects increasing concern over how generative systems are deployed, especially in contexts involving sensitive or harmful content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New licensing rules for crypto platforms in Australia

Australia is advancing plans to regulate digital asset platforms under its financial services framework. A Senate committee has recommended passing the Digital Assets Framework Bill 2025, bringing Australia closer to licensing crypto exchanges and tokenisation platforms.

Industry groups have raised concerns about definitions such as ‘digital token’ and ‘factual control.’ Broad wording could inadvertently cover infrastructure providers, including multi-party wallet systems, potentially classifying them as financial service operators.

Ripple Labs emphasised the need for precise language to avoid unintended regulation.

The committee supported the Treasury’s approach while planning to refine technical details through future regulations. Coinbase welcomed the progress but noted ongoing banking challenges for crypto firms.

The bill now proceeds to the Senate for debate and a final vote, which could reshape digital asset operations in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Britain targets quantum leadership with £1bn investment

UK Secretary of State for Science, Innovation and Technology Liz Kendall has announced a £1bn funding package to boost UK quantum computing and retain domestic talent.

The initiative reflects growing concern over the country’s ability to compete globally, particularly after the US established dominance in AI.

Officials emphasised the need to retain British startups, engineers, and researchers who often relocate abroad in search of better funding and scaling opportunities. The UK produces top talent, but many of its leading firms are now owned by US companies such as Google and OpenAI.

The investment will support the development of large-scale quantum computers for use across science, industry, and the public sector. Another £1bn will fund real-world use in finance, pharmaceuticals, and energy.

The government aims to build a fully operational domestic quantum system by the early 2030s.

Quantum computing uses qubits that can exist in multiple states simultaneously, enabling far greater computational power than classical systems. Fully fault-tolerant machines are still in development, but the technology could drive advances in drug discovery, materials science, and complex modelling.
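The superposition idea described above can be sketched in a few lines of linear algebra. This is a toy illustration only, not part of the UK programme: a single qubit is modelled as a two-component complex vector, and a Hadamard gate puts it into an equal superposition of 0 and 1.

```python
import numpy as np

# A qubit is a unit vector in C^2; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate maps a basis state to an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0            # (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2  # Born rule: probability of measuring 0 or 1

print(probs)  # [0.5 0.5] -- equal chance of each outcome
```

Unlike a classical bit, the qubit here carries both amplitudes at once until measurement, which is the property fault-tolerant quantum machines aim to exploit at scale.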

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool could help detect domestic violence risk years earlier

Researchers in the United States have developed an AI system designed to help doctors identify patients who may be at risk of intimate partner violence. The tool analyses hospital data to detect patterns associated with abuse, potentially enabling healthcare professionals to intervene earlier.

Intimate partner violence refers to abuse from current or former partners and can lead to serious injuries, chronic pain, and long-term mental health problems. According to the European Commission, 18 percent of women who have had a partner reported experiencing physical or sexual violence from a partner in 2021.

The study, published in the journal Nature, examined hospital records from nearly 850 women who had experienced intimate partner violence and more than 5,200 similar patients in a control group. Researchers used the data to train three different machine learning systems to detect patterns associated with abuse.

One model analysed structured hospital data, such as age and medical history. A second model examined written clinical notes, including doctors’ observations and radiology reports. A third system combined both data types and achieved the strongest results, correctly identifying risk in 88 percent of cases.

Researchers found that the system could flag potential abuse more than three years before some patients later entered hospital-based intervention programmes. By analysing large datasets, the tool can detect patterns of physical trauma linked to abuse and alert clinicians so they can approach the issue carefully and offer support.
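The study's best-performing approach, combining structured hospital data with free-text clinical notes, can be sketched in a generic form. The records, features, and model below are entirely invented for illustration; the published system's actual architecture and data are not reproduced here.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic toy records: structured fields plus clinical-note text.
data = pd.DataFrame({
    "age": [34, 52, 29, 61],
    "prior_visits": [5, 1, 7, 2],
    "note": [
        "repeated wrist fracture, patient evasive about cause",
        "routine checkup, no concerns noted",
        "facial bruising, inconsistent explanation of injury",
        "seasonal allergies, otherwise well",
    ],
})
labels = [1, 0, 1, 0]  # 1 = flagged for possible abuse (synthetic)

# Fuse both data types, as the study's strongest model did:
# structured columns pass through, notes become TF-IDF features.
features = ColumnTransformer([
    ("structured", "passthrough", ["age", "prior_visits"]),
    ("text", TfidfVectorizer(), "note"),
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(data, labels)
print(model.predict(data))
```

The design choice mirrors the article: a model seeing only structured data or only notes misses signals the other carries, while the combined feature space lets the classifier link injury patterns in text with visit history.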

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tinder tests AI matchmaking features for modern dating

Popular dating platform Tinder is testing a new AI-powered feature called ‘Chemistry’ designed to improve matchmaking. The tool analyses user profiles to identify more relevant connections while the app’s familiar swipe system remains central to the experience.

Developed by parent company Match Group, the feature uses AI to understand personality traits, interests and preferences through profile data. Future updates may allow users to answer questionnaires or share photo archives to refine recommendations.

Additional modes are also being introduced to further personalise matches. Music preferences and astrology signs can now influence suggested profiles, reflecting evolving trends among younger online daters.

The platform is also testing in-person events and virtual video speed dating to encourage real-world interaction. AI moderation tools are being deployed as well, helping to detect inappropriate messages and verify that profiles belong to real people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Seoul deepens ties with global AI developers

South Korea is pursuing a partnership with AI company Anthropic as part of a national strategy to strengthen technological capabilities. Officials are working toward a memorandum of understanding with the developer of the Claude AI system.

The initiative follows discussions between South Korea’s science minister and Anthropic’s chief executive, Dario Amodei, during an AI summit in New Delhi. Authorities are also preparing for the company’s planned office opening in Seoul in 2026.

Government leaders in South Korea have already expanded cooperation with OpenAI. Policymakers say the strategy aims to build ties with leading global AI developers while supporting domestic innovation.

Officials are also developing a homegrown AI foundation model with local companies. The programme forms part of a national plan to position the country among the world’s leading AI powers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!