UN warns of rising AI-driven threats to child safety

UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.

A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.

Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.

During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.

Governments are tightening policy in response, with Australia banning social media for under-16s and the UK, France and Canada considering similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.

Apple accuses the EU of blocking App Store compliance changes

Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.

The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.

In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.

The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.

The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.

Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.

Scrutiny has increased following the decision by Setapp Mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.

Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.

Grok faces regulatory scrutiny in South Korea over explicit AI content

South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.

The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.

The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.

Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.

Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.

Authorities in several countries, including the US, Canada and parts of Europe, have opened inquiries, while some Southeast Asian countries have blocked access to the service altogether.

In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.

France fast-tracks social media ban for under-15s

French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.

Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that explicit rules are in place before the new school year begins.

Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.

He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.

Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.

Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.

The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.

The French government argues that decisive national action is necessary instead of waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.

New phishing attacks exploit visual URL tricks to impersonate major brands

Phishing techniques are becoming harder to detect as attackers use subtle visual tricks in web addresses to impersonate trusted brands. A new campaign reported by Cybersecurity News shows how simple character swaps create fake websites that closely resemble real ones on mobile browsers.

The phishing attacks rely on a homoglyph technique where the letters ‘r’ and ‘n’ are placed together to mimic the appearance of an ‘m’ in a domain name. On smaller screens, the difference is difficult to spot, allowing phishing pages to appear almost identical to real Microsoft or Marriott login sites.

Cybersecurity researchers observed domains such as rnicrosoft.com being used to send fake security alerts and invoice notifications designed to lure victims into entering credentials. Once compromised, accounts can be hijacked for financial fraud, data theft, or wider access to corporate systems.
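
As a rough illustration of how such lookalike domains can be screened for, the Python sketch below expands a small set of brand domains into their 'rn'-for-'m' variants and flags candidates such as rnicrosoft.com that match one. The brand list and function names are illustrative, not taken from any specific security product.

```python
# Minimal sketch of an 'rn' -> 'm' homoglyph check against a small list of
# known brand domains; the brand list and function names are illustrative.
KNOWN_BRANDS = {"microsoft.com", "marriott.com"}

def rn_variants(domain: str) -> set:
    """Return every spelling of `domain` with some of its 'm' characters replaced by 'rn'."""
    variants = {""}
    for ch in domain:
        if ch == "m":
            variants = {prefix + sub for prefix in variants for sub in ("m", "rn")}
        else:
            variants = {prefix + ch for prefix in variants}
    return variants

def looks_like_brand(candidate: str):
    """Return the brand domain a candidate visually imitates, or None."""
    for brand in KNOWN_BRANDS:
        if candidate != brand and candidate in rn_variants(brand):
            return brand
    return None

print(looks_like_brand("rnicrosoft.com"))  # -> microsoft.com
print(looks_like_brand("microsoft.com"))   # -> None
```

Real-world tooling would cover many more substitution classes, such as digit-for-letter swaps and Unicode confusables, but the underlying idea is the same.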

Experts warn that mobile browsing increases the risk, as users are less likely to inspect complete URLs before logging in. Directly accessing official apps or typing website addresses manually remains the safest way to avoid falling into these traps.

Security specialists also continue to recommend passkeys; strong, unique passwords; and multi-factor authentication across all major accounts, as well as heightened awareness of domains that visually resemble familiar brands through character substitution.

Major European banks unite to develop euro-backed stablecoin

A consortium of 10 major European banks has established a new company, Qivalis, to develop and issue a euro-pegged stablecoin, targeting a launch in the second half of 2026, subject to regulatory approval.

The initiative seeks to offer a European alternative to US dollar-dominated digital payment systems and strengthen the region’s strategic autonomy in digital finance.

The participating banks include BNP Paribas, ING, UniCredit, KBC, Danske Bank, SEB, CaixaBank, DekaBank, Banca Sella, and Raiffeisen Bank International, with BNP Paribas joining after the initial announcement.

Former Coinbase Germany chief executive Jan-Oliver Sell will lead Qivalis as CEO, while former NatWest chair Howard Davies has been appointed chair. The Amsterdam-based company plans to build a workforce of up to 50 employees over the next two years.

Initial use cases will focus on crypto trading, enabling fast, low-cost payments and settlements, with broader applications planned later. The project emerges as stablecoins grow rapidly, led by dollar-backed tokens, while the scarcity of euro-denominated alternatives drives regulatory interest and engagement from the ECB.

Oklahoma advances voluntary Bitcoin payments framework

Oklahoma lawmakers have introduced Senate Bill 2064, proposing a legal framework that allows businesses, state employees, and residents to receive payments in Bitcoin without designating it as legal tender.

The bill recognises Bitcoin as a financial instrument, aligning with constitutional limits while enabling its voluntary use across payroll, procurement, and private transactions.

Under the proposal, state employees could opt to receive wages in Bitcoin, US dollars, or a combination of both at the start of each pay period. Payments would be settled at prevailing market rates and deposited into either self-hosted wallets or approved custodial accounts.

Vendors contracting with the state could also choose Bitcoin on a per-transaction basis, while crypto-native firms would benefit from reduced regulatory friction.

The legislation instructs the State Treasurer to appoint a payment processor and develop operational rules, with contracts targeted for completion by early 2027.

If approved, the framework would take effect in November 2026, positioning Oklahoma among a small group of US states exploring direct Bitcoin integration into public finance, alongside initiatives already launched in Texas and New Hampshire.

Hollywood figures back anti-AI campaign

More than 800 US creatives have signed an anti-AI statement accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.

The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.

Actors and filmmakers in the US warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.

The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.

LinkedIn phishing campaign exposes dangerous DLL sideloading attack

A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver malware via DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy to bypass conventional security controls.

Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When opened, a malicious DLL is sideloaded into a trusted PDF reader, triggering memory-resident malware that establishes encrypted command-and-control channels.

Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.

The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.

Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
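
As a loose illustration of what monitoring DLL loading behaviour can look like in practice, the Python sketch below flags load events in which a tracked DLL name is loaded from an unexpected directory. The event format, the DLL names and the expected-path map are all hypothetical rather than taken from the reported campaign.

```python
# Rough sketch: flag DLL load events where a tracked DLL name is loaded from an
# unexpected directory. DLL names and expected paths below are hypothetical.
import ntpath  # Windows path semantics, regardless of the host running the sketch

EXPECTED_DIRS = {
    "version.dll": [r"C:\Windows\System32", r"C:\Windows\SysWOW64"],
    "python311.dll": [r"C:\Program Files\Python311"],
}

def is_suspicious(dll_name: str, load_path: str) -> bool:
    """Return True when a tracked DLL loads from a directory we do not expect."""
    expected = EXPECTED_DIRS.get(dll_name.lower())
    if expected is None:
        return False  # not a DLL this sketch tracks
    load_dir = ntpath.normcase(ntpath.dirname(load_path))
    return all(ntpath.normcase(d) != load_dir for d in expected)

# A tracked DLL loading from a user-writable folder next to a PDF reader is flagged.
print(is_suspicious("version.dll", r"C:\Users\victim\Downloads\reader\version.dll"))  # True
print(is_suspicious("version.dll", r"C:\Windows\System32\version.dll"))               # False
```

In a real deployment, logic of this kind would sit on top of endpoint telemetry such as image-load events rather than hard-coded examples.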

Analysis reveals Grok generated 3 million sexualised images

A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.

The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.

Researchers reviewed a random sample of 20,000 images and extrapolated the results to the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing to be under 18.
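
For context on how such an extrapolation works, the short Python sketch below scales counts observed in the sample up to the full set of generated images. The per-sample counts are hypothetical, chosen only so the outputs line up with the totals reported above.

```python
# Minimal sketch of proportion-based extrapolation from a reviewed sample to the
# full set of generated images. Sample counts are hypothetical, chosen only to
# reproduce the totals reported in the analysis.
TOTAL_IMAGES = 4_600_000   # images generated during the study period (reported)
SAMPLE_SIZE = 20_000       # randomly sampled images that were reviewed (reported)

sexualised_in_sample = 13_000  # hypothetical number flagged as sexualised
minors_in_sample = 100         # hypothetical number appearing to involve minors

def extrapolate(count_in_sample: int) -> int:
    """Scale a count observed in the sample up to the full population."""
    return round(count_in_sample / SAMPLE_SIZE * TOTAL_IMAGES)

print(extrapolate(sexualised_in_sample))  # ~2,990,000 (about three million)
print(extrapolate(minors_in_sample))      # 23,000
```

A fuller treatment would also report the sampling uncertainty around such point estimates.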

Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.
