New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI “nudify” website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.

For more information on these topics, visit diplomacy.edu.

Metro Bank teams up with Ask Silver to fight fraud

Metro Bank has introduced an AI-powered scam detection tool, becoming the first UK bank to offer customers instant scam checks through a simple WhatsApp service.

Developed in partnership with Ask Silver, the Scam Checker allows users to upload images or screenshots of suspicious emails, websites, or documents for rapid analysis and safety advice.

The tool is free for personal and business customers, who receive alerts if the communication is flagged as fraudulent. Ask Silver’s technology not only identifies potential scams but also automatically reports them to relevant authorities.

The company was founded after one of the co-founders’ family members lost £150,000 to a scam, fuelling its mission to prevent similar crimes.

The launch comes amid a surge in impersonation scams across the United Kingdom, with over £1 billion lost to fraud in 2023. Metro Bank’s head of fraud, Baz Thompson, said the tool helps counter tactics that rely on urgency and pressure.

Customers are also reminded that the bank will never request sensitive information or press them to act quickly via emails or texts.


Thailand strengthens cybersecurity with Google Cloud

Thailand’s National Cyber Security Agency (NCSA) has joined forces with Google Cloud to strengthen the country’s cyber resilience, using AI-based tools and shared threat intelligence instead of relying solely on traditional defences.

The collaboration aims to better protect public agencies and citizens against increasingly sophisticated cyber threats.

A key part of the initiative involves deploying Google Cloud Cybershield for centralised monitoring of security events across government bodies. By replacing fragmented monitoring systems, this unified approach will help streamline incident detection and response.

The partnership also brings advanced training for cybersecurity personnel in the public sector, alongside regular threat intelligence sharing.

Google Cloud Web Risk will be integrated into government operations to automatically block websites hosting malware and phishing content, instead of relying on manual checks.

Google further noted the impact of its anti-scam technology in Google Play Protect, which has prevented over 6.6 million high-risk app installation attempts in Thailand since its 2024 launch, enhancing mobile safety for millions of users.


OpenAI backs Adaptive Security in the battle against AI threats

AI-driven cyber threats are on the rise, making it easier than ever for hackers to deceive employees through deepfake scams and phishing attacks.

OpenAI, a leader in generative AI, has recognised the growing risk and made its first cybersecurity investment in New York-based startup Adaptive Security. The company has secured $43 million in Series A funding, co-led by OpenAI’s startup fund and Andreessen Horowitz.

Adaptive Security helps companies prepare for AI-driven cyberattacks by simulating deepfake calls, texts, and emails. Employees may receive a phone call that sounds like their CTO, asking for sensitive information, but in reality, it is an AI-generated test.

The platform identifies weak points in a company’s security and trains staff to recognise potential threats. Social engineering scams, which trick employees into revealing sensitive data, have already led to massive financial losses, such as the $600 million Axie Infinity hack in 2022.

CEO Brian Long, a seasoned entrepreneur, says the funding will go towards hiring engineers and improving the platform to keep pace with evolving AI threats.

The investment comes amid a surge in cybersecurity funding, with companies like Cyberhaven, Snyk, and GetReal also securing major investments.

As cyber risks become more advanced, Long advises employees to take simple precautions, such as deleting their voicemail greetings to prevent hackers from cloning their voices.


National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force to address AI crime within the next five years.

Despite AI-enabled crime being in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such a task force would use AI tools to counter AI-enabled crime directly, as fraudsters and other criminals increasingly exploit AI's capacity to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud in Hong Kong show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.


The US House Committee passes a bill to strengthen stablecoin oversight

The US House Financial Services Committee has passed a bill aimed at regulating stablecoins, moving it to a full House vote. On 2 April, the Committee approved the Stablecoin Transparency and Accountability for a Better Ledger Economy (STABLE) Act in a 32-17 vote.

The bill outlines a regulatory framework for payment stablecoins such as USDT and USDC. It mandates transparency in token reserves, ensuring issuers hold sufficient dollar-equivalent assets to back their circulating supply.

Key provisions focus on consumer protection and reducing risk for stablecoin users. The bill also aims to strengthen the role of the dollar in digital finance.

Supporters argue the bill will modernise the US payment infrastructure, making transactions faster and more cost-effective. They also emphasise the importance of maintaining space for innovation.

Congressman Dan Meuser highlighted that the legislation reinforces the dollar’s position as the world’s reserve currency. Meanwhile, Congressman Troy Downing emphasised his role in balancing innovation with strong consumer protections.


North Korean hacker group cashes in on crypto trade

A wallet linked to North Korea’s notorious Lazarus Group has reportedly sold 40.78 Wrapped Bitcoin (WBTC) for $3.51 million, exchanging it for 1,847 Ethereum (ETH), according to data from SpotOnChain.

Instead of holding onto the ETH, the wallet redistributed 2,507 ETH across three separate addresses, with the largest portion of 1,865 ETH sent to another wallet allegedly tied to the hacker group.

The wallet originally purchased the 40.78 WBTC in February 2023 for around $999,900, when the price of WBTC averaged $24,521. Instead of selling earlier, the group waited until WBTC surged to $83,459, securing a realised profit of $2.51 million, representing a 251% gain over two years.
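As a quick sanity check, the reported figures are internally consistent; a back-of-the-envelope calculation using only the numbers cited above:

```python
# Figures as reported by SpotOnChain (cited above)
wbtc_amount = 40.78          # WBTC bought in February 2023
avg_buy_price = 24_521       # average WBTC price at purchase, in USD
sale_proceeds = 3_510_000    # reported total from the sale, in USD

cost = wbtc_amount * avg_buy_price   # ~ $999,966, matching the ~$999,900 reported
profit = sale_proceeds - cost        # ~ $2.51 million realised profit
gain_pct = profit / cost * 100       # ~ 251% gain over two years

print(f"cost ~ ${cost:,.0f}, profit ~ ${profit:,.0f}, gain ~ {gain_pct:.0f}%")
```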

Lazarus Group has been using complex laundering techniques to move stolen funds, particularly since its attack on crypto exchange Bybit.

In March, the group allegedly laundered nearly 500,000 ETH, worth $1.39 billion, by dispersing it across numerous transactions in just ten days. At least $605 million was processed via the THORChain platform in a single day.

According to Arkham Intelligence, a wallet linked to the group still holds approximately $1.1 billion in crypto, with substantial reserves in Bitcoin, Ethereum, and Tether.

Meanwhile, Google's Threat Intelligence Group has reported increased efforts by North Korean IT workers to infiltrate European tech and crypto firms, posing as legitimate employees while acting as insider operatives for state-sponsored cybercrime networks like Lazarus Group.


AI transforms autism therapy in China

In Shenzhen, a quiet breakthrough is unfolding in autism rehabilitation as AI-powered tools begin to transform how young children receive therapy.

At a local centre, a therapist guides a three-year-old boy through speech exercises, while an AI system documents progress and instantly generates a tailored home-training plan, offering much-needed support to both therapists and families.

China faces a severe shortage of autism therapists, with only around 100,000 professionals serving a community of over 10 million individuals, including 3 million children.

Traditional diagnosis and treatment rely on time-consuming behavioural assessments. Now, AI is streamlining this process.

Centres like Dami & Xiaomi, in partnership with Amazon Web Services, have developed RICE AI, a system trained on over 80 million behavioural data points to generate faster, personalised interventions and even custom visual materials for home learning.

By dramatically reducing workloads and enhancing precision, AI is helping to close the gap in early intervention and support.

More facilities are following suit, with efforts underway to unify and open-source these tools across the country. As one mother tearfully recalled her autistic son's first spoken word, the emotional impact of this technological shift was clear: AI is not replacing care but deepening it.


Google report exposes North Korea’s growing cyber presence in blockchain industry

North Korean cyber operatives have expanded their activities by targeting blockchain startups in the United Kingdom and European Union.

A report from Google’s Threat Intelligence Group (GTIG) revealed that IT workers linked to the Democratic People’s Republic of Korea (DPRK) have embedded themselves in crypto projects beyond the United States, across the UK, Germany, Portugal, and Serbia.

These operatives, posing as remote developers, have left compromised data and extortion attempts in their wake.

Affected projects include blockchain marketplaces, AI web applications, and Solana-based smart contracts. Some developers worked under multiple fake identities, using falsified university degrees and residency documents to gain employment.

Payments were routed through services like TransferWise and Payoneer, obscuring funds flowing back to the North Korean regime. Cybersecurity experts warn that companies hiring these workers risk espionage, data theft, and security breaches.

GTIG reports that these cyber operations are generating revenue for North Korea, which has been accused of using overseas IT specialists to finance its sanctioned weapons programmes.

Financial service providers, including Wise, have stated that they monitor transactions closely and report any suspicious activity. With increasing global scrutiny, experts caution businesses to remain vigilant against fraudulent hires in the blockchain sector.


Japan targets Apple and Google with new law

The Japan Fair Trade Commission (JFTC) announced on Monday that it has designated Apple Inc., its Japanese subsidiary iTunes K.K., and Google LLC under the new smartphone software competition promotion law.

The law targets dominant IT companies in the smartphone app market, regulating areas like smartphone operating systems, app stores, web browsing software, and search engines.

The primary aim of the law is to prevent these giants from blocking market entry for other companies or giving preferential treatment to their own services. The law will take full effect in December, with the designated companies required to correct any problematic practices.

Apple will be required to open its app store business to other companies rather than monopolising it, fostering price competition, while Google will be prohibited from giving preferential placement to its own services in search results.

In response, both companies expressed concerns, with Apple questioning the impact on user experience and Google vowing to engage in discussions to ensure fairness.
