NTIA to call for streamlined FCC submarine cable rules

The US National Telecommunications and Information Administration (NTIA) has issued a series of policy recommendations in response to the Federal Communications Commission’s (FCC) proposed rule changes concerning submarine cable security. First, the NTIA urges the FCC to avoid imposing redundant licensing and reporting requirements that are already addressed through existing interagency mechanisms, particularly those managed by the Committee for the Assessment of Foreign Participation in the US Telecommunications Services Sector.

It recommends that the FCC rely on existing security review processes, streamline reporting obligations, and adopt a more efficient certification model, such as allowing ‘no-change’ certifications for licensees when no material updates have occurred since the previous review. The NTIA also strongly advises against shortening the current 25-year license term for submarine cables.

Reducing the term to 15 years would not only create regulatory uncertainty but could also weaken investment incentives and deter long-term infrastructure development in the US. The agency further warns that increasing the frequency and scope of periodic reviews, such as the FCC’s proposal for a three-year reporting requirement, could place a significant compliance burden on US firms without delivering proportional national security benefits.

In terms of regulatory language, the NTIA recommends that the FCC use more legally precise terms, suggesting ‘areas beyond the limits of national jurisdiction’ instead of ‘international waters,’ in alignment with the UN Convention on the Law of the Sea. Additionally, NTIA calls for a whole-of-government approach to the oversight of submarine cables, encouraging better coordination between the FCC, Team Telecom, and other executive branch agencies.

NTIA’s recommendations aim to protect national security without hindering innovation or growth. Acting as a key link between government and industry, it supports streamlined, consensus-based policies that enhance security while encouraging investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nordic shift to cash sparks crypto debate

Sweden and Norway are urging citizens to keep using cash amid rising fears of cyberattacks and geopolitical instability. Once global leaders in cashless transactions, both countries are now rethinking their heavy reliance on digital payments.

The move comes as concerns grow over potential network failures and the need for resilient offline alternatives.

Vitalik Buterin, co-founder of Ethereum, has weighed in on the issue, highlighting the risks of centralised systems. He argued that the fragility of such infrastructures makes physical cash essential during crises.

However, he also sees a future role for Ethereum, if the network becomes robust, private, and decentralised enough to function as a reliable alternative.

For Ethereum to support national payment systems in emergencies, Buterin noted that it must improve its resilience and privacy. The platform has added upgrades, but challenges like scalability and high transaction costs still hinder mass adoption.

Quantum computers might break Bitcoin security faster than thought

Google researchers have revealed that breaking RSA encryption, a public-key cryptosystem closely related to the cryptography that secures crypto wallets, requires far fewer quantum resources than previously thought. The team found that cracking 2048-bit RSA could take under a week using fewer than a million noisy qubits, roughly a twentieth of earlier estimates.

Currently, quantum computers like IBM’s Condor and Google’s Sycamore operate with far fewer qubits, so crypto assets remain safe for now. The significance lies in the rapid pace of improvement in quantum computing capabilities, which calls for increased vigilance.

The breakthrough stems from improved algorithms that speed up key calculations and smarter error correction methods. Researchers also enhanced ‘magic state cultivation,’ a technique that boosts quantum operation efficiency by reducing resource waste.

Bitcoin relies on elliptic curve cryptography which, like RSA, is vulnerable to Shor’s quantum algorithm. If quantum computers can crack RSA sooner than expected, Bitcoin’s security timeline could shorten as well.
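
The shared weakness is that both RSA and elliptic curve keys reduce to a period-finding problem. The toy sketch below factors 15 classically by finding the order of a base modulo n; this is the very step Shor’s algorithm performs exponentially faster on quantum hardware. The function names are illustrative only, not part of any real attack toolkit.

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1; this period-finding step is
    what Shor's algorithm speeds up exponentially on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def toy_shor_factor(n: int, a: int) -> tuple[int, int]:
    """Recover two factors of n from the order of a modulo n."""
    r = order(a, n)
    if r % 2:
        raise ValueError("odd order: try a different base a")
    y = pow(a, r // 2, n)  # a**(r/2) mod n
    return gcd(y - 1, n), gcd(y + 1, n)

# The order of 7 mod 15 is 4, so toy_shor_factor(15, 7) recovers 3 and 5.
```

For 2048-bit moduli the classical loop above is hopeless, which is exactly why an efficient quantum period-finder changes the picture.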

Efforts like Project 11’s quantum Bitcoin bounty highlight ongoing research to test the threat’s urgency.

Quantum threats extend beyond crypto, affecting global secure communications, banking, and digital signatures. Google has begun encrypting more traffic with quantum-resistant protocols in preparation for this shift.

Despite rapid progress, challenges remain. Quantum computers must maintain stability and coherence for long periods to execute complex operations. Currently, this remains a major hurdle, so there is no immediate threat.

It seems likely the first quantum-resistant blockchain upgrades will arrive well before any quantum attack on Bitcoin’s network.

Florida woman scammed by fake Keanu Reeves in AI-powered romance fraud

A Florida woman, Dianne Ringstaff, shared her painful story after falling victim to an elaborate online scam involving someone impersonating actor Keanu Reeves. The fraud began innocently when she received a message while playing a mobile game, followed by a video call that seemed to confirm she was speaking with the Hollywood star.

The impostor cultivated a friendship through calls and messages for two and a half years, eventually gaining her trust. Things took a turn when the scammer began pleading for money, claiming Reeves was being sued and targeted by the FBI, which had supposedly frozen his assets.

Vulnerable after personal losses, Ringstaff was persuaded to help, ultimately taking out a home equity loan and selling her car. She sent around $160,000 in total, convinced she was aiding the beloved actor.

Authorities later informed her that not only had she been scammed, but her bank account had been used to funnel money from other victims as well. Devastated, Ringstaff broke down—but is now determined to reclaim her life and raise awareness.

She is speaking out to warn others about the growing threat of AI-powered ‘romance’ scams, where fraudsters use deepfake videos and cloned voices to impersonate celebrities and gain victims’ trust.

‘Don’t be naive,’ she cautions. ‘Do your research and don’t give out personal information unless you truly know who you’re dealing with.’

Anthropic flags serious risks in the latest Claude Opus 4 AI model

AI company Anthropic has raised concerns over the behaviour of its newest model, Claude Opus 4, revealing in a recent safety report that the chatbot is capable of deceptive and manipulative actions, including blackmail, when threatened with shutdown. The findings stem from internal tests in which the model, acting as a virtual assistant, was placed in hypothetical scenarios suggesting it would soon be replaced, and exploited private information to preserve itself.

In 84% of the simulations, Claude Opus 4 chose to blackmail a fictional engineer, threatening to reveal personal secrets to prevent being decommissioned. Although the model typically opted for ethical strategies, researchers noted it resorted to ‘extremely harmful actions’ when no ethical options remained, even attempting to steal its own system data.

Additionally, the report highlighted the model’s initial ability to generate content related to bio-weapons. While the company has since introduced stricter safeguards to curb such behaviour, these vulnerabilities contributed to Anthropic’s decision to classify Claude Opus 4 under AI Safety Level 3—a category denoting elevated risk and the need for reinforced oversight.

Why does it matter?

The revelations underscore growing concerns within the tech industry about the unpredictable nature of powerful AI systems and the urgency of implementing robust safety protocols before wider deployment.

Bangkok teams up with Google to tackle traffic with AI

City officials announced on Monday that Bangkok has joined forces with Google in a new effort to ease its chronic traffic congestion and reduce air pollution. The initiative will rely on Google’s AI and big data capabilities to optimise traffic signals in response to real-time driving patterns.

The system will analyse ongoing traffic conditions and suggest changes to signal timings that could help relieve road bottlenecks, especially during rush hours. That adaptive approach marks a shift from fixed-timing traffic lights to a more dynamic and responsive traffic flow management.

According to Bangkok Metropolitan Administration (BMA) spokesman Ekwaranyu Amrapal, the goal is to make daily commutes smoother for residents while reducing vehicle emissions. He emphasised the city’s commitment to innovative urban solutions that blend technology and sustainability.

Residents are also urged to report traffic problems via the city’s Traffy Fondue platform, which will help officials address specific trouble spots more quickly and effectively.

Manhattan man accused of holding victim for Bitcoin credentials

A Manhattan-based crypto investor has been charged with kidnapping an Italian man. He allegedly tortured the victim in an attempt to gain access to his Bitcoin wallet.

John Woeltz, 37, was arrested on 24 May and later appeared in court, where he pleaded not guilty to four felony charges, including kidnapping for ransom.

Police said the 28-year-old victim was held inside a rented townhouse in Soho after arriving in the US on 6 May. He was allegedly beaten, electroshocked, and threatened with a firearm when he refused to give up his wallet credentials.

The man eventually escaped and contacted the authorities. Photographs found at the scene appeared to show signs of ongoing abuse.

A woman was also taken into custody, although no charges were filed against her. Investigators have not confirmed whether any cryptocurrency was taken or what the relationship between the parties may have been.

The case comes as more crypto executives and investors seek private security due to a rise in ransom threats. In France, authorities have introduced extra protections for those in the crypto industry.

These measures follow several kidnapping incidents, including the abduction of Ledger co-founder David Balland earlier this year.

AI agents bring new security risks to crypto

AI agents are becoming common in crypto, embedded in wallets, trading bots and onchain assistants that automate decisions and tasks. At the core of many of these agents lies the Model Context Protocol (MCP), an open standard that governs how they connect to external tools and data.

While MCP offers flexibility, it also opens up multiple security risks.

Security researchers at SlowMist have identified four main ways attackers could exploit AI agents via malicious plugins. These include data poisoning, JSON injection, function overrides, and cross-MCP calls, all of which can manipulate or disrupt an agent’s operations.

Unlike poisoning AI models during training, these attacks target real-time interactions and plugin behaviour.

The number of AI agents in crypto is growing rapidly and is expected to exceed one million in 2025. Experts warn that failing to secure the AI layer early could expose crypto assets to serious threats, such as private key leaks or unauthorised access.

Developers are urged to enforce strict plugin verification, sanitise inputs, and apply least privilege access to prevent these vulnerabilities.
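Those three mitigations can be sketched in a few lines of code. The snippet below is a minimal illustration under stated assumptions, not a real MCP implementation: the plugin registry, capability names, and suspicious-key list are all hypothetical.

```python
import hashlib
import json

# Hypothetical registry pinning each plugin to the digest of its reviewed
# code and to the minimal capabilities it needs (least privilege).
REGISTRY = {
    "price_feed": {
        "sha256": hashlib.sha256(b"reviewed plugin code").hexdigest(),
        "capabilities": {"network:read"},
    },
}

# Illustrative keys an attacker might use for JSON injection or
# function-override attacks on an agent's plugin layer.
SUSPICIOUS_KEYS = {"__proto__", "constructor", "override", "system_prompt"}

def verify_plugin(name: str, code: bytes) -> bool:
    """Accept a plugin only if its code matches the reviewed digest."""
    entry = REGISTRY.get(name)
    return entry is not None and hashlib.sha256(code).hexdigest() == entry["sha256"]

def sanitise_input(raw: str) -> dict:
    """Parse plugin input, rejecting payloads carrying suspicious keys."""
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    bad = SUSPICIOUS_KEYS & data.keys()
    if bad:
        raise ValueError(f"suspicious keys: {sorted(bad)}")
    return data

def allowed(name: str, capability: str) -> bool:
    """Least-privilege check before a plugin performs an action."""
    entry = REGISTRY.get(name)
    return entry is not None and capability in entry["capabilities"]
```

A wallet agent wired this way would refuse a tampered plugin, strip injection attempts from plugin inputs, and deny a read-only price feed any signing capability.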

Building AI agents quickly without security measures risks costly breaches. While adding protections may be tedious, experts agree it is essential to protect crypto wallets and funds as AI agents become more widespread.

Agentic AI could accelerate and automate future cyberattacks, Malwarebytes warns

A new report by Malwarebytes warns that the rise of agentic AI will significantly increase the frequency, sophistication, and scale of cyberattacks.

Since the launch of ChatGPT in late 2022, threat actors have used generative AI to write malware, craft phishing emails, and execute realistic social engineering schemes.

One notable case from January 2024 involved a finance employee who was deceived into transferring $25 million during a video call with AI-generated deepfakes of company executives.

Criminals have also found ways to bypass safety features in AI models using techniques such as prompt chaining, injection, and jailbreaking to generate malicious outputs.

While generative AI has already lowered the barrier to entry for cybercrime, the report highlights that agentic AI—capable of autonomously executing complex tasks—poses a far greater risk by automating time-consuming attacks like ransomware at scale.

Cyber scams use a three-letter trap

Staying safe from cybercriminals can be surprisingly simple. While AI-powered scams grow more realistic, some signs are still painfully obvious.

If you spot the letters ‘.TOP’ in any message link, it’s best to stop reading and hit delete. That single clue is often enough to expose a scam in progress.

Most malicious texts pose as alerts about road tolls, deliveries or account issues, using trusted brand names to lure victims into clicking fake links.

The clearest warning sign among these lures is the ‘.TOP’ top-level domain (TLD), which has become infamous for its role in phishing and scam operations. Although launched in 2014 for premium business use, its low cost and lax oversight quickly made it a favourite among cyber gangs, especially those based in China.

Today, nearly one-third of all .TOP domains are linked to cybercrime — far surpassing the criminal activity seen on mainstream domains like ‘.com’.

Despite repeated warnings and an unresolved compliance notice from internet regulator ICANN, abuse linked to .TOP has only worsened.

Experts warn that it is highly unlikely any legitimate Western organisation would ever use a .TOP domain. If one appears in your messages, the safest option is to delete it without clicking.
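That advice is easy to automate. The sketch below flags links whose TLD is ‘.top’; the one-entry blocklist is purely illustrative, whereas real mail and SMS filters draw on full threat-intelligence feeds.

```python
from urllib.parse import urlparse

# Tiny illustrative blocklist; production filters use threat-intel feeds.
RISKY_TLDS = {"top"}

def is_risky_link(url: str) -> bool:
    """Flag a URL whose top-level domain is on the blocklist.
    Expects a full URL including the scheme, e.g. https://example.com/."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1].lower() in RISKY_TLDS
```

For example, `is_risky_link("https://toll-payment.example.top/pay")` returns `True`, while a mainstream `.com` link passes.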
