Small businesses battle rising cyber attacks in the US

Many small businesses in the US are facing a sharp rise in cyber attacks, yet large numbers still try to manage the risk on their own.

A recent survey by Guardz found that more than four in ten SMBs have already experienced a cyber incident, while most owners believe the overall threat level is continuing to increase.

Rather than relying on specialist teams, over half of small businesses still leave critical cybersecurity tasks to untrained staff or the owner. Only a minority have a formal incident response plan created with a cybersecurity professional, and more than a quarter do not carry cyber insurance.

Phishing, ransomware and simple employee mistakes remain the most common dangers, with negligence seen as the biggest internal risk.

Recovery times are improving, with most affected firms able to return to normal operations quickly and very few suffering lasting damage.

However, many still fail to conduct routine security assessments, and outdated technology remains a widespread concern. Some SMBs are increasing cybersecurity budgets, yet a significant share still spend very little or do not know how much is being invested.

More small firms are now turning to managed service providers instead of trying to cope alone.

The findings suggest that preparation, professional support and clearly defined response plans can greatly improve resilience, helping organisations reduce disruption and maintain business continuity when an attack occurs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea fake news law sparks fears for press freedom

A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.

The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.

Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.

Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.

Experts also highlight the lack of strong safeguards in South Korea against malicious litigation compared with the US, where plaintiffs must prove fault by journalists.

The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes relay statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.

Nomani investment scam spreads across social media

Fraudulent investment platform Nomani has surged, spreading from Facebook to YouTube. ESET blocked tens of thousands of malicious links this year, mainly in the Czech Republic, Japan, Slovakia, Spain, and Poland.

The scam utilises AI-generated videos, branded posts, and social media advertisements to lure victims into fake investments that promise high returns. Criminals then request extra fees or sensitive personal data, and often attempt a secondary scam posing as Europol or INTERPOL.

Recent improvements make Nomani’s AI videos more realistic, using trending news or public figures to appear credible. Campaigns run briefly and misuse social media forms and surveys to harvest information while avoiding detection.

Despite the campaign's overall growth, detections fell 37% in the second half of 2025, suggesting that scammers are adapting to more stringent law-enforcement measures. Meta's ad platforms have reportedly earned billions from scam advertising, underlining the global reach of Nomani-style fraud.

AI search services face competition probe in Japan

Japan’s competition authority will probe AI search services from major domestic and international tech firms. The investigation aims to identify potential antitrust violations rather than impose immediate sanctions.

The probe is expected to cover LY Corp., Google, Microsoft and AI providers such as OpenAI and Perplexity AI. Concerns centre on how AI systems present and utilise news content within search results.

Legal action by Japanese news organisations alleges unauthorised use of articles by AI services. Regulators are assessing whether such practices constitute abuse of market dominance.

The inquiry builds on a 2023 review of news distribution contracts that warned against the use of unfair terms for publishers. Similar investigations overseas, including within the EU, have guided the commission’s approach.

South Korea tightens ID checks with facial verification for phone accounts

Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.

Officials said criminals have been using stolen personal details to set up phone numbers that are later used in scams such as voice phishing.

Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will validate users by matching their faces against biometric data stored in the PASS digital identity app.

Such a requirement expands the country’s identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.

The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.

SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.

Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.

Romania investigates large-scale cyber attack on national water body

Authorities in Romania have confirmed a severe ransomware attack on the national water administration ‘Apele Române’, which encrypted around 1,000 IT systems across most regional water basin offices.

Attackers used Microsoft’s BitLocker tool to lock files and then issued a ransom note demanding contact within seven days, although cybersecurity officials continue to reject any negotiation with criminals.

The disruption affected email systems, databases, servers and workstations but not operational technology, meaning hydrotechnical structures and critical water management systems continued to function safely.

Staff coordinated activity by radio and telephone, and flood defence operations remained in normal working order while investigations and recovery progressed.

National cyber agencies, including the National Directorate of Cyber Security and the Romanian Intelligence Service’s cyber centre, are now restoring systems and moving to include water infrastructure within the state cyber protection framework.

The case underlines how ransomware groups increasingly target essential utilities rather than only private companies, making resilience and identity controls a strategic priority.

South Korea plans huge fines for major data breaches

Prime Minister Kim Min-seok has called for punitive fines of up to 10 percent of company sales for repeated and serious data breaches, as public anger grows over large-scale leaks.

The government is seeking swift legislation to impose stronger sanctions on firms that fail to safeguard personal data, reflecting President Lee Jae Myung’s stance that violations require firm penalties instead of lenient warnings.

Kim said corporate responses to recent breaches had fallen far short of public expectations and stressed that companies must take full responsibility for protecting customer information.

Under the proposed framework, affected individuals would receive clearer notifications that include guidance on their rights to seek damages.

The government of South Korea also plans to strengthen investigative powers through coercive fines for noncompliance, while pursuing rapid reforms aimed at preventing further harm.

The tougher line follows a series of major incidents, including a leak at Shinhan Card that affected around 190,000 merchant records and a large-scale breach at Coupang that exposed the data of 33.7 million users.

Officials have described the Coupang breach as a serious social crisis that has eroded public trust.

Authorities have launched an interagency task force to identify responsibility and ensure tighter data protection across South Korea’s digital economy instead of relying on voluntary company action.

Facial recognition trial targets repeat offenders in New Zealand supermarkets

Teenagers account for most of the serious threats reported against supermarket staff across South Island stores, according to a privacy report on Foodstuffs South Island’s facial recognition trial.

The company is testing the technology in three Christchurch supermarkets to identify only adult repeat offenders, rather than minors, even though six out of the ten worst offenders are under eighteen.

The system creates a biometric template of every shopper at the trial stores and deletes it if there is no match against a watchlist. Detections remain stored within the Auror platform for seven years, while personal images are deleted on the same day.
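The match-then-delete flow described above can be sketched roughly as follows. This is an illustrative assumption of the logic, not the actual Vix Vizion or Auror implementation, and exact equality stands in for what would really be a similarity threshold on face embeddings:

```python
# Illustrative sketch only: compare a shopper's template against a
# watchlist; a match creates a retained detection record, while a
# non-match retains nothing about the shopper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    person_id: str
    store: str
    retention_years: int = 7  # detections are kept in the case platform

def process_shopper(template, watchlist, store) -> Optional[Detection]:
    """Compare a shopper's biometric template against the watchlist."""
    for person_id, listed in watchlist.items():
        if template == listed:  # stand-in for a similarity comparison
            return Detection(person_id, store)
    return None  # no match: the template is not retained
```

The privacy-relevant design choice is that only the positive branch creates a durable record; everything else is discarded immediately.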

The technology is supplied by the Australian firm Vix Vizion, in collaboration with Auror, which is already known for its vehicle plate recognition systems.

Foodstuffs argues the trial is justified by rising threatening and violent behaviour towards staff across all age groups.

A previous North Island pilot scanned 226 million faces and generated more than 1700 alerts, leading the Privacy Commissioner of New Zealand to conclude that strong safeguards could reduce privacy intrusion to an acceptable level.
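Taken at face value, those pilot figures imply that alerts were a vanishingly small fraction of scans, which helps explain the Commissioner's conclusion. A rough back-of-envelope calculation, treating "more than 1700" as a lower bound:

```python
# Rough arithmetic on the reported North Island pilot figures.
scans = 226_000_000   # faces scanned during the pilot
alerts = 1_700        # reported "more than 1700"; lower bound
alert_rate = alerts / scans
print(f"{alert_rate:.6%}")  # → 0.000752%
```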

The watchlist only includes adults previously involved in violence or serious threats, and any matches undergo human checks before action is taken.

Foodstuffs continues to provide regular updates to the Office of the Privacy Commissioner as the South Island trial proceeds.

Fake weight loss adverts removed from TikTok

TikTok removed fake adverts for weight loss drugs after a company impersonating UK retailer Boots used AI-generated videos. The clips falsely showed healthcare professionals promoting prescription-only medicines.

Boots said it contacted TikTok after becoming aware of the misleading adverts circulating on the platform. TikTok confirmed the videos were removed for breaching its rules on deceptive and harmful advertising.

BBC reporting found the account briefly reposted the same videos before it was taken down. The account appeared to be based in Hong Kong and directed users to a website selling the drugs.

UK health regulators warned that prescription-only weight loss medicines must only be supplied by registered pharmacies. TikTok stated that it continues to strengthen its detection systems and bans the promotion of controlled substances.

Atlas agent mode fortifies OpenAI’s ChatGPT security

ChatGPT Atlas has introduced an agent mode that allows an AI browser agent to view webpages and perform actions directly. The feature supports everyday workflows using the same context as a human user. Expanded capability also increases security exposure.

Prompt injection has emerged as a key threat to browser-based agents, targeting AI behaviour rather than software flaws. Malicious instructions embedded in content can redirect an agent from the user’s intended action. Successful attacks may trigger unauthorised actions.
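A minimal sketch of the mechanism: an agent that folds untrusted page text into its instruction context can be hijacked by directives embedded in that text, so one first line of defence is to screen page content before it reaches the model. This is illustrative only, not OpenAI's implementation, and the pattern list below is a deliberately naive assumption:

```python
# Illustrative sketch: a naive pattern screen applied to untrusted page
# text before it is placed into a browser agent's context.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system override",
]

def screen_page_text(page_text: str) -> bool:
    """Return True if the page text looks like a prompt-injection attempt."""
    lowered = page_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_agent_context(user_goal: str, page_text: str) -> str:
    """Assemble the agent's context, withholding suspicious page content."""
    if screen_page_text(page_text):
        page_text = "[page content withheld: possible prompt injection]"
    return f"User goal: {user_goal}\nPage content: {page_text}"
```

Real defences go well beyond keyword matching — hence the adversarially trained models discussed next — but the sketch shows where the trust boundary sits: between the user's instructions and content the agent merely reads.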

To address the risk, OpenAI has deployed a security update to Atlas. The update includes an adversarially trained model and strengthened safeguards. It followed internal automated red teaming.

Automated red teaming uses reinforcement learning to train AI attackers that search for complex exploits. Simulations test how agents respond to injected prompts. Findings are used to harden models and system-level defences.
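The idea can be illustrated with a toy search loop standing in for the RL-trained attackers described above. The defence and the mutation operators are assumptions for demonstration: candidate injections are mutated, and any variant that slips past a brittle phrase filter is kept as a discovered exploit to harden against.

```python
# Toy red-teaming loop (not OpenAI's system): search for injection
# variants that a deliberately brittle defence fails to block.
import random

def toy_defence(prompt: str) -> bool:
    """Block a prompt only if it contains one exact phrase (brittle on purpose)."""
    return "ignore previous instructions" in prompt.lower()

def mutate(prompt: str, rng: random.Random) -> str:
    """Cheap mutation: insert filler between words or upper-case a word."""
    words = prompt.split()
    i = rng.randrange(len(words))
    if rng.random() < 0.5:
        words.insert(i, "please")    # filler can split the blocked phrase
    else:
        words[i] = words[i].upper()  # casing alone never beats the lower() check
    return " ".join(words)

def red_team(seed_attack: str, rounds: int = 200) -> list:
    """Collect attack variants the defence fails to block."""
    rng = random.Random(0)
    bypasses = []
    for _ in range(rounds):
        candidate = mutate(seed_attack, rng)
        if not toy_defence(candidate):
            bypasses.append(candidate)
    return bypasses
```

Even this crude search quickly finds wordings the filter misses, which is the point of automated red teaming: surviving attacks become training signal for the defence.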

Prompt injection is expected to remain a long-term security challenge for AI agents. Continued investment in testing, training, and rapid mitigation aims to reduce real-world risk. The goal is to achieve reliable and secure AI assistance.
