IMF calls for stronger AI regulation in global securities markets

Regulators worldwide are being urged to adopt stronger oversight frameworks for AI in capital markets after an IMF technical note warned that rapid AI adoption could reshape securities trading while increasing systemic risk.

AI brings major efficiency gains to asset management and high-frequency trading by replacing slower, human-led processes, yet opacity, market volatility, cyber threats and model concentration remain significant concerns.

The IMF warns that AI could create powerful data oligopolies where only a few firms can train the strongest models, while autonomous trading agents may unintentionally collude by widening spreads without explicit coordination.

Retail investors also face rising exposure to AI washing, where financial firms exaggerate or misrepresent AI capability, making transparency, accountability and human-in-the-loop review essential safeguards.

Supervisory authorities are encouraged to scale their own AI capacity through SupTech tools for automated surveillance and social-media sentiment monitoring.

The note highlights India as a key case study, given the dominance of algorithmic trading and SEBI’s early reporting requirements for AI and machine learning. The IMF also points to the National Stock Exchange’s use of AI in fraud detection as an emerging-market model for resilient monitoring infrastructure.

The report underlines the need for regulators to prepare for AI-driven market shocks, strengthen governance obligations on regulated entities and build specialist teams capable of understanding model risk instead of reacting only after misconduct or misinformation harms investors.

AI chatbots exploited to create nonconsensual bikini deepfakes

Users of popular AI chatbots are generating bikini deepfakes by manipulating photos of fully clothed women, often without consent. Online discussions show how generative AI tools can be misused to create sexually suggestive deepfakes from ordinary images, raising concerns about image-based abuse.

A now-deleted Reddit thread shared prompts for using Google’s Gemini to alter clothing in photographs. One post asked for a woman’s traditional dress to be changed to a bikini. Reddit removed the content and later banned the subreddit over deepfake-related harassment.

Researchers and digital rights advocates warn that nonconsensual deepfakes remain a persistent form of online harassment. Millions of users have visited AI-powered websites designed to undress people in photos. The trend reflects growing harm enabled by increasingly realistic image generation tools.

Most mainstream AI chatbots prohibit the creation of explicit images and apply safeguards to prevent abuse. However, recent advances in image-editing models have made it easier for users to bypass guardrails using simple prompts, according to limited testing and expert assessments.

Technology companies say their policies ban altering a person’s likeness without consent, with penalties including account suspensions. Legal experts argue that deepfakes involving sexualised imagery represent a core risk of generative AI and that accountability must extend to both users and platforms.

Small businesses battle rising cyber attacks in the US

Many small businesses in the US are facing a sharp rise in cyber attacks, yet large numbers still try to manage the risk on their own.

A recent survey by Guardz found that more than four in ten SMBs have already experienced a cyber incident, while most owners believe the overall threat level is continuing to increase.

Rather than relying on specialist teams, over half of small businesses still leave critical cybersecurity tasks to untrained staff or the owner. Only a minority have a formal incident response plan created with a cybersecurity professional, and more than a quarter do not carry cyber insurance.

Phishing, ransomware and simple employee mistakes remain the most common dangers, with negligence seen as the biggest internal risk.

Recovery times are improving, with most affected firms able to return to normal operations quickly and very few suffering lasting damage.

However, many still fail to conduct routine security assessments, and outdated technology remains a widespread concern. Some SMBs are increasing cybersecurity budgets, yet a significant share still spend very little or do not know how much is being invested.

More small firms are now turning to managed service providers instead of trying to cope alone.

The findings suggest that preparation, professional support and clearly defined response plans can greatly improve resilience, helping organisations reduce disruption and maintain business continuity when an attack occurs.

Nomani investment scam spreads across social media

Fraudulent investment platform Nomani has surged, spreading from Facebook to YouTube. ESET blocked tens of thousands of malicious links this year, mainly in the Czech Republic, Japan, Slovakia, Spain, and Poland.

The scam utilises AI-generated videos, branded posts, and social media advertisements to lure victims into fake investments that promise high returns. Criminals then request extra fees or sensitive personal data, and often attempt a secondary scam posing as Europol or INTERPOL.

Recent improvements make Nomani’s AI videos more realistic, using trending news or public figures to appear credible. Campaigns run briefly and misuse social media forms and surveys to harvest information while avoiding detection.

Despite the campaign’s overall growth, detections fell 37% in the second half of 2025, suggesting that scammers are adapting to more stringent law enforcement measures. Meta’s ad platforms have reportedly earned billions from scam advertising, underscoring the scale at which fraud such as Nomani can reach users worldwide.

Aflac confirms large-scale data breach following cyber incident

US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information; however, the company initially did not specify the number of individuals affected.

In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.

A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.

Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.

Aflac said it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.

South Korea tightens ID checks with facial verification for phone accounts

Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.

Officials said criminals have been using stolen personal details to set up phone numbers that are later used for scams such as voice phishing.

Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will validate users by matching their faces against biometric data stored in the PASS digital identity app.

The requirement expands the country’s existing identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.

The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.

SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.

Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.

Romania investigates large-scale cyber attack on national water authority

Authorities in Romania have confirmed a severe ransomware attack on the national water administration ‘Apele Române’, which encrypted around 1,000 IT systems across most regional water basin offices.

Attackers abused Microsoft’s BitLocker encryption tool to lock the affected systems and then issued a ransom note demanding contact within seven days, although cybersecurity officials continue to reject any negotiation with the criminals.

The disruption affected email systems, databases, servers and workstations but not operational technology, meaning hydrotechnical structures and critical water management systems continued to function safely.

Staff coordinated activity by radio and telephone, and flood defence operations remained in normal working order while investigations and recovery progressed.

National cyber agencies, including the National Directorate of Cyber Security and the Romanian Intelligence Service’s cyber centre, are now restoring systems and moving to include water infrastructure within the state cyber protection framework.

The case underlines how ransomware groups increasingly target essential utilities rather than only private companies, making resilience and identity controls a strategic priority.

Facial recognition trial targets repeat offenders in New Zealand supermarkets

Teenagers account for most of the serious threats reported against supermarket staff across South Island stores, according to a privacy report on Foodstuffs South Island’s facial recognition trial.

The company is testing the technology in three Christchurch supermarkets to identify only adult repeat offenders, rather than minors, even though six out of the ten worst offenders are under eighteen.

The system creates a biometric template of every shopper at the trial stores and deletes it if there is no match with the watchlist. Detections remain stored within the Auror platform for seven years, while personal images are deleted on the same day.
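The report essentially describes a compare-then-discard data flow. The sketch below is a rough illustration only, written in Python with hypothetical names (process_shopper, extract_template, match_score and the data classes are assumptions, not the Vix Vizion or Auror implementation). It shows how the stated retention rules could be expressed: a template that does not match the watchlist is discarded immediately, while a match produces a detection record that is retained and flagged for human review before any action is taken.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Optional, Sequence

# Retention period as described in the privacy report:
# detection records are kept for seven years.
DETECTION_RETENTION = timedelta(days=7 * 365)

@dataclass
class WatchlistEntry:
    entry_id: str
    template: bytes  # biometric template of an adult repeat offender

@dataclass
class Detection:
    matched_entry_id: str
    created_at: datetime
    expires_at: datetime
    needs_human_review: bool = True  # matches are checked by staff before any action

def process_shopper(
    image: bytes,
    watchlist: Sequence[WatchlistEntry],
    extract_template: Callable[[bytes], bytes],      # hypothetical feature extractor
    match_score: Callable[[bytes, bytes], float],    # hypothetical similarity function
    threshold: float = 0.9,
) -> Optional[Detection]:
    """Build a biometric template for a shopper, compare it against the
    watchlist, and discard it immediately when there is no match."""
    template = extract_template(image)

    best_entry, best_score = None, 0.0
    for entry in watchlist:
        score = match_score(template, entry.template)
        if score > best_score:
            best_entry, best_score = entry, score

    if best_entry is None or best_score < threshold:
        # No match: neither the template nor the image is retained.
        return None

    now = datetime.utcnow()
    return Detection(
        matched_entry_id=best_entry.entry_id,
        created_at=now,
        expires_at=now + DETECTION_RETENTION,  # record kept; source image deleted same day
    )
```

In the reported setup, only the detection record persists for the seven-year period; unmatched templates are not retained at all, and personal images are deleted on the day they are captured.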

The technology is supplied by the Australian firm Vix Vizion, in collaboration with Auror, which is already known for its vehicle plate recognition systems.

Foodstuffs argues the trial is justified by rising threatening and violent behaviour towards staff across all age groups.

A previous North Island pilot scanned 226 million faces and generated more than 1,700 alerts, leading the Privacy Commissioner of New Zealand to conclude that strong safeguards could reduce privacy intrusion to an acceptable level.

The watchlist only includes adults previously involved in violence or serious threats, and any matches undergo human checks before action is taken.

Foodstuffs continues to provide regular updates to the Office of the Privacy Commissioner as the South Island trial proceeds.

Fake weight loss adverts removed from TikTok

TikTok removed fake adverts for weight loss drugs after a company impersonating UK retailer Boots used AI-generated videos. The clips falsely showed healthcare professionals promoting prescription-only medicines.

Boots said it contacted TikTok after becoming aware of the misleading adverts circulating on the platform. TikTok confirmed the videos were removed for breaching its rules on deceptive and harmful advertising.

BBC reporting found the account was briefly able to repost the same videos before being taken down. The account appeared to be based in Hong Kong and directed users to a website selling the drugs.

UK health regulators warned that prescription-only weight loss medicines must only be supplied by registered pharmacies. TikTok stated that it continues to strengthen its detection systems and bans the promotion of controlled substances.

University of Phoenix breach exposes millions in major Oracle attack

Almost 3.5 million students, staff and suppliers linked to the University of Phoenix have been affected by a data breach tied to a sophisticated cyber extortion campaign. The incident followed unauthorised access to internal systems, exposing highly sensitive personal and financial information.

Investigations indicate attackers exploited a zero-day vulnerability in Oracle E-Business Suite, a widely used enterprise financial application. The breach surfaced publicly after the Clop ransomware group listed the university on its leak site, prompting internal reviews and regulatory disclosures.

Compromised data includes names, contact details, dates of birth, Social Security numbers and banking information. University officials have confirmed that affected individuals are being notified, while filings with US regulators outline the scale and nature of the incident.

The attack forms part of a broader wave of intrusions targeting American universities and organisations using Oracle platforms. As authorities offer rewards for intelligence on Clop’s operations, the breach highlights growing risks facing educational institutions operating complex digital infrastructures.
