SK Telecom unveils KRW 700B cybersecurity upgrade

SK Telecom has announced a major cybersecurity initiative worth KRW 700 billion, designed to restore trust and enhance information security after a recent incident.

The company’s new programme, called the Accountability and Commitment Program, includes four elements to protect customers and reinforce transparency.

A central part of the initiative is the Information Protection Innovation Plan, which involves a five-year investment to build a world-class cybersecurity system.

The project will follow the US National Institute of Standards and Technology’s Cybersecurity Framework and aims to position SK Telecom as Korea’s leader in information security by 2028.

To further support affected customers, the company is upgrading its Customer Assurance Package and introducing a Customer Appreciation Package to thank users for their patience and loyalty.

A subscription cancellation fee waiver has also been included to reduce friction for those reconsidering their service.

SK Telecom says it will maintain its commitment to customer safety and service reliability, pledging to fully address all concerns and enhance security and service quality across the board.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only to later reveal the spokesperson himself was fake.

The band denies any connection to the individual, stating on Spotify that the account impersonating them on X is also false.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, although Spotify has remained silent.

While CEO Daniel Ek has said AI music isn’t banned from the platform, he expressed concerns about mimicking real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

LFR tech helps catch dangerous offenders, but Liberty urges legal safeguards

Live facial recognition (LFR) technology used by the Metropolitan Police has led to more than 1,000 arrests, including dangerous offenders wanted for serious crimes, such as rape, robbery and child protection breaches.

Among those arrested was David Cheneler, 73, a registered sex offender spotted by LFR cameras in Camberwell, south London. He was found with a young girl and later jailed for two years for breaching a sexual harm prevention order.

Another arrest included Adenola Akindutire, linked to a machete robbery in Hayes that left a man with life-changing injuries. Stopped during an LFR operation in Stratford, he was carrying a false passport and admitted to several violent offences.

LFR also helped identify Darren Dubarry, 50, who was wanted for theft. He was stopped with stolen designer goods after passing an LFR-equipped van in east London.

The Met says the technology has helped arrest over 100 people linked to serious violence against women and girls, including domestic abuse, stalking, and strangulation.

Lindsey Chiswick, who leads the Met’s LFR work, said the system is helping deliver justice more efficiently, calling it a ‘powerful tool’ that is removing dangerous offenders from the streets of London.

While police say biometric data is not retained for those not flagged, rights groups remain concerned. Liberty says nearly 1.9 million faces were scanned between January 2022 and March 2024, and is calling for new laws to govern police use of facial recognition.

Charlie Whelton of Liberty said the tech risks infringing rights and must be regulated. ‘We shouldn’t leave police forces to come up with frameworks on their own,’ he warned, urging Parliament to legislate before further deployment.

xAI gets Memphis approval to run 15 gas turbines

xAI, Elon Musk’s AI company, has secured permits to operate 15 natural gas turbines at its Memphis data centre, despite facing legal threats over alleged Clean Air Act violations.

The Shelby County Health Department approved the generators, which can produce up to 247 megawatts, provided specific emissions controls are in place.

Environmental lawyers say xAI had already been running as many as 35 generators without permits. The Southern Environmental Law Center (SELC), acting on behalf of the NAACP, has accused the company of serious pollution and is preparing to sue.

Even under the new permit, xAI is allowed to emit substantial pollutants annually, including nearly 10 tons of formaldehyde — a known carcinogen.

Community concerns about the health impact remain strong. A local group pledged $250,000 for an independent air quality study, and although the City of Memphis carried out its own tests, the SELC questioned their validity.

The tests missed ozone levels and were reportedly conducted in favourable wind conditions, with equipment placed too close to buildings.

Officials previously argued that the turbines were exempt from regulation due to their ‘mobile’ status, a claim the SELC disputed as legally flawed. Meanwhile, xAI has recently raised $10 billion, split between debt and equity, highlighting its rapid expansion even as regulatory scrutiny grows.

India’s top darknet dealer laundered crypto with Monero for two years

India’s Narcotics Control Bureau (NCB) has arrested a 35-year-old engineer from Kerala accused of single-handedly running the country’s largest darknet drug network. The suspect, known online as ‘Ketamelon’, reportedly ran a Level 4 darknet drug operation for two years without his family knowing.

Authorities seized more than 1,100 LSD blots, over 130 grams of ketamine, and cryptocurrency assets valued at over $82,000 during the four-month investigation. The drugs were reportedly sourced from international suppliers, including a UK-based vendor believed to be the world’s largest LSD supplier.

Shipments reached cities including Bengaluru, Chennai, and Delhi, as well as the state of Himachal Pradesh.

The suspect laundered proceeds using Monero, a privacy-focused cryptocurrency designed to hide transaction details, making it popular among darknet criminals.

While privacy coins like Monero offer far stronger anonymity than transparent chains, experts warn they are not entirely untraceable: every transaction is still permanently recorded on the blockchain, and forensic analysis can sometimes pierce the obfuscation.
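The traceability point can be sketched with a toy example. On a transparent ledger, funds can be followed from output to output; the addresses and transactions below are entirely invented for illustration, and Monero specifically obscures these fields with ring signatures and stealth addresses.

```python
# Toy transparent ledger (invented data, not a real blockchain format).
ledger = {
    "tx1": {"from": "suspect_wallet", "to": "mixer_a"},
    "tx2": {"from": "mixer_a", "to": "exchange_x"},
    "tx3": {"from": "exchange_x", "to": "cashout"},
}

def trace(start: str, ledger: dict) -> list:
    """Follow outgoing transactions from a starting address."""
    path, current = [], start
    while True:
        nxt = next(
            (tx["to"] for tx in ledger.values() if tx["from"] == current),
            None,
        )
        if nxt is None:
            return path
        path.append(nxt)
        current = nxt

print(trace("suspect_wallet", ledger))  # ['mixer_a', 'exchange_x', 'cashout']
```

Because every hop is permanently recorded, investigators can walk the graph long after the fact; privacy coins make each hop ambiguous rather than erasing it.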

The operation comes amid wider global efforts targeting cybercrime and crypto-facilitated illegal markets.

Recently, the US Treasury sanctioned a Russian hosting provider linked to ransomware and darknet drug sales, highlighting increasing international pressure on digital criminal networks.

Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C. have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.

UK plans new laws to tackle undersea cable sabotage

The UK government’s evolving defence and security policies aim to close legal gaps exposed by modern threats such as cyberattacks and sabotage of undersea cables. As set out in the recent Strategic Defence Review, ministers plan to introduce a new defence readiness bill to better protect critical subsea infrastructure and prepare for hostile acts that fall outside traditional definitions of war.

The government is also considering revising the outdated Submarine Telegraph Act of 1885, whose penalties, last raised in 1982 to £1,000, are now recognised as inadequate. Instead of merely increasing fines, officials from the Ministry of Defence and the Department for Science, Innovation and Technology intend to draft comprehensive legislation that balances civil and military needs, clarifies how to prosecute sabotage, and updates the UK’s approach to national defence in the digital age.

These policy initiatives reflect growing concern about ‘grey zone’ threats—deliberate acts of sabotage or cyber aggression that stop short of open conflict yet pose serious national security risks. Recent suspected sabotage incidents, including damage to subsea cables connecting Sweden, Latvia, Finland, and Estonia, have highlighted how vulnerable undersea infrastructure remains.

Investigations have linked several of these operations to Russian and Chinese interests, emphasising the urgency of modernising UK law. By updating its legislative framework, the UK government aims to ensure it can respond effectively to attacks that blur the line between peace and conflict, safeguarding both national interests and critical international data flows.

Beware of fake deals as Prime Day approaches

A surge in online scams is expected ahead of Amazon’s Prime Day, which runs from 8 to 11 July, as fraudsters use increasingly sophisticated tactics. Advice Direct Scotland is issuing a warning to shoppers across Scotland: AI-enhanced phishing emails, bogus renewal notices, and fake refund offers are on the rise.

In one common ruse, scammers impersonate Amazon in messages stating your Prime membership has expired or that your account needs urgent verification. Others go further, claiming your Amazon account has been hacked and demanding remote access to your device, something the real company never does. Victims in Scotland reportedly lost around £860,000 last year to similar crime, as scam technology becomes more convincing.

Advice Direct Scotland reminds shoppers not to rush and to trust their instincts. Genuine Amazon communications will never ask for remote access, passwords, or financial information over email or phone. If in doubt, hang up and check your account via official channels, or reach out to the charity’s ScamWatch hotline.
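The advice to verify rather than trust a message can be sketched as a simple sender-domain check. This is a hypothetical helper, not a real Amazon or ScamWatch tool, and a From address alone is not proof of legitimacy, since headers can be spoofed; proper verification relies on authentication standards such as SPF, DKIM, and DMARC.

```python
from email.utils import parseaddr

def is_plausible_amazon_sender(from_header: str) -> bool:
    """Rough first-pass check of an email's From domain (illustrative only).

    A passing result does NOT prove the mail is genuine, because the
    From header can be forged; a failing result is a strong red flag.
    """
    _, addr = parseaddr(from_header)
    if "@" not in addr:
        return False
    domain = addr.rsplit("@", 1)[-1].lower()
    return domain == "amazon.com" or domain.endswith(".amazon.com")

print(is_plausible_amazon_sender("Amazon <no-reply@amazon.com>"))       # True
print(is_plausible_amazon_sender("Amazon <support@amaz0n-deals.net>"))  # False
```

Look-alike domains such as `amaz0n-deals.net` fail the check, which mirrors the article's advice: when in doubt, ignore the message and log in via official channels instead.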

Those seeking guidance can contact Advice Direct Scotland via phone or online chat, or report suspected scams using the free ScamWatch tool. With Prime Day bargains tempting many, staying vigilant could mean avoiding a costly mistake.

Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365 and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.
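The reason passwordless systems resist phishing can be sketched with a minimal challenge-response example. Real passkeys (FIDO2/WebAuthn) use public-key signatures and origin binding; a shared secret is used here only to keep the sketch dependency-free. The key point is that no reusable password ever crosses the wire, so a cloned login page has nothing worth capturing.

```python
import hashlib
import hmac
import os

# Toy challenge-response sketch (NOT a real passkey implementation).
device_secret = os.urandom(32)   # provisioned once; never typed by the user

def respond(challenge: bytes) -> bytes:
    # The device proves possession of the secret by MAC-ing the
    # server's one-time challenge; the secret itself is never sent.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(challenge), response)

challenge = os.urandom(32)       # fresh per login attempt
response = respond(challenge)

assert verify(challenge, response)           # legitimate login succeeds
assert not verify(os.urandom(32), response)  # replaying an old response fails
```

Even if a phishing site tricks a user into completing a login, the captured response is bound to one stale challenge and is useless elsewhere, which is why Okta and others see passwordless flows as more robust than phishing-detection alone.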

Cyberattacks drain millions from hospitality sector

The booming hospitality sector handles sensitive guest information daily, from passports to payment details, making it a prime target for cybercriminals. Recent figures reveal the average cost of a data breach in hospitality rose to $3.86 million in 2024, with over 14,000 critical vulnerabilities detected in hotel networks worldwide.

Complex systems connecting guests, staff, vendors, and devices like smart locks multiply entry points for attackers. High staff turnover and frequent reliance on temporary workers add to the sector’s cybersecurity challenges.

New employees are often more susceptible to phishing and social engineering attacks, as demonstrated by costly breaches such as the 2023 MGM Resorts incident. Artificial intelligence can bolster defences, but it is no cure-all and must be paired with staff training and clear policies.

Recent attacks on major hotel brands have exposed millions of customer records, intensifying pressure on hospitality firms to meet privacy regulations like GDPR. Maintaining robust cybersecurity requires continuous updates to policies, vendor checks, and committed leadership support.

Hotels lagging in these areas risk severe financial and reputational damage in an increasingly hostile cyber landscape.