Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

Australia's 2023–2030 Cyber Security Strategy aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations exposed as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by Estonia's cyber defence unit, the network would mobilise unconventional talent, including retirees, hobbyist hackers, and students, to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish a reputation with a single photo scraped from social media. Dismissing such content as ‘not real’ fails to address the damage its very existence causes.

Hong Kong’s legal system struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate AI. Victims can suffer harm before distribution is even proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Court filing details Musk’s outreach to Zuckerberg over OpenAI bid

Elon Musk attempted to bring Meta chief executive Mark Zuckerberg into his consortium’s $97.4 billion bid for OpenAI earlier this year, OpenAI disclosed in a court filing.

According to sworn interrogatory responses, OpenAI said Musk had discussed possible financing arrangements with Zuckerberg as part of the bid. Musk’s AI startup xAI, a competitor to OpenAI, did not respond to requests for comment.

In the filing, OpenAI asked a federal judge to order Meta to provide documents related to any bid for OpenAI, including internal communications about restructuring or recapitalisation. The firm argued these records could clarify motivations behind the bid.

Meta countered that such documents were irrelevant and suggested OpenAI seek them directly from Musk or xAI. Separately, a US judge ruled that Musk must face OpenAI’s claims that he attempted to harm the company through public remarks and what it described as a sham takeover attempt.

The legal dispute follows Musk’s lawsuit against OpenAI and Sam Altman over its for-profit transition, with OpenAI filing a countersuit in April. A jury trial is scheduled for spring 2026.

Google fined $36M in Australia over telco search deals

Google has agreed to pay a fine of A$55 million (US$35.8 million) in Australia after regulators found the tech giant restricted competition by striking deals with the country’s two largest telecommunications providers. The arrangements gave Google’s search engine a dominant position on Android phones while sidelining rival platforms.

The Australian Competition and Consumer Commission (ACCC) revealed that between late 2019 and early 2021, Google partnered with Telstra and Optus, offering them a share of advertising revenue in exchange for pre-installing its search app. Regulators said the practice curtailed consumer choice and prevented other search engines from gaining visibility. Google admitted the deals harmed competition and agreed to abandon similar agreements.

The fine marks another setback for the Alphabet-owned company in Australia. Just last week, a court ruled largely against Google in a high-profile case brought by Epic Games, which accused both Google and Apple of blocking alternative app stores on their operating systems. In a further blow, Google’s YouTube was recently swept into a nationwide ban on social media access for users under 16, reversing an earlier exemption.

ACCC Chair Gina Cass-Gottlieb said the outcome was essential to ensure Australians have ‘greater search choice in the future’ and that rival providers gain a fair chance to reach consumers. While the fine still requires court approval, Google and the regulator have submitted a joint recommendation to avoid drawn-out litigation.

In response, Google emphasised it was satisfied with the resolution, noting that the contested provisions are no longer part of its contracts. The company said it remains committed to offering Android manufacturers flexibility in pre-loading apps while maintaining features that allow them to compete with Apple and keep device prices affordable. Telstra and Optus confirmed they stopped signing such agreements in 2024, while Singtel, Optus’ parent company, has yet to comment.

North Korean hackers switch to ransomware in major cyber campaign

A North Korean hacking unit has launched a ransomware campaign targeting South Korea and other countries, marking a shift from pure espionage. Security firm S2W identified the subgroup, dubbed ‘ChinopuNK’, as part of the ScarCruft threat actor.

The operation began in July, using phishing emails and a malicious shortcut file inside a RAR archive to deploy multiple malware types, including a keylogger, an infostealer, ransomware, and a backdoor.
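A simple triage check in this spirit is to flag archives that contain Windows shortcut files, since .lnk lures are rarely legitimate email attachments. The sketch below is illustrative only, assuming the third-party rarfile package and a hypothetical quarantined attachment; it is not the campaign's actual tooling or S2W's detection method:

```python
import rarfile  # third-party package: pip install rarfile

def suspicious_shortcuts(archive_path: str) -> list[str]:
    """Return any .lnk entries found inside a RAR archive."""
    archive = rarfile.RarFile(archive_path)
    return [name for name in archive.namelist()
            if name.lower().endswith(".lnk")]

# Hypothetical path: inspect a quarantined attachment before anyone opens it.
hits = suspicious_shortcuts("quarantine/attachment.rar")
if hits:
    print("Shortcut files found; treat as suspicious:", hits)
```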

ScarCruft, active since 2016, has targeted defectors, journalists, and government agencies. Researchers say the move to ransomware indicates either a new revenue stream or a more disruptive mission.

The campaign has expanded beyond South Korea to Japan, Vietnam, Russia, Nepal, and the Middle East. Analysts note the group’s technical sophistication has improved in recent years.

Security experts advise monitoring URLs, file hashes, and behaviour-based indicators, and continuously tracking ScarCruft’s tools and infrastructure, to detect related campaigns early.
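As a minimal illustration of hash-based indicator matching, the sketch below compares SHA-256 digests of local files against a blocklist. The hash and directory name are placeholders, not published ScarCruft indicators; in practice the list would come from a threat-intelligence feed or vendor report:

```python
import hashlib
from pathlib import Path

# Placeholder entry; replace with real indicator hashes from a threat feed.
KNOWN_BAD_SHA256 = {
    "0" * 64,
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> None:
    """Flag any file whose hash matches a known indicator."""
    for path in Path(directory).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"MATCH: {path}")

if __name__ == "__main__":
    scan("downloads")  # hypothetical directory to sweep
```

Hash matching only catches known samples; the behaviour-based indicators mentioned above require endpoint telemetry rather than a static list.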

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfers, deletion, takedown procedures, and transparency reporting.

Unlike the guidelines, these policy updates will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

Age checks slash visits to top UK adult websites

Adult site traffic in the UK has fallen dramatically since new age verification rules came into force on 25 July under the Online Safety Act.

Figures from analytics firm Similarweb show Pornhub lost more than one million visitors in just two weeks, with traffic falling by 47%. XVideos saw a similar drop, while OnlyFans traffic fell by more than 10%.

The rules require adult websites to make it harder for under-18s to access explicit material, leading some users to turn to smaller and less regulated sites instead of compliant platforms. Pornhub said the trend mirrored patterns seen in other countries with similar laws.

The clampdown has also triggered a surge in virtual private network (VPN) downloads in the UK, as the tools can hide a user’s location and help bypass restrictions.

Ofcom estimates that 14 million people in the UK watch pornography and has proposed age checks using credit cards, photo ID, or AI analysis of selfies.

Critics argue that instead of improving safety, the measures may drive people towards more extreme or illicit material on harder-to-monitor parts of the internet, including the dark web.

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots such as ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, and experts warn they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust that was later exploited to breach privacy. Even direct requests yielded sensitive details, despite participants’ discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, Google faced criticism after private ChatGPT chats appeared in search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.
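A minimal sketch of the kind of in-chat alert the researchers propose might flag likely personal details before a message is sent. The patterns and categories below are illustrative assumptions, not the study’s design; a production system would need far more robust detection:

```python
import re

# Illustrative patterns for common personal details; real systems would
# likely use named-entity recognition rather than simple regexes.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
    "age": re.compile(r"\bI(?:'m| am)\s+\d{1,2}\b", re.IGNORECASE),
}

def warn_before_sending(message: str) -> list[str]:
    """Return the categories of personal data detected in a draft message."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(message)]

draft = "I'm 34 and you can reach me at jane.doe@example.com"
if found := warn_before_sending(draft):
    print(f"Warning: this message appears to contain: {', '.join(found)}.")
```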

Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround but are often blocked. Over the summer, further limits included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week, with complaints citing dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular messaging platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–2020) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.
