Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said uncannily mimicked her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nexon investigates AI-generated TikTok ads for The First Descendant

Nexon launched an investigation after players spotted several suspicious adverts for The First Descendant on TikTok that appeared to have been generated by AI.

One advertisement allegedly used a content creator’s likeness without permission, sparking concerns about the misuse of digital identities.

The company issued a statement acknowledging ‘irregularities’ in its TikTok Creative Challenge, a campaign that lets creators voluntarily submit content for advertising.

While Nexon confirmed that all videos had been verified through TikTok’s system, it admitted that some submissions may have been produced in inappropriate circumstances.

Nexon apologised for the delay in informing players, saying the review took longer than expected. It confirmed that a joint investigation with TikTok is underway to determine what happened and promised updates once the process is complete.

The developer has not yet addressed the allegation from creator DanieltheDemon, who claims his likeness was used without consent.

The controversy has added to ongoing debates about AI’s role in advertising and the protection of creators’ rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today’s pollution, which sometimes rivals the carbon footprints of small nations.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk threatens legal action against Apple over AI app rankings

Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.

Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.

Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.

The dispute highlights growing tensions as AI companies compete for prominence on major platforms.

Apple and Musk’s xAI have not yet responded to requests for comment.

The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fintiv accuses Apple of stealing its trade secrets

Texas-based mobile financial services company Fintiv has filed a legal complaint against Apple, alleging that it stole trade secrets from Fintiv’s predecessor, CorFire, to develop Apple Pay.

The complaint claims that Apple employees held several meetings with CorFire to discuss the implementation of CorFire’s mobile wallet solutions and that CorFire had uploaded proprietary information to a shared site maintained by Apple.

According to the lawsuit, the two companies signed a non-disclosure agreement (NDA), giving Apple access to CorFire’s confidential information, which Apple allegedly misappropriated after abandoning plans for a partnership.

According to IPWatchdog, Fintiv has been involved in ongoing litigation over its mobile wallet patents. Recently, it lost two key appeals: one against PayPal, which upheld the dismissal of its patent infringement claims, and another against Apple, in which the court confirmed specific patent claims were invalid.

However, in May, Fintiv secured a partial victory when the Federal Circuit reversed a lower court’s ruling that Apple did not infringe one of its patents (US Patent No. 8,843,125), allowing that part of the case to proceed. Apple had not commented publicly as of the article’s publication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Riverlane deploys Deltaflow 2 QEC in first UK commercial quantum integration

Riverlane has deployed its Deltaflow 2 quantum error correction (QEC) technology in a UK commercial quantum setting for the first time. The system introduces streaming quantum memory, enabling real-time error correction fast enough to preserve data across thousands of operations.
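
Quantum error correction is easiest to picture through its simplest classical analogue. The toy Python sketch below is an illustration only, not Riverlane’s decoder (which operates on quantum states with far more sophisticated codes): it stores one logical bit as three physical copies and repairs it by majority vote every cycle, mirroring how repeated correction rounds preserve data across thousands of operations.

```python
import random

# Toy classical analogue of quantum error correction: a three-bit
# repetition code. One logical bit is stored as three physical copies;
# noise flips copies at random; a majority vote repairs the damage each
# cycle, so the logical value is lost only if two copies flip in the
# same round.

def apply_noise(word: list[int], p: float) -> list[int]:
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in word]

def correct(word: list[int]) -> list[int]:
    """Majority vote restores the codeword after at most one flip."""
    majority = 1 if sum(word) >= 2 else 0
    return [majority] * 3

logical = 1
word = [logical] * 3  # encode: one logical bit, three physical copies
failures = 0
for _ in range(10_000):
    word = correct(apply_noise(word, p=0.01))
    if word[0] != logical:
        failures += 1
        word = [logical] * 3  # re-prepare after a logical error
# With a 1% per-bit flip rate, a logical error needs two simultaneous
# flips (probability ~3p^2), so only a handful of 10,000 cycles fail.
print(f"logical errors: {failures}/10000")
```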

Deltaflow 2 combines a custom QEC chip with FPGA hardware and Riverlane’s software stack, supporting superconducting, spin, trapped-ion, and neutral-atom qubit platforms. It has been integrated with high-performance classical systems and a digital twin for noise simulation and monitoring.

Control hardware from Qblox delivers high-fidelity readout and ultra-low-latency links to enable real-time QEC. The deployment will validate error correction routines and benchmark system performance, forming the basis for future integration with OQC’s superconducting qubits.

The project is part of the UK Government-funded DECIDE programme, which aims to strengthen national capability in quantum error correction. Riverlane and OQC plan to demonstrate live QEC during quantum operations, supporting the creation of logical qubits for scalable systems.

Riverlane is also partnering with Infleqtion, Rigetti Computing, and others through the UK’s National Quantum Computing Centre. The company says growing industry demand reflects QEC’s shift from research to deployment, positioning Deltaflow 2 as a commercially viable, universally compatible tool.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Autonomous AI coding tool Jules now publicly available from Google

Google has released its autonomous coding agent Jules for free public use, offering AI-powered code generation, debugging, and optimisation. Built on the Gemini 2.5 Pro model, the tool completed a successful beta phase before entering general availability with both free and paid plans.

Jules supports a range of users, from developers to non-technical staff, automating tasks like building features or integrating APIs. The free version allows 15 tasks per day, while the Pro tier significantly raises the limits, providing access to more powerful tools.

Beta testing began in May 2025 and saw Jules process hundreds of thousands of tasks. Its new interface now includes visual explanations and bug fixes, refining usability. Integrated with GitHub and Gemini CLI, Jules can suggest optimisations, write tests, and even provide audio summaries.

Google positions Jules as a step beyond traditional code assistants by enabling autonomy. However, researchers warn that oversight remains essential to avoid misuse, especially in sensitive systems where AI errors could be costly.

While its free tier may appeal to startups and hobbyists, concerns over code originality and job displacement persist. Nonetheless, Jules could reshape development workflows and lower barriers to coding for a much broader user base.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

News Corp CEO warns AI could ‘vandalise’ creativity and IP rights

News Corp chief executive Robert Thomson has warned that AI could damage creativity by undermining intellectual property rights.

At the company’s full-year results briefing in New York, he described the AI era as a historic turning point. He called for stronger protections to preserve America’s ‘comparative advantage in creativity’.

Thomson said allowing AI systems to consume and profit from copyrighted works without permission was akin to ‘vandalising virtuosity’.

He cited Donald Trump’s The Art of the Deal, published by News Corp’s book division, questioning whether it should be used to train AI that might undermine book sales. Despite the criticism, the company has rolled out its AI newsroom tools, NewsGPT and Story Cutter.

News Corp reported a two percent revenue rise to US$8.5 billion ($A13.1 billion), with net income from continuing operations climbing 71 percent to US$648 million.

Growth in the Dow Jones and REA Group segments offset declines in news media subscriptions and advertising.

Digital subscribers fell across several mastheads, although The Times and The Sunday Times saw gains. Profitability in news media rose 15 percent, aided by editorial efficiencies and cost-cutting measures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI news summaries threaten the future of journalism

Generative AI tools like ChatGPT significantly impact traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search results, users click suggested links half as often as in traditional search formats.

Matt Karolian of Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away’.

While some, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, these numbers pale in comparison with other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so that AI tools are more likely to surface and cite it. Some have blocked crawlers to prevent data harvesting, while others have reopened access to retain visibility.

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity has allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
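
By way of background, robots.txt is a plain-text file served from a site’s root that lists, per user agent, which paths a bot may crawl. Below is a minimal sketch of how a compliant crawler honours it, using Python’s standard urllib.robotparser; the example.com rules shown in the comments are hypothetical, and PerplexityBot is the crawler token Perplexity publicly documents.

```python
from urllib import robotparser

# Hypothetical rules at https://example.com/robots.txt:
#   User-agent: PerplexityBot
#   Disallow: /
#   User-agent: *
#   Allow: /
# A compliant crawler downloads these rules and consults them before
# requesting any page; the dispute here concerns bots alleged to skip
# or evade this step.

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's rules

for agent in ("PerplexityBot", "Googlebot"):
    url = "https://example.com/some-article"
    verdict = "may crawl" if rp.can_fetch(agent, url) else "is blocked from"
    print(f"{agent} {verdict} {url}")
```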

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!