Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. Syntax Error, the company linked to the Foley track, was also connected to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning songwriter Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart tab management comes to Firefox with local AI

Mozilla has released Firefox 141, bringing a smart upgrade to tab management with local AI-powered grouping. Users can right-click a tab group and select ‘Suggest more tabs for group’, and Firefox will automatically identify similar open tabs based on their titles and descriptions.

The AI runs fully on-device, ensuring privacy and allowing users to accept or reject suggested tabs.
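
The grouping idea can be sketched with plain token overlap. This is a hypothetical illustration, not Mozilla's implementation: Firefox uses a local machine-learning model, whereas the sketch below simply measures how many title words a candidate tab shares with tabs already in the group.

```python
# Illustrative sketch of "suggest similar tabs": score candidate tab
# titles against an existing group using Jaccard token overlap.
# The threshold and the titles are invented examples.

def tokens(title: str) -> set:
    """Lowercase word set for a tab title."""
    return set(title.lower().split())

def jaccard(a: set, b: set) -> float:
    """Similarity in [0, 1] between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_tabs(group, candidates, threshold=0.2):
    """Return candidate titles similar enough to any tab in the group."""
    group_tokens = [tokens(t) for t in group]
    return [
        title for title in candidates
        if max(jaccard(tokens(title), g) for g in group_tokens) >= threshold
    ]

group = ["Python tutorial - lists", "Python tutorial - dicts"]
candidates = ["Python tutorial - sets", "Weather forecast"]
print(suggest_tabs(group, candidates))  # only the related tutorial tab
```

A real on-device model would also use page descriptions and learned embeddings rather than raw word overlap, but the accept-or-reject flow around it is the same.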

This feature complements other improvements in this release: Firefox now uses less memory on Linux, avoids forced restarts after updates, offers unit conversions directly in the address bar, and lets users resize the bottom tools panel in vertical tab mode.

It also extends Picture-in-Picture support to more streaming services and enables WebGPU on Windows by default. These updates collectively enhance both performance and usability for power users.

Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

This is especially worrying because teenagers increasingly rely on social media both as a main source of news and as a search tool. Despite their heavy usage, young people often lack the skills needed to spot false information.

In one 2022 Ofcom study, only 11% of 11- to 17-year-olds could consistently identify genuine posts online.

Research involving 11- to 14-year-olds revealed that many wrongly believed misinformation only related to scams or global news, so they didn’t see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, while they believed their parents were better than them at spotting false content. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.

Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer — a pen — within less than a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference earlier this year, the company demonstrated how Gemini Live can help users troubleshoot and repair household appliances by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. One aspirant for India’s UPSC civil service exams recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.

New AI device brings early skin cancer diagnosis to remote communities

A Scottish research team has developed a pioneering AI-powered tool that could transform how skin cancer is diagnosed in some of the world’s most isolated regions.

The device, created by PhD student Tess Watt at Heriot-Watt University, enables rapid diagnosis without needing internet access or direct contact with a dermatologist.

Patients use a compact camera connected to a Raspberry Pi computer to photograph suspicious skin lesions.

The system then compares the image against thousands of preloaded examples using advanced image recognition and delivers a diagnosis in real time. These results are then shared with local GP services, allowing treatment to begin without delay.
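
The comparison step can be sketched as a nearest-neighbour lookup over precomputed features. This is an illustrative sketch, not the Heriot-Watt team's code: the feature vectors and labels below are invented stand-ins for what a real image-recognition model would extract from the photographs.

```python
# Hypothetical sketch of "compare the image against preloaded examples":
# pick the label of the closest reference feature vector.
# In practice a CNN would produce the feature vectors; these are made up.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features, examples):
    """Return the label of the closest preloaded example."""
    return min(examples, key=lambda ex: distance(features, ex[0]))[1]

# Preloaded reference examples: (feature vector, label) pairs
examples = [
    ([0.1, 0.9, 0.2], "benign naevus"),
    ([0.8, 0.1, 0.7], "suspicious lesion"),
]
print(classify([0.7, 0.2, 0.6], examples))  # prints "suspicious lesion"
```

Because everything needed for the lookup is stored on the device, a scheme like this can run without internet access, which is the property the article highlights.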

The self-contained diagnostic system is among the first designed specifically for remote medical use. Watt said that home-based healthcare is vital, especially with growing delays in GP appointments.

The device, currently 85 per cent accurate, is expected to improve further with access to more image datasets and machine learning enhancements.

The team plans to trial the tool in real-world settings after securing NHS ethical approval. The initial rollout is aimed at rural Scottish communities, but the technology could benefit global populations with poor access to dermatological care.

Heriot-Watt researchers also believe the device will aid patients who are infirm or housebound, making early diagnosis more accessible than ever.

Perplexity CEO predicts that AI browser could soon replace recruiters and assistants

Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.

Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.

He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.

From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process, often without any follow-up input.

The tool remains in an invite-only phase and is currently available to premium users.

Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.

He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth, where breakthroughs arrive every few months, risk being left behind in the job market.

In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.

DuckDuckGo adds new tool to block AI-generated images from search results

Privacy-focused search engine DuckDuckGo has launched a new feature that allows users to filter out AI-generated images from search results.

Although the company admits the tool is not perfect and may miss some content, it claims it will significantly reduce the number of synthetic images users encounter.

The new filter uses open-source blocklists, including a more aggressive ‘nuclear’ option, sourced from tools like uBlock Origin and uBlacklist.

Users can access the setting via the Images tab after performing a search or use a dedicated link — noai.duckduckgo.com — which keeps the filter always on and also disables AI summaries and the browser’s chatbot.
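
Blocklist-based filtering of this kind can be sketched in a few lines. The domains below are placeholders, not entries from any real list; the sketch simply drops image results whose host, or any parent domain, appears on a blocklist, which is roughly how uBlacklist-style lists are applied.

```python
# Illustrative sketch of filtering image results against a domain
# blocklist. The blocklist entries and result URLs are invented.
from urllib.parse import urlparse

AI_IMAGE_BLOCKLIST = {"ai-art.example", "genimages.example"}

def is_blocked(url: str) -> bool:
    """True if the result's host, or any parent domain, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return any(".".join(parts[i:]) in AI_IMAGE_BLOCKLIST
               for i in range(len(parts)))

results = [
    "https://photos.example/peacock.jpg",
    "https://cdn.ai-art.example/peacock.png",
]
print([u for u in results if not is_blocked(u)])  # keeps only the first URL
```

Matching parent domains (so `cdn.ai-art.example` is caught by an `ai-art.example` entry) is what lets a single list entry cover a site's many subdomains, though, as DuckDuckGo concedes, domain lists inevitably miss AI content hosted elsewhere.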

The update responds to growing frustration among internet users. Platforms like X and Reddit have seen complaints about AI content flooding search results.

In one example, users searching for ‘baby peacock’ reported seeing as many AI-generated images as real ones, if not more, making it harder to distinguish authentic content from fake.

DuckDuckGo isn’t alone in trying to tackle unwanted AI material. In 2024, Hiya launched a Chrome extension aimed at spotting deepfake audio across major platforms.

Microsoft’s Bing has also partnered with groups like StopNCII to remove explicit synthetic media from its results, showing that the fight against AI content saturation is becoming a broader industry trend.

Irish hospital turns to AI for appointment management

Beaumont Hospital in Dublin plans to deploy AI to predict patient no-shows and late cancellations, aiming to reduce wasted resources.

Instead of relying solely on reminders, the hospital will pilot AI software costing up to €110,000, using patient data to forecast missed appointments. Currently, no-shows account for 15.5% of its outpatient slots.

The system will integrate with Beaumont’s existing two-way text messaging service. Rather than sending uniform reminders, the AI model will tailor messages based on the likelihood of attendance while providing hospital staff with real-time insights to better manage clinic schedules.
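
Risk-based tailoring of this kind can be sketched with a simple logistic score. The features, weights and message tiers below are illustrative assumptions, not Beaumont's model, which a vendor would train on historical attendance data rather than hand-pick.

```python
# Hypothetical sketch: map a predicted no-show probability to a
# reminder strategy. The coefficients and thresholds are invented.
import math

def no_show_probability(prior_no_shows: int, days_since_booking: int) -> float:
    """Logistic score from two illustrative features."""
    z = -2.0 + 0.8 * prior_no_shows + 0.02 * days_since_booking
    return 1 / (1 + math.exp(-z))

def reminder_plan(p: float) -> str:
    """Map risk to a messaging intensity tier."""
    if p >= 0.5:
        return "phone call plus two texts"
    if p >= 0.2:
        return "two text reminders"
    return "single text reminder"

p = no_show_probability(prior_no_shows=3, days_since_booking=60)
print(round(p, 2), reminder_plan(p))
```

The point of the tiers is the one the article makes: rather than sending every patient the same message, high-risk appointments get more intensive follow-up while low-risk ones get a single text.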

The pilot is expected to begin in late 2025 or early 2026, potentially expanding into a full €1.2 million contract.

The move forms part of Beaumont Hospital’s strategic plan through 2030 to reduce outpatient non-attendance. It follows the broader adoption of AI in Irish healthcare, including Mater Hospital’s recent launch of an AI and Digital Health centre designed to tackle clinical challenges using new technologies.

Instead of viewing AI as a future option, Irish hospitals now increasingly treat it as an immediate solution to operational inefficiencies, hoping it will transform healthcare delivery and improve patient service.

Library cuts across Massachusetts deepen digital divide

Massachusetts libraries face sweeping service reductions as federal funding cuts threaten critical educational and digital access programmes. Local and major libraries are bracing for the loss of key resources including summer reading initiatives, online research tools, and English language classes.

The Massachusetts Board of Library Commissioners (MBLC) said it has already lost access to 30 of 34 databases it once offered. Resources such as newspaper archives, literacy support for the blind and incarcerated, and citizenship classes have also been cancelled due to a $3.6 million shortfall.

Communities unable to replace federal grants with local funds will be disproportionately affected. With over 800 library applications for mobile internet hotspots now frozen, officials warn that students and jobseekers may lose vital lifelines to online learning, healthcare and employment.

The cuts are part of broader efforts by the Trump administration to shrink federal institutions, targeting what it deems anti-American programming. Legislators and library leaders say the result will widen the digital divide and undercut libraries’ role as essential pillars of equitable access.
