AI could help Alex Van Halen finish unreleased songs

Alex Van Halen is exploring AI to complete unreleased Van Halen music left unfinished by his late brother Eddie. The drummer revealed that while the band has a vault of material, many tracks are incomplete and lack vocals. He hopes AI can analyse Eddie’s guitar style to generate new solos.

Alex has reached out to OpenAI, seeking their help in recreating his brother’s signature playing patterns. He envisions using AI-generated guitar parts alongside vocals from Led Zeppelin’s Robert Plant, despite not having spoken to the singer in decades. Completing the project could take years.

Eddie Van Halen, who passed away in 2020, left a significant legacy in rock music. His son Wolfgang, who toured with Van Halen, has said there is no chance of a reunion, preferring not to play the band’s music without his father.

AI is already playing a role in the music industry. Randy Travis, who lost his singing ability after a stroke, recently released a song with AI-generated vocals, recreating his voice through advanced technology. The success of that project offers hope for similar ventures, including Alex’s plans for Van Halen’s unfinished work.

South Korea targets stronger AI capabilities through cloud expansion

South Korea plans to accelerate the growth of its private cloud industry to enhance competitiveness in AI. The Ministry of Science and ICT outlined a strategy to double the local cloud market to 10 trillion won (£6 billion) by 2027 through partnerships with global companies.

The government acknowledged that South Korea trails global cloud leaders by more than a year, with underdeveloped AI infrastructure. Key initiatives include encouraging the use of private cloud systems across public sectors, such as education and defence, and easing regulations to facilitate the transition. Incentives such as expanded tax benefits are also planned for AI and cloud enterprises.

A national AI computing centre with supercomputer capabilities will be established to further bolster infrastructure. In addition, an AI innovation fund will launch with an initial government investment of 45 billion won (£27 million) in 2025, encouraging private-sector contributions to the cloud ecosystem’s growth.

Plans are also underway for an AI safety research institute under the Electronics and Telecommunications Research Institute. This initiative follows Seoul’s AI safety summit earlier this year, where global leaders agreed on collaborative efforts to promote safe and inclusive AI development.

News Corp sues AI firm Perplexity over copyright violations

News Corp, the media giant behind outlets like The Wall Street Journal and the New York Post, has filed a lawsuit against the AI search engine Perplexity, accusing the company of infringing on its copyrighted content. According to the lawsuit, Perplexity allegedly copies and summarises large quantities of News Corp’s articles, analyses, and opinions without permission, potentially diverting revenue from the original publishers. The AI startup, which positions itself as a tool to help users ‘skip the links’ to full articles, is claimed to have harmed the financial interests of news outlets by discouraging users from visiting the sources.

The lawsuit goes beyond accusations of content scraping, stating that Perplexity has sometimes reproduced material verbatim and falsely attributed facts or even invented news stories under News Corp’s name. News Corp claims it sent a cease-and-desist letter to Perplexity in July but received no response, prompting the legal action. Perplexity has also faced similar accusations from other major publications like Wired, Forbes, and The New York Times, with concerns over scraping content, bypassing paywalls, and plagiarism.

In the lawsuit, News Corp asks the court to order Perplexity to stop using its content without authorisation and to destroy any databases containing its works. CEO Robert Thomson condemned Perplexity’s practices as an abuse of intellectual property that harms journalists and content creators. Thomson did, however, commend other companies like OpenAI, which have made deals with News Corp and other outlets to use their content legally for AI training.

Perplexity has yet to comment on the lawsuit, though it has started paying some publishers, including Time and Fortune, for the use of their content. As the legal battle unfolds, the case highlights growing tensions between traditional media companies and AI platforms over the use of copyrighted material.

Massachusetts parents sue school over AI use dispute

The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.

The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to its creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends there is ample support for their position that using AI does not constitute plagiarism.

The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.

London-based company faces scrutiny for AI models misused in propaganda campaigns

A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. The AI-generated videos, featuring models such as Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to those involved. Despite the company’s claims of strengthened content moderation, many of the affected models were unaware that their images had been misused until journalists informed them.

In 2022, actors like Torres and Yeates were hired to participate in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, which they had not consented to. This caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.

UK-based Synthesia has expressed regret, stating it will continue to improve its processes. However, the long-term impact on the actors remains, with some questioning the lack of safeguards in the AI industry and warning of the dangers involved when likenesses are handed over to companies without adequate protections.

Humanoid robot’s portrait of Alan Turing to sell at Sotheby’s

A portrait of Alan Turing created by Ai-Da, a humanoid robot artist, will be auctioned at Sotheby’s London in a pioneering art sale. Ai-Da, equipped with AI algorithms, cameras, and bionic hands, is among the world’s most advanced robots and is designed to resemble a woman.

The 2.2-metre-high painting, titled ‘AI God’, portrays Turing, a mathematician and WWII codebreaker, and highlights concerns about the role of AI. Its muted colours and fragmented facial planes reflect the challenges Turing warned about in managing AI.

Sotheby’s online auction, running from 31 October to 7 November, will explore the intersection of art and technology. The artwork is estimated to sell for £100,000–£150,000. Ai-Da’s previous work includes painting Glastonbury Festival performers like Billie Eilish and Paul McCartney.

Ai-Da’s creator, Aidan Meller, collaborated with AI experts from Oxford and Birmingham to develop the robot. Meller noted that Ai-Da’s haunting artworks continue to raise questions about the future of AI and the global race to control its potential.

Meta unveils Movie Gen in collaboration with Blumhouse

Meta, the owner of Facebook, announced a partnership with Blumhouse Productions, known for hit horror films like ‘The Purge’ and ‘Get Out’, to test its new generative AI video model, Movie Gen. This follows the recent launch of Movie Gen, which can produce realistic video and audio clips based on user prompts. Meta claims that this tool could compete with offerings from leading media generation startups like OpenAI and ElevenLabs.

Blumhouse has chosen filmmakers Aneesh Chaganty, The Spurlock Sisters, and Casey Affleck to experiment with Movie Gen, with Chaganty’s film set to appear on Meta’s Movie Gen website. In a statement, Blumhouse CEO Jason Blum emphasised the importance of involving artists in the development of new technologies, noting that innovative tools can enhance storytelling for directors.

This partnership highlights Meta’s aim to connect with the creative industries, which have expressed hesitance toward generative AI due to copyright and consent concerns. Several copyright holders have sued companies like Meta, alleging unauthorised use of their works to train AI systems. In response to these challenges, Meta has demonstrated a willingness to compensate content creators, recently securing agreements with actors such as Judi Dench, Kristen Bell, and John Cena for its Meta AI chatbot.

Meanwhile, Microsoft-backed OpenAI has been exploring potential partnerships with Hollywood executives for its video generation tool, Sora, though no deals have been finalised yet. In September, Lions Gate Entertainment announced a collaboration with another AI startup, Runway, underscoring the increasing interest in AI partnerships within the film industry.

NYT issues cease-and-desist to Perplexity over AI content use

The New York Times has issued a cease-and-desist notice to the AI company Perplexity, demanding it halt the use of its content for generating summaries and other outputs. The newspaper claims that Perplexity’s practices violate copyright law, adding to the ongoing tensions between media publishers and AI firms.

The letter from the NYT highlighted concerns that Perplexity continues to use its articles despite promises to stop. Perplexity, which previously agreed to cease using its crawling technology, maintains that it does not scrape data to train models but instead indexes web pages to provide factual citations when responding to user queries.

Perplexity is required to provide details on how it accesses the NYT website by 30 October. The startup has faced similar allegations from other media outlets, including Forbes and Wired, but has since introduced a revenue-sharing programme to address some concerns.

The NYT has taken a strong stance on generative AI, having also sued OpenAI for allegedly using millions of its articles without permission to train its chatbot. The conflict underlines broader worries among publishers about how AI companies are using their content.

Elon Musk reignites legal battle with OpenAI over non-profit to for-profit transition

Elon Musk has reignited his legal fight with OpenAI, accusing the company’s co-founders of manipulating him into investing in the non-profit startup before turning it into a for-profit business. Musk claims they enriched themselves by draining OpenAI’s key assets and technology. OpenAI, however, has dismissed these claims, describing the lawsuit as part of Musk’s efforts to gain a competitive edge.

OpenAI, which transitioned to a for-profit subsidiary in 2019, attracted billions in outside funding, including from Microsoft. Musk argues the company deviated from its original mission, but OpenAI maintains it remains committed to developing safe and beneficial AI. The startup also suggested Musk’s departure came after his attempt to dominate the organisation failed.

OpenAI has had a turbulent year with leadership changes and rapid growth. The company’s headcount more than doubled, and despite losing key figures, it remains a major player in AI innovation. Recent investments pushed OpenAI’s valuation to $157 billion, underscoring continued investor confidence.

Musk’s ongoing rivalry with OpenAI coincides with his other AI ventures, including xAI, which he launched in 2023. He’s also facing allegations in a Delaware lawsuit accusing his AI company of draining talent and resources from Tesla, potentially harming shareholders.

New Adobe app ensures creator credit as AI grows

Adobe announced it will introduce a free web-based app in 2025 to help creators of images and videos get proper credit for their work, especially as AI systems increasingly rely on large datasets for training. The app will enable users to affix ‘Content Credentials’, a digital signature, to their creations, indicating authorship and even specifying whether they want their work used for AI training.

Since 2019, Adobe has been developing Content Credentials as part of a broader industry push for transparency in how digital media is created and used. TikTok has already committed to using these credentials to label AI-generated content. However, major AI companies have yet to adopt Adobe’s system, though Adobe continues to advocate for industry-wide adoption.

The initiative comes as legal battles over AI data use intensify, with publishers like The New York Times suing OpenAI. Adobe sees this tool as a way to protect creators and promote transparency, as highlighted by Scott Belsky, Adobe’s chief strategy officer, who described it as a step towards preserving the integrity of creative work online.