Massachusetts parents sue school over AI use dispute

The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.

The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to its creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is substantial support for their view that using AI isn’t plagiarism.

The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.

London-based company faces scrutiny for AI models misused in propaganda campaigns

A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring the likenesses of models such as Mark Torres and Connor Yeates, falsely showed them endorsing the military leader of Burkina Faso, causing distress to those involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware their likenesses had been misused until journalists informed them.

In 2022, actors like Torres and Yeates were hired to take part in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda to which they had not consented. The discovery caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.

Synthesia has expressed regret and says it will continue to improve its moderation processes. The long-term impact on the actors remains, however, with some questioning the lack of safeguards in the AI industry and warning of the dangers of handing one’s likeness to a company without adequate protections.

Humanoid robot’s portrait of Alan Turing to sell at Sotheby’s

A portrait of Alan Turing created by Ai-Da, a humanoid robot artist, will be auctioned at Sotheby’s London in a pioneering art sale. Ai-Da, equipped with AI algorithms, cameras, and bionic hands, is among the world’s most advanced robots and is designed to resemble a woman.

The 2.2-metre-high painting, titled ‘AI God’, portrays Turing, the mathematician and WWII codebreaker, and highlights concerns about the role of AI. Its muted colours and fragmented facial planes evoke the challenges of managing AI that Turing warned about.

Sotheby’s online auction, running from 31 October to 7 November, will explore the intersection of art and technology. The artwork is estimated to sell for £100,000–£150,000. Ai-Da’s previous work includes painting Glastonbury Festival performers like Billie Eilish and Paul McCartney.

Ai-Da’s creator, Aidan Meller, collaborated with AI experts from Oxford and Birmingham to develop the robot. Meller noted that Ai-Da’s haunting artworks continue to raise questions about the future of AI and the global race to control its potential.

Meta unveils Movie Gen in collaboration with Blumhouse

Meta, the owner of Facebook, has announced a partnership with Blumhouse Productions, known for hit horror films like ‘The Purge’ and ‘Get Out’, to test its recently launched generative AI video model, Movie Gen, which can produce realistic video and audio clips from user prompts. Meta claims the tool could compete with offerings from leading media generation startups like OpenAI and ElevenLabs.

Blumhouse has chosen filmmakers Aneesh Chaganty, The Spurlock Sisters, and Casey Affleck to experiment with Movie Gen, with Chaganty’s film set to appear on Meta’s Movie Gen website. In a statement, Blumhouse CEO Jason Blum emphasised the importance of involving artists in the development of new technologies, noting that innovative tools can enhance storytelling for directors.

This partnership highlights Meta’s aim to connect with the creative industries, which have expressed hesitance toward generative AI due to copyright and consent concerns. Several copyright holders have sued companies like Meta, alleging unauthorised use of their works to train AI systems. In response to these challenges, Meta has demonstrated a willingness to compensate content creators, recently securing agreements with actors such as Judi Dench, Kristen Bell, and John Cena for its Meta AI chatbot.

Meanwhile, Microsoft-backed OpenAI has been exploring potential partnerships with Hollywood executives for its video generation tool, Sora, though no deals have been finalised yet. In September, Lions Gate Entertainment announced a collaboration with another AI startup, Runway, underscoring the increasing interest in AI partnerships within the film industry.

NYT issues cease-and-desist to Perplexity over AI content use

The New York Times has issued a cease-and-desist notice to the AI company Perplexity, demanding it halt the use of its content for generating summaries and other outputs. The newspaper claims that Perplexity’s practices violate copyright law, adding to the ongoing tensions between media publishers and AI firms.

The letter from the NYT highlighted concerns over how Perplexity continues to use its articles despite earlier promises to stop. Perplexity, which had previously agreed to stop using its crawling technology on the site, maintains that it does not scrape data to train models but instead indexes web pages to provide factual citations when responding to user queries.

Perplexity is required to provide, by 30 October, details on how it accesses the NYT website. The startup has faced similar allegations from other media outlets, including Forbes and Wired, but has since introduced a revenue-sharing programme to address some concerns.

The NYT has taken a strong stance on generative AI, having also sued OpenAI for allegedly using millions of its articles without permission to train its chatbot. The conflict underlines broader worries among publishers about how AI companies are using their content.

Elon Musk reignites legal battle with OpenAI over non-profit to for-profit transition

Elon Musk has reignited his legal fight with OpenAI, accusing the company’s co-founders of manipulating him into investing in the non-profit startup before turning it into a for-profit business. Musk claims they enriched themselves by draining OpenAI’s key assets and technology. OpenAI, however, has dismissed these claims, describing the lawsuit as part of Musk’s efforts to gain a competitive edge.

OpenAI, which created a for-profit subsidiary in 2019, has attracted billions in outside funding, including from Microsoft. Musk argues the company deviated from its original mission, but OpenAI maintains it remains committed to developing safe and beneficial AI. The startup also suggested Musk’s departure came after his attempt to dominate the organisation failed.

OpenAI has had a turbulent year with leadership changes and rapid growth. The company’s headcount more than doubled, and despite losing key figures, it remains a major player in AI innovation. Recent investments pushed OpenAI’s valuation to $157 billion, underscoring continued investor confidence.

Musk’s ongoing rivalry with OpenAI coincides with his other AI ventures, including xAI, which he launched in 2023. He is also facing a Delaware lawsuit alleging that his AI venture has drained talent and resources from Tesla, potentially harming shareholders.

New Adobe app ensures creator credit as AI grows

Adobe announced it will introduce a free web-based app in 2025 to help creators of images and videos get proper credit for their work, especially as AI systems increasingly rely on large datasets for training. The app will enable users to affix ‘Content Credentials’, a digital signature, to their creations, indicating authorship and even specifying whether they want their work used for AI training.
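
To make the mechanism concrete, here is a minimal Python sketch of the general idea behind such credentials: bind authorship metadata and an AI-training preference to a hash of the file’s bytes, then sign the bundle so tampering is detectable. The manifest fields, key handling, and HMAC signature below are illustrative assumptions only; Adobe’s actual Content Credentials are built on the open C2PA standard and use certificate-based signatures embedded in the file’s metadata.

```python
import hashlib
import hmac
import json

# Hypothetical signing key, standing in for the certificate-based
# signing a real Content Credentials implementation would use.
SIGNING_KEY = b"creator-private-key-placeholder"

def make_credential(image_path: str, author: str, allow_ai_training: bool) -> dict:
    """Build and sign a toy provenance manifest for an image file."""
    with open(image_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()

    manifest = {
        "author": author,                          # authorship claim
        "content_sha256": content_hash,            # ties the claim to these exact bytes
        "ai_training_allowed": allow_ai_training,  # the opt-out signal for AI training
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    # HMAC here is a simple stand-in for a real cryptographic signature.
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

if __name__ == "__main__":
    # Assumes an image file exists at this path.
    print(json.dumps(make_credential("artwork.png", "Jane Doe", False), indent=2))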

Since 2019, Adobe has been developing Content Credentials as part of a broader industry push for transparency in how digital media is created and used. TikTok has already committed to using the credentials to label AI-generated content. Major AI companies, however, have yet to adopt the system, though Adobe continues to advocate for industry-wide uptake.

The initiative comes as legal battles over AI data use intensify, with publishers like The New York Times suing OpenAI. Adobe sees this tool as a way to protect creators and promote transparency, as highlighted by Scott Belsky, Adobe’s chief strategy officer, who described it as a step towards preserving the integrity of creative work online.

Google warns of drastic steps if New Zealand law passes

Google has announced it will stop linking to New Zealand news articles and end agreements with local news outlets if a proposed law to ensure fair revenue sharing moves forward. The New Zealand government is reviewing legislation aimed at making tech companies like Google pay for news content featured on their platforms, following a similar model introduced in Australia.

Google New Zealand’s Country Director, Caroline Rainsford, expressed concerns about the potential law, saying it would require major changes to Google’s services. She highlighted that Google could be forced to stop showing news content on platforms like Google Search and Google News in the country if the law passes.

The company also warned the legislation could negatively affect smaller publishers and create financial uncertainty. Despite these concerns, the New Zealand government remains in consultation, with Media and Communications Minister Paul Goldsmith considering feedback before any final decision.

While ACT, the minority coalition partner, opposes the law, the bill is expected to receive enough cross-party support to pass. Australia has already implemented a similar law, which a government review has deemed successful.

Paul McCartney returns with AI-aided Beatles song on new tour

Sir Paul McCartney has announced his return to the stage with the ‘Got Back’ tour, featuring a highly anticipated performance of the last Beatles song, ‘Now and Then’. The song, which includes vocals from the late John Lennon, was completed with the help of AI technology and marks a poignant moment in Beatles history.

‘Now and Then’ was created from Lennon’s vocals on an old cassette demo, recovered and refined with AI. McCartney and fellow Beatle Ringo Starr completed the track together, incorporating guitar parts recorded by the late George Harrison. The song, originally left unfinished in 1977, has now been brought to life, with McCartney singing alongside Lennon’s voice.

The tour will kick off in Montevideo, Uruguay, before moving through South America and Europe, with two dates at Manchester’s Co-op Live and two final shows at London’s O2 Arena in December. McCartney, who last played in the UK at Glastonbury in 2022, has expressed excitement about returning to his home country to end the tour.

Despite some complaints from Liverpool fans over the absence of a hometown gig, McCartney remains enthusiastic about his UK shows. He said the performances will have a ‘special feeling’ and that he looks forward to closing out the year with a celebration on home soil.

New Cloudflare marketplace to help websites profit from AI scraping

Cloudflare is launching a marketplace that will let websites charge AI companies for scraping their content, aiming to give smaller publishers more control over how AI models use their data. AI companies scrape thousands of websites to train their models, often without compensating the content creators, a practice that threatens the business models of many smaller sites. The marketplace, launching next year, will allow website owners to negotiate deals with AI model providers, charging them based on how often they scrape the site or on terms the owners set themselves.

Cloudflare’s launch of AI Audit is a big step towards giving website owners better control over AI bot activity on their sites. By providing detailed analytics on which AI bots access their content, it lets site owners make informed decisions about managing bot traffic, and the ability to block specific bots while allowing others helps mitigate unwanted scraping, which can degrade performance and increase operational costs. The tool could be especially useful for businesses and content creators who rely on their online presence and want to safeguard their resources.
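
For a sense of what blocking specific bots while allowing others can look like at the server level, here is a minimal Python sketch, not Cloudflare’s implementation: it checks each request’s User-Agent against a small blocklist. GPTBot (OpenAI) and CCBot (Common Crawl) are publicly documented AI crawler user agents, but the block policy itself is an assumption for illustration.

```python
# Minimal sketch of user-agent-based AI crawler filtering, the kind of
# decision an AI Audit-style tool surfaces for site owners.
from wsgiref.simple_server import make_server

BLOCKED_AI_BOTS = {"GPTBot", "CCBot"}  # crawlers to deny; all others pass through

def app(environ, start_response):
    user_agent = environ.get("HTTP_USER_AGENT", "")
    if any(bot in user_agent for bot in BLOCKED_AI_BOTS):
        # Refuse known AI crawlers explicitly with a 403.
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"AI crawling is not permitted on this site.\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, regular visitor.\n"]

if __name__ == "__main__":
    # Serve on http://localhost:8000 for local testing.
    make_server("", 8000, app).serve_forever()
```

In practice, site owners more often express the same policy declaratively, for example through robots.txt directives or CDN firewall rules, rather than in application code; the sketch simply makes the underlying decision visible.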

Cloudflare’s CEO, Matthew Prince, believes this marketplace will create a more sustainable system for publishers and AI companies. While some AI firms may resist paying for currently free content, Prince argues that compensating creators is crucial for ensuring the continued production of quality content. The initiative could help balance the relationship between AI companies and content creators, allowing even small publishers to profit from their data in the AI age.