News Corp, the media giant behind outlets like The Wall Street Journal and the New York Post, has filed a lawsuit against the AI search engine Perplexity, accusing the company of infringing on its copyrighted content. According to the lawsuit, Perplexity allegedly copies and summarises large quantities of News Corp’s articles, analyses, and opinions without permission, potentially diverting revenue from the original publishers. The AI startup, which positions itself as a tool that helps users ‘skip the links’ to full articles, is accused of harming news outlets financially by discouraging users from visiting the original sources.
The lawsuit goes beyond accusations of content scraping, stating that Perplexity has sometimes reproduced material verbatim, falsely attributed facts, and even invented news stories under News Corp’s name. News Corp claims it sent a cease-and-desist letter to Perplexity in July but received no response, prompting the legal action. Perplexity has also faced similar accusations from other major publications, including Wired, Forbes, and The New York Times, over concerns about content scraping, paywall bypassing, and plagiarism.
In the lawsuit, News Corp asks the court to order Perplexity to stop using its content without authorisation and to destroy any databases containing its works. News Corp CEO Robert Thomson condemned Perplexity’s practices as an abuse of intellectual property that harms journalists and content creators. Thomson did, however, commend companies like OpenAI that have struck deals with News Corp and other outlets to license their content legally for AI training.
Perplexity has yet to comment on the lawsuit, though it has started paying some publishers, including Time and Fortune, for the use of their content. As the legal battle unfolds, the case highlights growing tensions between traditional media companies and AI platforms over the use of copyrighted material.
Imogen Heap, known for her creative innovations in pop music, is embarking on her most ambitious project yet: a digital twin called Mogen. Powered by AI, Mogen is trained on years of Heap’s data, allowing it to mimic her voice, personality, and creative process. Initially designed for fan interactions, the AI now aims to transform how Heap performs and produces music by integrating real-time improvisation and data-driven collaboration during live shows.
Heap sees Mogen as more than a digital assistant. She envisions it as a tool that can help her streamline her workflow and deepen her creative process. In live performances, Mogen could use biometric and atmospheric data to create hyperreal, immersive experiences for the audience. Heap believes this level of interaction between artist, AI, and fan will pave the way for new, ethically sound ways to collaborate with technology.
The project reflects Heap’s ongoing mission to push the boundaries of music and tech, using AI not just for efficiency but for expanding creative possibilities. While she acknowledges potential risks, Heap is confident that AI can revolutionise the music industry, much like her earlier work with vocoders and innovative production techniques.
ByteDance, the parent company of TikTok, has dismissed an intern for what it described as “maliciously interfering” with the training of one of its AI models. The Chinese tech giant said the intern was part of the advertising technology team and had no involvement with ByteDance’s AI Lab, adding that some reports circulating on social media and other platforms have exaggerated the incident’s impact.
ByteDance stated that the interference did not disrupt its commercial operations or its large language models. It also denied claims that the damage exceeded $10 million or affected an AI training system powered by thousands of graphics processing units (GPUs). The company noted that the intern was dismissed in August and that it has since notified the intern’s university and relevant industry bodies.
As one of the leading tech firms in AI development, ByteDance operates popular platforms like TikTok and Douyin. The company continues to invest heavily in AI, with applications including its Doubao chatbot and a text-to-video tool named Jimeng.
The parents of a Massachusetts high school senior are suing Hingham High School and its district after their son received a “D” grade and detention for using AI in a social studies project. Jennifer and Dale Harris, the plaintiffs, argue that their son was unfairly punished, as there was no rule in the school’s handbook prohibiting AI use at the time. They claim the grade has impacted his eligibility for the National Honor Society and his applications to top-tier universities like Stanford and MIT.
The lawsuit, filed in Plymouth County District Court, alleges the school’s actions could cause “irreparable harm” to the student’s academic future. Jennifer Harris stated that their son’s use of AI should not be considered cheating, arguing that AI-generated content belongs to its creator. The school, however, classified it as plagiarism. The family’s lawyer, Peter Farrell, contends that there is broad support for the family’s position that using AI does not constitute plagiarism.
The Harrises are seeking to have their son’s grade changed and his academic record cleared. They emphasised that while they can’t reverse past punishments like detention, the school can still adjust his grade and confirm that he did not cheat. Hingham Public Schools has not commented on the ongoing litigation.
A London-based company, Synthesia, known for its lifelike AI video technology, is under scrutiny after its avatars were used in deepfake videos promoting authoritarian regimes. These AI-generated videos, featuring people like Mark Torres and Connor Yeates, falsely showed their likenesses endorsing the military leader of Burkina Faso, causing distress to the models involved. Despite the company’s claims of strengthened content moderation, many affected models were unaware of their image’s misuse until journalists informed them.
In 2022, actors like Torres and Yeates were hired to participate in Synthesia’s AI model shoots for corporate projects. They later discovered their avatars had been used in political propaganda, which they had not consented to. This caused emotional distress, as they feared personal and professional damage from the fake videos. Despite Synthesia’s efforts to ban accounts using its technology for such purposes, the harmful content spread online, including on platforms like Facebook.
Synthesia has expressed regret, stating that it will continue to improve its moderation processes. However, the long-term impact on the actors remains: some question the lack of safeguards in the AI industry and warn of the dangers of handing over one’s likeness to companies without adequate protections.
IBM unveiled its latest AI model, known as ‘Granite 3.0,’ on Monday, targeting businesses eager to adopt generative AI technology. The company aims to stand out from its competitors by offering these models as open-source, a different approach from firms like Microsoft, which charge clients for access to their AI models. IBM’s open-source strategy promotes accessibility and flexibility, allowing businesses to customise and integrate these models as needed.
Alongside the Granite 3.0 models, IBM provides a paid service called Watsonx, which assists companies in running these models within their data centres once they are customised. This service gives enterprises more control over their AI solutions, enabling them to tailor and optimise the models for their specific needs while maintaining privacy and data security within their infrastructure.
The Granite models are already available for commercial use through the Watsonx platform. In addition, select models from the Granite family will be accessible on Nvidia’s AI software stack, allowing businesses to incorporate these models using Nvidia’s advanced tools and resources. IBM collaborated closely with Nvidia, utilising its H100 GPUs, a leading technology in the AI chip market, to train these models. Dario Gil, IBM’s research director, highlighted that the partnership with Nvidia is central to delivering powerful and efficient AI solutions for enterprises looking to stay ahead in a rapidly evolving technological landscape.
A new report from Aspen Digital reveals that 76% of Asia’s private wealth sector has already ventured into digital assets, with an additional 18% planning future investments. Interest in digital assets has surged since 2022, when just 58% of respondents had explored the space. The survey covered 80 family offices and high-net-worth individuals and found that most manage assets ranging from $10 million to $500 million.
Among those invested, 70% have allocated less than 5% of their portfolios to digital assets, although some increased their holdings to over 10% in 2024. Interest in decentralised finance (DeFi) and blockchain applications continues to grow, with two-thirds expressing a desire to explore DeFi, while 61% are keen on AI and decentralised physical infrastructure.
The approval of spot Bitcoin ETFs, particularly in the US and Hong Kong, has driven increased demand for digital assets. The report highlighted that 53% of investors have gained exposure through funds or ETFs, with optimism remaining high as 31% predict Bitcoin could reach $100,000 by the end of 2024.
A1 Austria, Eurofiber, and Quantcom have joined forces to develop a high-speed dark-fibre network connecting Frankfurt and Vienna, marking a significant advancement in European telecommunications. Scheduled for completion in December 2025, the project aims to deliver the ultra-low-latency infrastructure needed to meet the growing demands of modern telecommunications.
By collaborating, these three providers are not only bolstering their technical capabilities but are also ensuring that the network will support a wide array of critical applications, including cloud services, media broadcasting, AI, and machine learning (ML). Furthermore, the network’s low latency will significantly enhance connectivity for key industries across Europe, making it a vital asset for telecommunications companies, fixed network operators, and global enterprises.
Ultimately, this new fibre network is poised to serve as a critical backbone for the region’s digital ecosystem, facilitating seamless communication and data exchange. As a result, it is expected to have a substantial economic impact by connecting various industries and enabling high-performance connectivity, thereby acting as a catalyst for growth across multiple sectors.
Moreover, this initiative addresses the current demand for faster and more reliable data transfer and lays the groundwork for a more robust digital infrastructure in Europe, thereby fostering innovation and economic development in the years to come.
Bain & Company announced it is expanding its partnership with OpenAI to offer AI tools like ChatGPT to its clients. The firms previously formed a global alliance to introduce OpenAI technology to Bain’s clients, and the consultancy has now made OpenAI platforms, including ChatGPT Enterprise, available to its employees worldwide.
Bain is also setting up an OpenAI Centre of Excellence, managed by its own team, to further integrate AI solutions. The partnership will initially focus on developing custom solutions for the retail and healthcare/life sciences industries, with plans to expand into other sectors.
While Bain did not disclose financial details, around 50 employees will be dedicated to this collaboration, as reported by the Wall Street Journal.
Two independent candidates participated in an online debate on Thursday, engaging with an AI-generated version of incumbent congressman Don Beyer. The digital avatar, dubbed ‘DonBot’, was created using Beyer’s website and public materials to simulate his responses in the event, streamed on YouTube and Rumble.
Beyer, a Democrat seeking re-election, opted not to join the debate in person. His AI representation featured a robotic voice reading answers without imitating his tone. Independent challengers Bentley Hensel and David Kennedy appeared on camera, while the Republican candidate Jerry Torres did not participate. Viewership remained low, peaking at fewer than 20 viewers, and parts of DonBot’s responses were inaudible.
Hensel explained that the AI was programmed to provide unbiased answers using available public information. The debate tackled policy areas such as healthcare, gun control, and aid to Israel. When asked why voters should re-elect Beyer, the AI stated, ‘I believe that I can make a real difference in the lives of the people of Virginia’s 8th district.’
Although the event saw minimal impact, observers suggest the use of AI in politics could become more prevalent. The reliance on such technology raises concerns about transparency, especially if no regulations are introduced to guide its use in future elections.