Dear readers,
In the past week, Meta Platforms unveiled a partnership with Reuters to integrate Reuters’ news content into its AI chatbot. The collaboration spans Meta’s platforms, including Facebook, WhatsApp, and Instagram, allowing Meta’s chatbot to answer real-time news inquiries with Reuters’ trusted reporting. After Meta scaled back its news operations amid content disputes with regulators, the deal marks a notable return to licensed news distribution. It signals the company’s aim to balance AI-driven content with verified information, compensating Reuters through a multi-year agreement and establishing a promising model for AI and media partnerships.
Yet the path to collaboration has not been smooth for all. Earlier in 2024, News Corp sued Perplexity AI for alleged copyright violations, arguing that the AI company used News Corp’s content without authorisation. Dow Jones and the New York Post soon echoed the lawsuit, accusing Perplexity of reproducing their reporting while bypassing the original sources. Perplexity defended itself by citing fair use, stressing that its summaries replicated only small portions of articles.
Meanwhile, in August 2024, the French news agency AFP filed a lawsuit against X (formerly Twitter), demanding compensation for the use of AFP’s content to train AI models. The legal action underscores the global demand for fairer treatment of newsrooms by tech companies and reflects growing concerns that the intellectual property rights of news providers are often sidelined in favour of AI innovation.
However, over the past year, other AI giants like OpenAI have chosen to formalise relationships with media, establishing partnerships with publishers such as Hearst, Condé Nast, and Axel Springer. OpenAI’s ChatGPT now features licensed news content, a strategic move to avoid copyright disputes while providing high-quality, fact-based summaries to users. These partnerships also provide publishers with new avenues for traffic and revenue, showcasing a balanced approach where AI enhances access to reliable news and publishers are compensated.
Other companies, like Microsoft and Apple, have entered the AI news space, each establishing robust collaborations with news organisations. Microsoft’s approach centres on supporting AI-driven innovation within newsrooms, while Apple plans to use publisher archives to improve its AI training data. These initiatives signal a trend toward structured partnerships and a growing role for Big Tech in reshaping news consumption. However, as these tech giants build AI models on news content, pressure grows to respect news publishers’ copyrights, reflecting a delicate balance between AI advancement and content ownership.
As AI becomes increasingly central to media, industry leaders and advocates are calling for equitable policies to protect newsrooms’ intellectual property and revenue. With studies estimating that Big Tech may owe news publishers billions annually, the push for fair compensation is intensifying. Given the legal disputes on one side and the successful licensing models on the other, the evolution of AI-news partnerships will likely hinge on transparent standards that ensure newsrooms receive due credit and financial benefit, creating a sustainable, equitable future for AI-driven media. At the same time, these arrangements raise questions about AI’s long-term impact on traditional newsrooms and revenue structures.
In other news…
UK man sentenced to 18 years for using AI to create child sexual abuse material
In a case spotlighting the misuse of AI in criminal activity, Hugh Nelson, a 27-year-old from Bolton, UK, was sentenced to 18 years in prison for creating child sexual abuse material (CSAM) using AI. Nelson utilised the app Daz 3D to turn ordinary photos of children into exploitative 3D images, some based on photos provided by acquaintances of the victims.
Chinese military adapts Meta’s Llama for AI tool
China’s People’s Liberation Army (PLA) has utilised Meta’s open-source AI model, Llama, to develop a military-adapted AI tool, ChatBIT, focusing on military decision-making and intelligence tasks.
Marko and the Digital Watch team
Highlights from the week of 25 October-1 November 2024
Six Democratic senators are urging the Biden administration to address human rights and cybersecurity concerns in the upcoming UN Cybercrime Convention, warning it could enable authoritarian surveillance and weaken privacy…
Scrutiny intensifies over X’s handling of misinformation.
PLA researchers use the tech giant’s AI for military innovations.
The lawsuits highlight a growing debate over social media regulation in Brazil, especially after a high-profile legal dispute between Elon Musk’s X platform and a Brazilian Supreme Court justice led…
In response to rising concerns over illegal product sales, the European Commission is preparing to investigate the Chinese e-commerce platform Temu for potential regulatory breaches under the Digital Services Act (DSA).
Masayoshi Son predicts that artificial superintelligence could surpass human brainpower by 10,000 times by 2035.
By 2040, a world with 10 billion humanoid robots could become reality, with prices set to make them accessible for both personal and business use globally.
A record fine over YouTube ban.
A new AI model from biotech firm Iambic Therapeutics could revolutionise drug development, potentially cutting costs in half by identifying effective drugs early in the testing process.
New developments in hiring: ‘Hiring Assistant’, LinkedIn’s latest AI tool, seeks to ease recruiters’ workloads by automating job listings and candidate searches, marking a new milestone in the platform’s AI…
ICYMI
Reading corner
By partnering with the UN Security Council, DiploAI is transforming session reporting with AI-driven insights that go beyond traditional methods.
Cognitive proximity is key to human-centred AI. Discover how AI can be aligned with human intuition and values, allowing for more harmonious human–AI collaboration. Dr Anita Lamprecht explains.
In the age of AI, understanding how it works is essential if we are to shift from being passive passengers to active copilots. While many view AI as a complex tool shrouded in mystery, a basic grasp of its foundational concepts (patterns, probability, hardware, data, and algorithms) can empower us. Recognising the influence of biases in AI and advocating for ethical practices and diversity in its development are crucial steps. By engaging in discussions around AI’s governance, we can navigate our AI-driven reality and ensure that technology serves the common good, rather than merely accepting its outcomes.
Upcoming
Unpacking Global Digital Compact | Book launch
Join us online on 8th November for the launch of Unpacking Global Digital Compact, a new publication written by