The United Kingdom is set to become the first country to criminalise the use of AI to create child sexual abuse images. New offences will target AI-generated explicit content, including tools that ‘nudeify’ real-life images of children. The move follows a sharp rise in AI-generated abuse material, with reports increasing nearly five-fold in 2024, according to the Internet Watch Foundation.
The government warns that predators are using AI to disguise their identities and blackmail children into further exploitation. New laws will criminalise the possession, creation, or distribution of AI tools designed for child abuse material, as well as so-called ‘paedophile manuals’ that provide instructions on using such technology. Websites hosting AI-generated child abuse content will also be targeted, and authorities will gain powers to unlock digital devices for inspection.
The measures will be included in the upcoming Crime and Policing Bill. Earlier this month, Britain also announced plans to outlaw AI-generated ‘deepfake’ pornography, making it illegal to create or share sexually explicit deepfakes. Officials say the new laws will help protect children from emerging online threats.
Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application.’ That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.
Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite its exemption from the ban, YouTube has been criticised for its recommendation algorithm, which researchers say can push far-right ideologies, misogyny, and conspiracy theories to minors. Academic studies have shown that the platform can surface problematic content within minutes of a search query, including harmful videos on topics such as sex, COVID-19, and European history.
To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.
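Neither Reuters nor the academic researchers have published their full methodology, but audits of this kind are commonly scripted against the public YouTube Data API: run a set of search queries, log what comes back, and screen the results for later human review. A minimal sketch of that approach in Python, assuming an API key in a YOUTUBE_API_KEY environment variable; the query terms and keyword filter are placeholders, not those used in any of the studies:

```python
# Minimal sketch of a search audit against the public YouTube Data API.
# Requires `pip install google-api-python-client` and an API key in
# YOUTUBE_API_KEY. Queries and flag terms below are placeholders.
import os

from googleapiclient.discovery import build

QUERIES = ["example sensitive topic"]  # placeholder audit queries
FLAG_TERMS = {"conspiracy", "hoax"}    # placeholder screening keywords

def audit_search(queries):
    youtube = build("youtube", "v3", developerKey=os.environ["YOUTUBE_API_KEY"])
    flagged = []
    for query in queries:
        # Fetch the top results a user would see for this query.
        response = youtube.search().list(
            q=query, part="snippet", type="video", maxResults=25
        ).execute()
        for item in response.get("items", []):
            title = item["snippet"]["title"]
            # Crude keyword screen; real audits rely on human review panels.
            if any(term in title.lower() for term in FLAG_TERMS):
                flagged.append((query, item["id"]["videoId"], title))
    return flagged

if __name__ == "__main__":
    for query, video_id, title in audit_search(QUERIES):
        print(f"{query!r} -> https://www.youtube.com/watch?v={video_id} ({title})")
```

One caveat: the Data API returns generic search results, while the Reuters test used signed-in child accounts, whose personalised recommendations a public API call cannot reproduce.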
Microsoft-backed OpenAI is seeking to prevent some of India’s largest media organisations, including those linked to Gautam Adani and Mukesh Ambani, from joining a copyright lawsuit. The case, initiated by news agency ANI last year, involves claims that AI systems like ChatGPT use copyrighted material without permission, sparking a wider debate over AI and intellectual property in the country. India ranks as OpenAI’s second-largest market by user numbers, following the US.
OpenAI has argued its AI services rely only on publicly available data and adhere to fair use principles. During Tuesday’s hearing, OpenAI’s lawyer opposed bids by additional media organisations to join the case, stating he would submit formal objections in writing. The company has also challenged the court’s jurisdiction, asserting that its servers are located outside India. The case is scheduled to continue in February.
The Federation of Indian Publishers has accused ChatGPT of harming its members’ business by summarising books from unlicensed online sources. OpenAI denies these claims, maintaining that its tools do not infringe copyright. Prominent digital media groups, including the Indian Express and Hindustan Times, allege that ChatGPT scrapes and reproduces their content, prompting their involvement in the lawsuit.
Tensions escalated over media coverage of the case, with OpenAI objecting to reports based on non-public court filings. Lawyers representing media groups called such claims unfounded. The lawsuit is poised to shape the future of AI and copyright law in India, as courts worldwide grapple with similar challenges.
Microsoft and OpenAI are investigating whether a group linked to Chinese AI startup DeepSeek accessed OpenAI data without authorisation. Bloomberg News reported that Microsoft’s security team detected large-scale data transfers last autumn using OpenAI’s application programming interface (API).
Microsoft, OpenAI’s largest investor, flagged the suspicious activity to the AI firm. DeepSeek, a low-cost Chinese AI startup, gained attention after its AI assistant surpassed OpenAI’s ChatGPT on Apple’s App Store in the US, causing a selloff in tech stocks.
White House AI and crypto adviser David Sacks suggested DeepSeek may have stolen US intellectual property by extracting knowledge from OpenAI’s models. An OpenAI spokesperson acknowledged that foreign firms frequently attempt to replicate its technology and stressed the importance of government collaboration to protect advanced AI models.
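‘Extracting knowledge’ from a model in this sense usually refers to distillation: querying a large model at scale and training a smaller one on the recorded prompt-and-response pairs. Purely to illustrate the technique being alleged, and not as a description of anything DeepSeek actually did, here is a sketch of the collection step against OpenAI’s chat API; the prompt file, output path, and model name are invented for the example:

```python
# Illustrative sketch of distillation-style data collection: send prompts to a
# model API and record the answers as training pairs for a smaller model.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable;
# the file names and model below are placeholders.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def collect_pairs(prompt_file="prompts.txt", out_file="pairs.jsonl"):
    with open(prompt_file) as prompts, open(out_file, "w") as out:
        for line in prompts:
            prompt = line.strip()
            if not prompt:
                continue
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            # Each prompt/answer pair becomes one supervised training example.
            out.write(json.dumps({
                "prompt": prompt,
                "response": response.choices[0].message.content,
            }) + "\n")

if __name__ == "__main__":
    collect_pairs()
```

Run across millions of prompts, traffic like this produces exactly the kind of anomalous transfer volume a provider’s security team would notice, and OpenAI’s terms of service prohibit using its outputs to develop competing models.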
Microsoft declined to comment on the matter, and DeepSeek did not respond to requests for comment. OpenAI stated that it actively counters unauthorised attempts to replicate its technology but did not specifically name DeepSeek.
The UK government has demanded urgent action from major social media platforms to remove violent and extremist content following the Southport killings. Home Secretary Yvette Cooper criticised the ease with which Axel Rudakubana, who murdered three children and attempted to kill ten others, accessed an al-Qaeda training manual and other violent material online. She described the availability of such content as “unacceptable” and called for immediate action.
Rudakubana, jailed last week for his crimes, had reportedly used techniques from the manual during the attack and watched graphic footage of a similar incident before carrying it out. While platforms like YouTube and TikTok are expected to comply with the UK’s Online Safety Act when it comes into force in March, Cooper argued that companies have a ‘moral responsibility’ to act now rather than waiting for legal enforcement.
The Southport attack has intensified scrutiny on gaps in counter-terrorism measures and the role of online content in fostering extremism. The government has announced a public inquiry into missed opportunities to intervene, revealing that Rudakubana had been referred to the Prevent programme multiple times. Cooper’s call for immediate action underscores the urgent need to prevent further tragedies linked to online extremism.
US President Donald Trump has signed an executive order aimed at solidifying the country’s dominance in artificial intelligence. The directive includes creating an Artificial Intelligence Action Plan within 180 days to promote economic competitiveness, national security, and human well-being. The White House confirmed this initiative as part of efforts to position the nation as a global AI leader.
Trump has also instructed his AI and national security advisers to dismantle policies implemented by former President Joe Biden. Among these is a 2023 order requiring AI developers to submit safety test results to the government for systems with potential risks to national security, public safety, or the economy.
Biden’s policies aimed to regulate AI development under the Defense Production Act to minimise risks posed by advanced technologies. Critics argued the approach imposed unnecessary constraints, while supporters viewed it as a safeguard against potential misuse of AI.
The latest move reflects Trump’s broader strategy to reshape the nation’s AI framework, focusing on economic growth and innovation while rolling back measures seen as restrictive.
Google is rolling out a new accessibility feature for Chromebooks that allows users to control their devices using head and facial movements. Initially introduced in December, the tool is designed for people with motor impairments and uses AI to let facial gestures act as a virtual cursor. The feature is available on Chromebooks with 8GB of RAM or more and builds on Google’s prior efforts, such as its Project Gameface accessibility tool for Windows and Android.
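Google has not published the Chromebook implementation, but Project Gameface is built on the open-source MediaPipe face-landmark models, so the general technique can be sketched: track a stable facial landmark each frame and map its position to the system cursor. In the Python sketch below, the choice of the nose-tip landmark, the mirroring, and the absence of smoothing are all illustrative assumptions rather than Google’s design:

```python
# Rough sketch of face-driven cursor control: track the nose-tip landmark
# with MediaPipe Face Mesh and map its normalised position to the screen.
# Requires `pip install mediapipe opencv-python pyautogui`; landmark choice
# and mapping are illustrative, not Google's implementation.
import cv2
import mediapipe as mp
import pyautogui

NOSE_TIP = 1  # index of the nose-tip landmark in the 468-point face mesh

def run():
    screen_w, screen_h = pyautogui.size()
    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
    capture = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            # MediaPipe expects RGB frames; OpenCV captures BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                nose = results.multi_face_landmarks[0].landmark[NOSE_TIP]
                # Landmark coordinates are normalised to [0, 1]; mirror x so
                # moving the head right moves the cursor right.
                pyautogui.moveTo((1.0 - nose.x) * screen_w, nose.y * screen_h)
            cv2.imshow("camera", frame)
            if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
                break
    finally:
        capture.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    run()
```

A shipping tool adds what this loop omits: per-user sensitivity tuning, smoothing to suppress jitter, and gesture recognition, for instance mapping an open mouth to a click.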
In addition to accessibility, Google is unveiling over 20 new Chromebook models this year, including the Lenovo Chromebook Plus 2-in-1, to complement its existing lines. The devices target educators, students, and general users seeking enhanced performance and versatility.
Google has also introduced ‘Class Tools’ for ChromeOS, which offer teachers real-time screen-sharing capabilities. These tools allow educators to share content directly with students, monitor their progress, and activate live captions or translations during lessons. Integration with Figma’s FigJam now brings interactive whiteboard assignments to Google Classroom, promoting collaboration and creative group work. Together, these updates aim to enhance accessibility and productivity in education.
Meta users in the US are experiencing an unusual phenomenon: accounts they deliberately unfollowed, those of President Donald Trump, Vice President JD Vance, and first lady Melania Trump, are reappearing in their following lists. The issue emerged after users intentionally unfollowed these accounts following the administration’s transition. Users, including actress Demi Lovato and comedian Sarah Colonna, expressed frustration over the inability to maintain their choice to unfollow prominent political figures.
Upon the change of administration, official White House social media accounts are supposed to transition smoothly to the new leaders. While Meta’s communications director Andy Stone acknowledged that followers from the Biden administration were carried over to Trump’s accounts, he confirmed that users were not being forced to re-follow these profiles. Stone suggested that delays in processing follow and unfollow requests might contribute to the confusion experienced by users.
Many individuals reported that the accounts reappeared even after they unfollowed them multiple times, raising questions about the technical cause. Users are expressing concerns over privacy and choice on social media platforms, as the ability to curate their feeds appears compromised. More broadly, the automatic re-following raises questions about user control in digital spaces.
As Meta has yet to release a detailed response to the reported glitch, users continue to voice their concerns across multiple platforms. The situation underscores an ongoing need for clarity and assurance regarding user preferences in social media interactions, especially during a politically sensitive time.
The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen over reports and user complaints suggesting that iPhone users are charged significantly higher fares than Android users for the same rides. The investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are occurring and whether they constitute unfair trade practices.
The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.
Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.
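How an investigator might substantiate a device-based fare gap is worth spelling out: collect quotes for the identical route at the same moment from an iPhone and an Android handset, repeat across many routes and times, and test whether the paired differences are distinguishable from surge noise. A minimal sketch of that test follows; the fares are invented placeholders, not observed data:

```python
# Sketch of a paired test for device-based price discrimination: quote the
# same route at the same moment on both devices, then test the differences.
# Requires `pip install scipy`; all fares below are invented placeholders.
from statistics import mean

from scipy.stats import ttest_rel

# (iphone_fare, android_fare) for the same route and time, in rupees.
paired_quotes = [
    (312, 290), (188, 181), (455, 430), (240, 242),
    (510, 478), (129, 125), (365, 349), (275, 268),
]

iphone, android = zip(*paired_quotes)
t_stat, p_value = ttest_rel(iphone, android)

print(f"mean iPhone fare:  {mean(iphone):.0f}")
print(f"mean Android fare: {mean(android):.0f}")
# A small p-value suggests the gap is systematic rather than random surge
# noise; a credible audit would need hundreds of simultaneous paired quotes.
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.4f}")
```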
Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.
LinkedIn, owned by Microsoft, faces a class-action lawsuit from its Premium customers who allege that the platform improperly shared their private messages with third parties to train AI models. The lawsuit alleges that LinkedIn introduced a new privacy setting last August that allowed users to control the sharing of their data, yet failed to adequately inform them about the use of their messages for AI training.
Customers claim that LinkedIn quietly updated its privacy policy on 18 September to disclose this data usage, while also stating that opting out of data sharing would not undo training that had already taken place.
The plaintiffs, representing millions of Premium users, seek damages for breaches of contract and violations of California’s unfair competition laws. In addition, they demand compensation of $1,000 for each individual affected by alleged violations of the federal Stored Communications Act. The lawsuit highlights concerns over the potential misuse of customer data, asserting that LinkedIn deliberately obscured its practices to evade scrutiny regarding user privacy.
LinkedIn has denied the allegations, stating that the claims lack merit. The legal action came just hours after President Donald Trump announced a significant AI investment initiative backed by Microsoft and other major companies. The case, De La Torre v. LinkedIn Corp, was filed in the federal district court in San Jose, California.
With privacy becoming an increasingly crucial issue, the implications of this lawsuit could resonate throughout the tech industry. Customers are scrutinising platforms’ commitments to safeguarding personal information, especially in the context of rapidly evolving AI technologies.