Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.
Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.
Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, the organisers hope the course will give them a competitive edge. The programme has already started, but registration remains open, and interested individuals are encouraged to contact Digital Greenhouse.
With Germany’s parliamentary elections just weeks away, lawmakers are warning that authoritarian states, including Russia, are intensifying disinformation efforts to destabilise the country. Authorities are particularly concerned about a Russian campaign, known as Doppelgänger, which has been active since 2022 and aims to undermine Western support for Ukraine. The campaign has been linked to fake social media accounts and misleading content in Germany, France, and the US.
CSU MP Thomas Erndl confirmed that Russia is attempting to influence European elections, including Germany's. He argued that disinformation campaigns are contributing to the rise of right-wing populist parties such as the AfD by sowing distrust in state institutions and painting foreigners and refugees as a problem. Erndl emphasised the need for improved defences, including modern technologies such as AI to detect disinformation, alongside greater public awareness and education.
The German Foreign Ministry recently reported the identification of over 50,000 fake X accounts associated with the Doppelgänger campaign. These accounts mimic credible news outlets like Der Spiegel and Welt to spread fabricated articles, amplifying propaganda. Lawmakers stress the need for stronger cooperation within Europe and better tools for intelligence agencies to combat these threats, even suggesting that a shift in focus from privacy to security may be necessary to tackle the issue effectively.
Greens MP Konstantin von Notz highlighted the security risks posed by disinformation campaigns, warning that authoritarian regimes like Russia and China are targeting democratic societies, including Germany. He called for stricter regulation of online platforms, stronger counterintelligence efforts, and increased media literacy to bolster social resilience. As the election date approaches, lawmakers urge both government agencies and the public to remain vigilant against the growing threat of foreign interference.
The United Kingdom is set to become the first country to criminalise the use of AI to create child sexual abuse images. New offences will target AI-generated explicit content, including tools that ‘nudeify’ real-life images of children. The move follows a sharp rise in AI-generated abuse material, with reports increasing nearly five-fold in 2024, according to the Internet Watch Foundation.
The government warns that predators are using AI to disguise their identities and blackmail children into further exploitation. New laws will criminalise the possession, creation, or distribution of AI tools designed for child abuse material, as well as so-called ‘paedophile manuals’ that provide instructions on using such technology. Websites hosting AI-generated child abuse content will also be targeted, and authorities will gain powers to unlock digital devices for inspection.
The measures will be included in the upcoming Crime and Policing Bill. Earlier this month, Britain also announced plans to outlaw AI-generated ‘deepfake’ pornography, making it illegal to create or share sexually explicit deepfakes. Officials say the new laws will help protect children from emerging online threats.
Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application.’ That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.
Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite being exempted from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Studies conducted by academics have shown that the platform delivers problematic content within minutes of search queries, including harmful videos on topics like sex, COVID-19, and European history.
To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.
Microsoft-backed OpenAI is seeking to prevent some of India’s largest media organisations, including those linked to Gautam Adani and Mukesh Ambani, from joining a copyright lawsuit. The case, initiated by news agency ANI last year, involves claims that AI systems like ChatGPT use copyrighted material without permission, sparking a wider debate over AI and intellectual property in the country. India ranks as OpenAI’s second-largest market by user numbers, following the US.
OpenAI has argued its AI services rely only on publicly available data and adhere to fair use principles. During Tuesday’s hearing, OpenAI’s lawyer opposed bids by additional media organisations to join the case, stating he would submit formal objections in writing. The company has also challenged the court’s jurisdiction, asserting that its servers are located outside India. The case is scheduled to continue in February.
The Federation of Indian Publishers has accused ChatGPT of harming its members' business by summarising books drawn from unlicensed online copies. OpenAI denies these claims, maintaining that its tools do not infringe copyright. Prominent digital media groups, including the Indian Express and Hindustan Times, allege that ChatGPT scrapes and reproduces their content, prompting their involvement in the lawsuit.
Tensions escalated over media coverage of the case, with OpenAI objecting to reports based on non-public court filings. Lawyers representing media groups called such claims unfounded. The lawsuit is poised to shape the future of AI and copyright law in India, as courts worldwide grapple with similar challenges.
Microsoft and OpenAI are investigating whether a group linked to Chinese AI startup DeepSeek accessed OpenAI data without authorisation. Bloomberg News reported that Microsoft’s security team detected large-scale data transfers last autumn using OpenAI’s application programming interface (API).
Microsoft, OpenAI's largest investor, flagged the suspicious activity to the AI firm. DeepSeek gained attention after its low-cost AI assistant surpassed OpenAI's ChatGPT on Apple's App Store in the US, triggering a selloff in tech stocks.
White House AI and crypto adviser David Sacks suggested DeepSeek may have stolen US intellectual property by extracting knowledge from OpenAI’s models. An OpenAI spokesperson acknowledged that foreign firms frequently attempt to replicate its technology and stressed the importance of government collaboration to protect advanced AI models.
Microsoft declined to comment on the matter, and DeepSeek did not respond to requests for comment. OpenAI said it actively counters unauthorised attempts to replicate its technology but did not name DeepSeek specifically.
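For readers unfamiliar with the technique Sacks alludes to, 'distillation' broadly means training a smaller 'student' model to imitate a larger 'teacher' model's outputs. The sketch below illustrates only the generic data-collection step, written against OpenAI's public Python SDK; the teacher model name, prompts, and output file are hypothetical stand-ins, and the code depicts the general concept rather than anything DeepSeek is alleged to have done.

```python
# Illustrative sketch of distillation-style data collection: harvest a
# larger "teacher" model's answers to build supervised training data
# for a smaller "student" model. All specifics here are hypothetical.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt set; real efforts would use far larger corpora.
prompts = [
    "Explain gradient descent in two sentences.",
    "Summarise the 2008 financial crisis in one paragraph.",
]

with open("distillation_data.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        # Ask the teacher model for an answer via the public API.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice of teacher
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # Each (prompt, answer) pair becomes one training example
        # for fine-tuning a student model.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

Done at scale, collection of this kind would typically breach an API provider's terms of service, which is part of why unusually large transfer volumes can trigger scrutiny.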
The UK government has demanded urgent action from major social media platforms to remove violent and extremist content following the Southport killings. Home Secretary Yvette Cooper criticised the ease with which Axel Rudakubana, who murdered three children and attempted to kill ten others, accessed an al-Qaeda training manual and other violent material online. She described the availability of such content as “unacceptable” and called for immediate action.
Rudakubana, jailed last week for his crimes, had reportedly used techniques from the manual during the attack and watched graphic footage of a similar incident before carrying it out. While platforms like YouTube and TikTok are expected to comply with the UK's Online Safety Act when it comes into force in March, Cooper argued that companies have a 'moral responsibility' to act now rather than waiting for legal enforcement.
The Southport attack has intensified scrutiny on gaps in counter-terrorism measures and the role of online content in fostering extremism. The government has announced a public inquiry into missed opportunities to intervene, revealing that Rudakubana had been referred to the Prevent programme multiple times. Cooper’s call for immediate action underscores the urgent need to prevent further tragedies linked to online extremism.
US President Donald Trump has signed an executive order aimed at solidifying the country’s dominance in artificial intelligence. The directive includes creating an Artificial Intelligence Action Plan within 180 days to promote economic competitiveness, national security, and human well-being. The White House confirmed this initiative as part of efforts to position the nation as a global AI leader.
Trump has also instructed his AI and national security advisers to dismantle policies implemented by former President Joe Biden. Among these is a 2023 order requiring AI developers to submit safety test results to the government for systems with potential risks to national security, public safety, or the economy.
Biden's policies aimed to regulate AI development under the Defense Production Act to minimise the risks posed by advanced technologies. Critics argue the approach imposed unnecessary constraints, while supporters viewed it as a safeguard against potential misuse of AI.
The latest move reflects Trump’s broader strategy to reshape the nation’s AI framework, focusing on economic growth and innovation while rolling back measures seen as restrictive.
Google is rolling out a new accessibility feature for Chromebooks that allows users to control their devices using head and facial movements. Initially introduced in December, the tool is designed for people with motor impairments and uses AI to let head and facial movements act as a virtual cursor. The feature is available on Chromebooks with 8GB of RAM or more and builds on Google's earlier efforts, such as its Project Gameface accessibility tool for Windows and Android.
In addition to accessibility, Google is unveiling over 20 new Chromebook models this year, including the Lenovo Chromebook Plus 2-in-1, to complement its existing lines. The devices target educators, students, and general users seeking enhanced performance and versatility.
Google has also introduced ‘Class Tools’ for ChromeOS, which offer teachers real-time screen-sharing capabilities. These tools allow educators to share content directly with students, monitor their progress, and activate live captions or translations during lessons. Integration with Figma’s FigJam now brings interactive whiteboard assignments to Google Classroom, promoting collaboration and creative group work. Together, these updates aim to enhance accessibility and productivity in education.
Meta users in the US are reporting an unusual glitch: their accounts have automatically re-followed President Donald Trump, Vice President JD Vance, and first lady Melania Trump. The issue emerged after users deliberately unfollowed these accounts following the change of administration. Users including actress Demi Lovato and comedian Sarah Colonna voiced frustration that their choice to unfollow prominent political figures was not being honoured.
When an administration changes, official White House social media accounts are handed over to the incoming leadership. Meta's communications director Andy Stone acknowledged that followers of the Biden-era accounts were carried over to Trump's, but said users were not being forced to re-follow the profiles. Stone suggested that delays in processing follow and unfollow requests might explain the confusion.
Many individuals reported that the problem recurred despite unfollowing the accounts multiple times, raising questions about the underlying cause. Users are voicing concerns about privacy and choice on social media platforms, as their ability to curate their own feeds appears compromised. The episode also raises broader questions about user control in digital spaces.
As Meta has yet to release a detailed response to the reported glitch, users continue to voice their concerns across multiple platforms. The situation underscores an ongoing need for clarity and assurance regarding user preferences in social media interactions, especially during a politically sensitive time.