TikTok is stepping beyond the digital screen with its first UK and Ireland Awards, celebrating 72 creators across 12 categories. From travel influencers to comedy sketch stars, these creators, with over 101 million combined followers, will be recognised in a London ceremony, highlighting the growing cultural impact of short-form content.
The platform’s nominees range from lifestyle influencers to niche creators like fossil hunters and ASMR pool cleaners. TikTok’s Melissa McFarlane emphasised that the awards showcase creators’ influence on everything from literature to cooking, proving that TikTok communities are shaping trends well beyond the app.
Nominees like Ayamé Ponder, known for her comedy sketches, are also using their platforms for broader causes. Meanwhile, creators Jade Beaty and Ryan Losasso hope the awards will inspire others to try content creation, a process they say takes considerable effort despite misconceptions.
With millions of European users and global awards spanning 20 regions, TikTok aims to underline the value of its creators’ work. As the app continues to define social media culture, these UK awards celebrate the diverse talents driving its viral success.
The UK faces an escalating cyber threat from hostile states and criminal gangs, according to Richard Horne, head of the National Cyber Security Centre (NCSC). In his first major speech, Horne warned that the severity of these risks is being underestimated, citing a significant rise in cyber incidents, particularly from Russia and China. He described Russia’s cyber activity as ‘aggressive and reckless’ while noting that China’s operations are highly sophisticated with growing global ambitions.
Over the past year, the NCSC responded to 430 cyber incidents, a marked increase from the previous year. Among them, 12 were deemed especially severe, a threefold rise from 2023. The agency highlighted the growing threats to critical infrastructure and supply chains, urging both public and private sectors to strengthen their cyber defences. The UK also faces a growing number of ransomware attacks, often originating from Russia, which target key organisations like the British Library and healthcare services.
Horne emphasised the human costs of cyber-attacks, citing how these incidents disrupt vital services like healthcare and education. The rise in ransomware, often linked to Russian criminal gangs, is a major concern, and the NCSC is working to address these challenges. The agency’s review also pointed to increasing cyber activity from China, Iran, and North Korea, with these states targeting the UK’s infrastructure and private sector.
Experts like Professor Alan Woodward of the University of Surrey echoed Horne’s concerns, urging the UK to step up its cybersecurity efforts to keep pace with evolving threats. With adversaries growing more sophisticated, the government and businesses must act swiftly to protect the country’s digital infrastructure.
Heathrow Airport, one of the world’s busiest, is trialling an advanced AI system named ‘Amy’ to assist air traffic controllers in managing its crowded airspace. Handling nearly half a million flights annually, Heathrow aims to improve safety and efficiency through real-time data and advanced tracking capabilities provided by the AI system.
Amy integrates radar and 4K video data to give controllers a detailed visualisation of aircraft positions, even when out of sight. Designed by NATS, the UK’s main air traffic services provider, the system offers vital information such as flight numbers and aircraft types, helping controllers make faster, more informed decisions. After testing on over 40,000 flights, NATS plans to fully operationalise a ‘digital contingency tower’ by 2027 to ensure backup in emergencies.
Despite its promise, experts caution against over-reliance on AI. They highlight potential limitations, such as insufficient contextual judgment and challenges in handling unexpected scenarios. Colin Rigby from Keele University emphasised that AI should complement human operators rather than replace them.
Major airports worldwide, including those in Singapore, New York, and Hong Kong, are exploring similar AI-driven solutions, signalling a shift toward digital transformation in air traffic management.
A recent investigation revealed that most top-selling mobile games in the UK fail to disclose the presence of loot boxes in their advertisements, despite regulations mandating transparency. Loot boxes, which provide randomised in-game items often obtained through payments, have drawn criticism for fostering addictive behaviours and targeting vulnerable groups, including children. Of the 45 top-grossing games analysed on Google Play, only two clearly mentioned loot boxes in their advertisements.
The UK Advertising Standards Authority, which oversees compliance, acknowledges the issue and promises further action but has faced criticism for its slow and limited enforcement. Critics argue that lax self-regulation within the gaming industry enables companies to prioritise profits over player well-being, particularly as loot boxes reportedly generate $15 billion annually.
Advocacy groups and researchers have voiced alarm over these findings, warning of long-term consequences. Zoë Osmond of GambleAware emphasised the risks of exposing children to gambling-like features in games, which could lead to harmful habits later in life. The gaming industry has so far resisted stricter government intervention, despite mounting evidence of non-compliance and harm.
British Prime Minister Keir Starmer inaugurated London’s first Google-backed AI Campus in Camden on Wednesday. The facility aims to equip young people with vital skills in AI and machine learning, addressing the growing demand for expertise in these areas. Located near Google’s upcoming offices in King’s Cross, the campus has already launched a two-year pilot project for students aged 16 to 18.
The pilot cohort of 32 students will benefit from AI-focused mentoring and resources provided by Google DeepMind. They will engage in real-world projects that integrate AI with health, social sciences, and the arts. The campus, a collaboration between Google and local authorities, seeks to inspire students from diverse backgrounds to envision themselves as leaders in the AI-driven future.
Starmer hailed the initiative as a transformative step for young people, particularly those in challenging circumstances. He emphasised that the programme represents a significant milestone in preparing the next generation to navigate AI’s boundless potential. ‘The possibilities of AI are incredible, and projects like this will shape the future,’ he stated.
Google further announced £865,000 in funding for a national AI literacy programme. This initiative, led by the Raspberry Pi Foundation and Parent Zone, aims to train teachers and reach 250,000 students by 2026. Debbie Weinstein, Google’s UK and Ireland managing director, highlighted the programme’s role in ensuring that AI’s vast opportunities are accessible to all, potentially unlocking £400 billion in economic benefits for the UK.
British police forces are retreating from using X, formerly known as Twitter, citing concerns over violent content and misinformation. A Reuters survey found significant reductions in posting activity from several forces, with some all but abandoning the platform. Critics argue the platform fosters hate speech under Elon Musk’s leadership, a claim he disputes, emphasising his commitment to free speech.
West Midlands Police, which serves Birmingham, reduced posts by 95% compared to last year. Lancashire Police cut its use by three-quarters, while Derbyshire Police has posted nothing but responses to queries since August. North Wales Police became the first force to fully withdraw, stating the platform no longer served as an effective communication tool.
Some forces, however, continue limited use of X for urgent updates like road closures, while increasingly favouring Facebook and Instagram to engage with communities. Platforms such as Threads and Bluesky are also emerging alternatives, though X remains more widely used in Britain despite a 19% drop in app users over the past year.
The shift reflects broader discontent with X among organisations, including media outlets and non-profits, due to concerns over Musk’s influence and the platform’s growing extremism. A government source confirmed its preference for other social media platforms for advertising while maintaining limited unpaid use of X.
Apple and Google face growing scrutiny in the UK over allegations of stifling competition in mobile web browsers. The UK Competition and Markets Authority (CMA) claims that both companies use their dominant positions to restrict consumer choice, citing Apple’s limits on progressive web apps as a barrier to innovation on iOS devices. Progressive web apps could bypass app stores and their fees, offering faster and more secure browsing.
The CMA’s report also points to a revenue-sharing deal between Apple and Google that discourages competition in mobile ecosystems. Both companies have responded, with Apple defending its privacy and security measures and Google emphasising the openness of its Android platform.
This investigation is part of a broader crackdown on Big Tech, with regulators in the US and UK aiming to curb monopolistic practices. The CMA plans to finalise its report in March and use upcoming digital competition laws to address these concerns.
British businesses have lost an estimated £44 billion ($55 billion) in revenue over the past five years due to cyberattacks, with more than half of private sector companies experiencing at least one incident, according to a report by insurance broker Howden. Companies earning over £100 million annually faced the highest risk, with cyberattacks cutting 1.9% of revenue on average.
The report identified compromised emails (20%) and data theft (18%) as the leading causes of cyber incidents. Despite these risks, only 61% of businesses used anti-virus software, and just 55% had network firewalls, with cost and limited IT resources cited as major obstacles to better cybersecurity.
‘Cybercrime is rising as businesses rely more on technology, exposing vulnerabilities to malicious actors,’ said Sarah Neild, head of UK cyber retail at Howden. The findings are based on a September survey of 905 UK private-sector IT leaders conducted by YouGov.
The UK government has announced the launch of a Laboratory for AI Security Research (LASR), an initiative to protect against emerging AI-driven threats and bolster Britain’s cyber resilience. The lab, backed by an initial £8.22 million in government funding, will bring together experts from academia, industry, and government to address the evolving challenges AI poses to national security.
Speaking at the NATO Cyber Defence Conference in London, the Chancellor of the Duchy of Lancaster emphasised that AI is revolutionising national security and noted that ‘[…] as we develop this technology, there’s a danger it could be weaponised against us. Our adversaries are exploring how to use AI on the physical and cyber battlefield’.
LASR will collaborate with leading institutions, including the Alan Turing Institute, Oxford University, Queen’s University Belfast, and Plexal, alongside government agencies such as GCHQ, the National Cyber Security Centre, and the MOD’s Defence Science and Technology Laboratory. Partnerships will extend to NATO allies and Five Eyes countries, fostering an international approach to AI security.
In addition to LASR, the government announced a £1 million incident response project to help allies respond more effectively to cyberattacks. This initiative will further enhance international cooperation in managing cyber incidents.
The government notes that the announcement aligns with its broader agenda, including the forthcoming Cyber Security and Resilience Bill, to be introduced to Parliament in 2025, and the designation of data centres as critical national infrastructure (CNI). Both measures are intended to secure the UK’s position as a global leader in cybersecurity and AI innovation.
The UK’s Competition and Markets Authority (CMA) has decided against investigating the partnership between Google’s parent company, Alphabet, and AI startup Anthropic. Following a detailed review, the CMA found the agreement did not qualify as a merger under UK competition law.
Concerns over competition prompted the CMA to scrutinise the deal, focusing on whether it gave Alphabet control over Anthropic’s business. The authority concluded that Alphabet’s involvement, including financial support and computing resources, did not result in material influence or loss of independence for Anthropic.
The agreement includes Google providing Anthropic with cloud services, distributing its AI models, and offering convertible debt financing. While the partnership is significant, Anthropic’s UK turnover fell below the £70 million threshold required for the deal to qualify as a merger.
This ruling follows similar CMA decisions involving tech companies and AI startups, including clearing Microsoft’s investment in Mistral and Amazon’s $4 billion stake in Anthropic. The watchdog remains vigilant about potential anti-competitive practices in the rapidly growing AI sector.