AI system keeps 6,000 deer off UK railways

An AI-based system has successfully prevented nearly 6,000 deer from crossing busy rail lines in England, enhancing safety for both wildlife and train operations. Network Rail and train operator LNER first installed the system at Stoke Junction in May 2023, later expanding it to Little Bytham in December 2023. The technology uses AI to detect approaching deer and activates alarms to deter them, with cameras monitoring the animals until they are safely away from the tracks.
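The detect-then-deter loop described above can be sketched in a few lines of Python. This is purely illustrative: every name here (Detection, deter_deer, the confidence threshold) is an assumption for the sketch, not the actual Network Rail or LNER implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int
    confidence: float  # detector score between 0 and 1
    near_track: bool   # deer inside the safety zone around the rails

CONFIDENCE_THRESHOLD = 0.8  # illustrative value, not from the trial

def deter_deer(detections):
    """Replay a stream of camera detections: sound the alarm while a
    deer is confidently seen near the track, and count one deterrence
    event each time the animal moves clear again."""
    alarm_on = False
    deterred = 0
    for d in detections:
        deer_present = d.confidence >= CONFIDENCE_THRESHOLD and d.near_track
        if deer_present and not alarm_on:
            alarm_on = True        # deer detected: activate the deterrent alarm
        elif not deer_present and alarm_on:
            alarm_on = False       # deer has moved clear: count one success
            deterred += 1
    return deterred

# Example: one deer approaches, lingers, then leaves the monitored zone
stream = [
    Detection(1, 0.2, False),
    Detection(2, 0.90, True),
    Detection(3, 0.95, True),
    Detection(4, 0.10, False),
]
print(deter_deer(stream))  # one deterrence event
```

The state machine matters: the cameras keep monitoring after the first detection, so a deterrence is only counted once the animal is confirmed clear of the tracks rather than on the initial sighting.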

The trial showed promising results, with 2,765 deer deterred at Little Bytham and 3,147 at Stoke Junction. Network Rail officials expressed optimism about the system’s effectiveness and plan to expand its use. Deer are a significant concern on Britain’s railways, with 349 incidents reported in the past year, the highest among animal-related incidents. The UK’s deer population has risen dramatically to two million, the highest level in a millennium, due to factors such as milder winters and increased woodland.

UK and France to launch consultation on misuse of commercial cyber intrusion tools

The United Kingdom and France are set to initiate a consultation on addressing the proliferation and irresponsible use of commercial cyber intrusion tools, according to a UK government announcement.

The consultation is part of the Pall Mall Process, a joint UK-French effort focused on addressing the misuse of commercial hacking tools like spyware. The Pall Mall Process was announced last year when the UK and France, alongside major tech companies like Google, Microsoft, and Meta, issued a joint statement acknowledging the urgent need for decisive action against the malicious exploitation of cyberespionage tools. At a conference convened by the UK and France with representatives from 35 nations, concerns were raised regarding the proliferation of spyware used to listen to phone calls, steal photos and remotely operate cameras and microphones.

The process was launched after President Joe Biden issued an executive order prohibiting federal agencies from utilizing commercial spyware that might threaten US security or had been exploited by foreign entities. The executive order aimed to tackle the increasing instances of spyware abuse internationally, as well as reports of its improper use against US officials, government infrastructure, and ordinary citizens. In 2021, the Biden administration had also taken steps against spyware vendor NSO Group, founded by two former Israeli military officers, by adding the company to its Entity List.

As part of this consultation, both governments invite stakeholders to provide insights on best practices concerning commercial cyber intrusion capabilities (CCICs) across three key groups:

  • States: Acting as both regulators and potential consumers within the CCIC market.
  • Industry organizations: Engaged in or connected to the CCIC market, along with their broader value chain.
  • Civil society, experts, and threat researchers: Possessing relevant expertise on the risks posed by the CCIC market and the strategies to address them.

Experts have previously raised concerns about the Pall Mall Process and its goals, asking whether the initiative will be geographically diverse and include a broad range of countries. Will stakeholders be involved, and will companies providing some of the intrusive tools, in particular, be invited for discussions? What does success look like for this process, and for whom?

To participate in this consultation, please follow this link.

Social media platform Bluesky gains popularity in UK after Musk’s riot remarks

Bluesky, a social media platform, has reported a significant increase in signups in the United Kingdom recently as users look for alternatives to Elon Musk’s X. The increase follows Musk’s controversial remarks on ongoing riots in the UK, which have driven users, including several Members of Parliament, to explore other platforms. The company announced that it had experienced a 60% rise in activity from UK accounts.

Musk has faced criticism for inflaming tensions after riots in Britain were sparked by misinformation surrounding the murder of three girls in northern England. The Tesla CEO allegedly used X to disseminate misleading information to his vast audience, including a post claiming that civil war in Britain was ‘inevitable.’ The case has prompted Prime Minister Keir Starmer to respond and increased calls for the government to accelerate the implementation of online content regulations.

Bluesky highlighted that the UK had the most signups of any country for five of the last seven days. Once supported by Twitter co-founder Jack Dorsey, the platform is among the many apps vying to replace Twitter after Musk’s turbulent takeover in late 2022.

As of July, Bluesky’s monthly active user base was approximately 688,568, which is small compared to X’s 76.9 million users, according to Similarweb, a digital market intelligence firm. Despite its smaller size, the recent surge in UK signups suggests growing interest in alternative social media platforms.

Man who used AI to create indecent images of children faces jail

In a groundbreaking case in the UK, a 27-year-old man named Hugh Nelson has admitted to using AI technology to create indecent images of children, a crime for which he is expected to be jailed. Nelson pleaded guilty to multiple charges at Bolton Crown Court, including attempting to incite a minor into sexual activity, distributing and making indecent images, and publishing obscene content. His sentencing is scheduled for 25 September.

The case, described by Greater Manchester Police (GMP) as ‘deeply horrifying,’ marks the first instance in the region—and possibly nationally—where AI technology was used to transform ordinary photographs of children into indecent images. Detective Constable Carly Baines, who led the investigation, emphasised the global reach of Nelson’s crimes, noting that arrests and safeguarding measures have been implemented in various locations worldwide.

Authorities hope this case will influence future legislation, as the use of AI in such offences is not yet fully addressed by current UK laws. The Crown Prosecution Service highlighted the severity of the crime, warning that the misuse of emerging technologies to generate abusive imagery could lead to an increased risk of actual child abuse.

UK considers revising Online Safety Act amid riots

The British government is considering revisions to the Online Safety Act in response to a recent wave of racist riots allegedly fueled by misinformation spread online. The act, passed in October 2023 but not yet enforced, currently allows regulators to fine social media companies up to 10% of their global turnover if they fail to remove illegal content, such as incitements to violence or hate speech. However, proposed changes could extend these penalties to platforms that permit ‘legal but harmful’ content, like misinformation, to thrive.

Britain’s Labour government inherited the act from the Conservatives, who had spent considerable time adjusting the bill to balance free speech with the need to curb online harms. A recent YouGov poll found that 66% of adults believe social media companies should be held accountable for posts inciting criminal behaviour, and 70% feel these companies are not sufficiently regulated. Additionally, 71% of respondents criticised social media platforms for not doing enough to combat misinformation during the riots.

In response to these concerns, Cabinet Office Minister Nick Thomas-Symonds announced that the government is prepared to revisit the act’s framework to ensure its effectiveness. London Mayor Sadiq Khan also voiced his belief that the law is not ‘fit for purpose’ and called for urgent amendments in light of the recent unrest.

Why does it matter?

The riots, which spread across Britain last week, were triggered by false online claims that the perpetrator of a 29 July knife attack, which killed three young girls, was a Muslim migrant. As tensions escalated, X owner Elon Musk contributed to the chaos by sharing misleading information with his large following, including a statement suggesting that civil war in Britain was ‘inevitable.’ Prime Minister Keir Starmer’s spokesperson condemned these comments, stating there was ‘no justification’ for such rhetoric.

UK riots escalate as Elon Musk stirs tensions with conspiracy theory

The CEO of Tesla has drawn criticism after labelling UK Prime Minister Keir Starmer as ‘#TwoTierKier’ and promoting a far-right conspiracy theory that claims white rioters are treated more harshly by the police than minorities. His comments have coincided with rising tensions and violent protests across the UK, where asylum centres are being boarded up as a precaution. Amidst the unrest, six thousand police officers are on standby to protect dozens of targeted locations, including asylum centres and law firms, from far-right attacks.

Elon Musk’s tweets have intensified the situation, with officials struggling to get posts removed from X, formerly known as Twitter, that are deemed threats to national security. The riots were triggered by the recent deaths of three children in Southport, leading to a surge in conspiracy theories and far-right activity on social media platforms, particularly Telegram. The messaging app has taken some action by removing a channel promoting violent protests, though it’s unclear whether this was prompted by UK authorities.

United Kingdom law enforcement has been cracking down on those inciting violence online, with arrests already being made. One high-profile arrest involved the wife of a Northampton councillor who called for asylum seeker hotels to be set on fire in a post on X. Meanwhile, rioters have been using TikTok Live to broadcast their actions, providing police with evidence to prosecute and charge over 100 individuals, with some already facing court proceedings.

Critics argue that Musk’s influence is exacerbating the situation by amplifying extremist voices, including those who had been previously banned from social media. Courts Minister Heidi Alexander condemned Musk’s actions, calling them ‘irresponsible’ and ‘unconscionable.’ Meanwhile, Starmer has focused on the broader issue of online radicalisation, stressing the importance of legal consequences for those promoting violence.

EU scrutiny of X could expand due to UK riots

The European Commission’s ongoing investigation into social media platform X, owned by Elon Musk, could factor in the company’s handling of harmful content during the recent UK riots.

Charges against X were issued last month under the Digital Services Act (DSA), which mandates stricter controls on illegal content and public security risks for large online platforms.

Although the UK is no longer part of the EU, content shared in Britain that violates DSA rules might still reach European users, potentially breaching the law. Recent events in Britain, where far-right and anti-Muslim groups exploited the fatal stabbing of three young girls to spread disinformation and incite violence, have raised concerns.

The European Commission acknowledged that while the DSA does not cover actions outside the EU, content visible in Europe from the UK could influence their proceedings against X. The company has yet to respond to these developments.

Elon Musk under fire as social media giant X implicated in fuelling UK riots

Elon Musk is under fire for his social media posts, which many believe have exacerbated the ongoing riots in Britain. Musk, known for his provocative online presence, has shared riot footage on his platform, X, and made controversial remarks, including predicting a ‘civil war’ and criticising Prime Minister Keir Starmer and the British government for prioritising speech policing over community safety.

The unrest began after a stabbing at a Taylor Swift-themed dance class in Southport, England, resulted in the deaths of three young girls. False information spread online alleged that the attacker was an illegal Muslim immigrant. However, the suspect, Axel Rudakubana, is a 17-year-old born in Cardiff, Wales, with unknown religious affiliation, though his parents are from predominantly Christian Rwanda.

Despite the facts, anti-immigrant protests have erupted in at least 15 cities across Britain, leading to the most significant civil disorder since 2011. Rioters have targeted mosques and hotels housing asylum seekers, with much violence directed at the police.

Prime Minister Starmer has criticised social media companies for allowing violent disinformation to spread. He specifically called out Musk for reinstating banned far-right figures, including activist Tommy Robinson. Technology Secretary Peter Kyle has met with representatives from major tech companies like TikTok, Meta, Google, and X to stress their duty to curb the spread of harmful misinformation.

Publicly, Musk has argued that the government should focus on its duties, mocking Starmer and questioning the UK’s approach to policing speech.

Home Secretary Yvette Cooper has stated that social media has amplified disinformation, promising government action against tech giants and online criminality. However, Britain’s Online Safety Act, which mandates platforms to address illegal content, will be fully effective next year. Meanwhile, the EU’s Digital Services Act, which Britain is no longer part of, is already in effect.

UK scrutinises Google-Alphabet AI deal

Britain’s antitrust watchdog is examining Google-parent Alphabet’s partnership with AI startup Anthropic to assess its impact on market competition. The scrutiny comes amid growing global concerns about the influence of major tech companies on the AI industry following the AI boom sparked by Microsoft-backed OpenAI’s release of ChatGPT.

Regulators are scrutinising deals between big tech giants and AI startups, including Microsoft’s collaborations with OpenAI, Inflection AI, and Mistral AI, as well as Alphabet’s investments in companies like Anthropic and Cohere. Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, develops AI models that compete with OpenAI’s GPT series.

Last week, the UK’s Competition and Markets Authority (CMA) joined forces with US and EU regulators to ensure fair competition in the AI sector. The CMA is now inviting public comments on the Alphabet-Anthropic partnership until 13 August before deciding whether to initiate a formal investigation. The CMA’s decision will be based on feedback received during this initial consultation.

Personal data of 40 million voters exposed in UK hack

The UK’s Electoral Commission has faced criticism for failing to safeguard the personal data of 40 million voters following an extensive breach that occurred in August 2021 but was only discovered in October 2022. The Information Commissioner’s Office (ICO) reported that the breach was due to the Electoral Commission’s outdated security systems, including unpatched servers and inadequate password management.

The Conservative government previously attributed the breach to Chinese hackers, leading to diplomatic tensions and sanctions from the US and its allies, including the UK and New Zealand. Despite these allegations, no confirmed evidence exists that the stolen data has been misused.

In response to the incident, the Electoral Commission has overhauled its security measures, including updating its infrastructure and implementing stricter password controls and multi-factor authentication. The Commission has assured that cybersecurity experts have validated these new measures.

China has consistently denied any wrongdoing, and the UK’s Labour Party has vowed to take a stronger stance on cyber threats and interference in British democracy. Labour plans to audit UK-China relations and introduce new cybersecurity legislation to enhance national resilience against future attacks.