UK considers revising Online Safety Act amid riots

The British government is considering revisions to the Online Safety Act in response to a recent wave of racist riots allegedly fuelled by misinformation spread online. The act, passed in October 2023 but not yet in force, allows the regulator Ofcom to fine social media companies up to 10% of their global turnover if they fail to remove illegal content, such as incitement to violence or hate speech. However, proposed changes could extend these penalties to platforms that allow ‘legal but harmful’ content, such as misinformation, to thrive.

Britain’s Labour government inherited the act from the Conservatives, who had spent considerable time adjusting the bill to balance free speech with the need to curb online harms. A recent YouGov poll found that 66% of adults believe social media companies should be held accountable for posts inciting criminal behaviour, and 70% feel these companies are not sufficiently regulated. Additionally, 71% of respondents criticised social media platforms for not doing enough to combat misinformation during the riots.

In response to these concerns, Cabinet Office Minister Nick Thomas-Symonds announced that the government is prepared to revisit the act’s framework to ensure its effectiveness. London Mayor Sadiq Khan also voiced his belief that the law is not ‘fit for purpose’ and called for urgent amendments in light of the recent unrest.

Why does it matter?

The riots, which spread across Britain last week, were triggered by false online claims that the perpetrator of a 29 July knife attack, which killed three young girls, was a Muslim migrant. As tensions escalated, X owner Elon Musk contributed to the chaos by sharing misleading information with his large following, including a statement suggesting that civil war in Britain was ‘inevitable.’ Prime Minister Keir Starmer’s spokesperson condemned these comments, stating there was ‘no justification’ for such rhetoric.

UK riots escalate as Elon Musk stirs tensions with conspiracy theory

The CEO of Tesla has drawn criticism after labelling UK Prime Minister Keir Starmer as ‘#TwoTierKier’ and promoting a far-right conspiracy theory that claims white rioters are treated more harshly by the police than minorities. His comments have coincided with rising tensions and violent protests across the UK, where asylum centres are being boarded up as a precaution. Amid the unrest, 6,000 police officers are on standby to protect dozens of targeted locations, including asylum centres and law firms, from far-right attacks.

Elon Musk’s tweets have intensified the situation, with officials struggling to have posts deemed threats to national security removed from X, formerly known as Twitter. The riots were triggered by the recent deaths of three children in Southport, which led to a surge in conspiracy theories and far-right activity on social media platforms, particularly Telegram. The messaging app has taken some action, removing a channel that promoted violent protests, though it is unclear whether this was prompted by UK authorities.

UK law enforcement has been cracking down on those inciting violence online, with arrests already made. One high-profile arrest involved the wife of a Northampton councillor, who had called in a post on X for hotels housing asylum seekers to be set on fire. Meanwhile, rioters broadcasting their actions on TikTok Live have handed police evidence used to prosecute and charge more than 100 individuals, some of whom are already facing court proceedings.

Critics argue that Musk’s influence is exacerbating the situation by amplifying extremist voices, including those previously banned from social media. Courts Minister Heidi Alexander condemned Musk’s actions, calling them ‘irresponsible’ and ‘unconscionable.’ Meanwhile, Starmer has focused on the broader issue of online radicalisation, stressing the importance of legal consequences for those promoting violence.

EU scrutiny of X could expand due to UK riots

The European Commission’s ongoing investigation into social media platform X, owned by Elon Musk, could factor in the company’s handling of harmful content during the recent UK riots.

Charges against X were issued last month under the Digital Services Act (DSA), which requires large online platforms to exercise stricter control over illegal content and to address risks to public security.

Although the UK is no longer part of the EU, content shared in Britain that violates DSA rules might still reach European users, potentially breaching the law. Recent events in Britain, where far-right and anti-Muslim groups exploited the fatal stabbing of three young girls to spread disinformation and incite violence, have raised concerns.

The European Commission acknowledged that while the DSA does not cover actions outside the EU, content visible in Europe from the UK could influence their proceedings against X. The company has yet to respond to these developments.

Elon Musk under fire as social media giant X implicated in fuelling UK riots

Elon Musk is under fire for his social media posts, which many believe have exacerbated the ongoing riots in Britain. Musk, known for his provocative online presence, has shared riot footage on his platform, X, and made controversial remarks, including predicting a ‘civil war’ and criticising Prime Minister Keir Starmer and the British government for prioritising speech policing over community safety.

The unrest began after a stabbing at a Taylor Swift-themed dance class in Southport, England, killed three young girls. False claims spread online suggesting the attacker was an illegal Muslim immigrant. In fact, the suspect, Axel Rudakubana, is a 17-year-old born in Cardiff, Wales; his religious affiliation is unknown, though his parents are from predominantly Christian Rwanda.

Despite the facts, anti-immigrant protests have erupted in at least 15 cities across Britain, leading to the most significant civil disorder since 2011. Rioters have targeted mosques and hotels housing asylum seekers, with much of the violence directed at the police.

Prime Minister Starmer has criticised social media companies for allowing violent disinformation to spread. He specifically called out Musk for reinstating banned far-right figures, including activist Tommy Robinson. Technology Secretary Peter Kyle has met with representatives from major tech companies like TikTok, Meta, Google, and X to stress their duty to curb the spread of harmful misinformation.

Publicly, Musk has argued that the government should focus on its duties, mocking Starmer and questioning the UK’s approach to policing speech.

Home Secretary Yvette Cooper has stated that social media amplified the disinformation, promising government action against tech giants and online criminality. However, Britain’s Online Safety Act, which requires platforms to address illegal content, will not take full effect until next year. Meanwhile, the EU’s Digital Services Act is already in force, though it does not apply to Britain, which has left the EU.

UK scrutinises Google-Alphabet AI deal

Britain’s antitrust watchdog is examining Google-parent Alphabet’s partnership with AI startup Anthropic to assess its impact on market competition. The scrutiny comes amid growing global concerns about the influence of major tech companies on the AI industry following the AI boom sparked by Microsoft-backed OpenAI’s release of ChatGPT.

Regulators are scrutinising deals between big tech companies and AI startups, including Microsoft’s collaborations with OpenAI, Inflection AI, and Mistral AI, as well as Alphabet’s investments in companies like Anthropic and Cohere. Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, develops AI models that compete with OpenAI’s GPT series.

Last week, the UK’s Competition and Markets Authority (CMA) joined forces with US and EU regulators to ensure fair competition in the AI sector. The CMA is now inviting public comments on the Alphabet-Anthropic partnership until 13 August; feedback from this initial consultation will inform its decision on whether to open a formal investigation.

Personal data of 40 million voters exposed in UK hack

The UK’s Electoral Commission has faced criticism for failing to safeguard the personal data of 40 million voters following an extensive breach that occurred in August 2021 but was only discovered in October 2022. The Information Commissioner’s Office (ICO) found that the breach stemmed from the Electoral Commission’s outdated security systems, including unpatched servers and inadequate password management.

The previous Conservative government attributed the breach to Chinese state-backed hackers, leading to diplomatic tensions and sanctions from the UK and its allies, including the US and New Zealand. Despite these allegations, there is no confirmed evidence that the stolen data has been misused.

In response to the incident, the Electoral Commission has overhauled its security measures, updating its infrastructure and implementing stricter password controls and multi-factor authentication. The Commission says cybersecurity experts have validated the new measures.

China has consistently denied any wrongdoing, and the UK’s Labour Party has vowed to take a stronger stance on cyber threats and interference in British democracy. Labour plans to audit UK-China relations and introduce new cybersecurity legislation to enhance national resilience against future attacks.

US, EU, UK pledge to protect generative AI market fairness

Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.

The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.

The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.

UK government to introduce new cyber security bill

The UK government plans to introduce a Cyber Security and Resilience Bill to enhance national cyber-resilience, as announced in the King’s Speech on 17 July 2024. The bill aims to strengthen defences and protect essential digital services, focusing on critical infrastructure providers and expanding the scope of current regulations.

The new legislation will introduce mandatory ransomware reporting, helping authorities better understand the scale of the threat and alerting them to potential attacks. It also grants new powers to regulators and extends the scope of existing regulations to include more digital services and supply chains. The initiative responds to heightened cyber threats, such as the recent high-profile cyber-attacks on the NHS and the Ministry of Defence.

According to Stuart Davey of Pinsent Masons, the bill builds on previous efforts to reform the UK’s NIS regime. Dominic Trott of Orange Cyberdefense emphasised the importance of updating the regulatory framework to protect supply chains, a significant threat vector for attackers. Martin Greenfield of Quod Orbis added that the bill would help the Labour government deliver on its promise to boost economic growth.

A separate Digital Information and Smart Data Bill will be introduced, incorporating many measures from the Data Protection and Digital Information Bill, which failed to pass in the last parliament. This move aims to create a more secure and prosperous digital economy.

New UK government considers AI regulation

Britain’s new Labour government plans to investigate how to regulate the most powerful AI models but has not yet proposed specific legislation. King Charles outlined Prime Minister Keir Starmer’s programme for government, which includes over 35 new bills covering various areas, including cybersecurity.

The government aims to establish appropriate laws for developing advanced AI models. Former Prime Minister Rishi Sunak positioned the UK as a leader in AI safety, hosting a summit at Bletchley Park and launching the world’s first AI Safety Institute. However, Sunak’s administration avoided targeted AI regulation, preferring a sector-by-sector approach.

Nathan Benaich from Air Street Capital noted that AI labs are relieved by the government’s cautious approach. Nevertheless, some experts, like Gaia Marcus from the Ada Lovelace Institute, argue that the rapid development of AI tools necessitates urgent legislation.

The UK’s careful approach to AI regulation contrasts with the EU’s more proactive stance, potentially offering the UK a competitive advantage. Starmer’s government remains committed to introducing new AI laws but is proceeding with caution.

UK investigates Microsoft over AI hiring concerns

British regulators have launched a preliminary investigation into Microsoft’s recent hiring spree from AI startup Inflection AI, and into associated arrangements between the two companies, over concerns that the move could hinder competition in the burgeoning AI market. Mustafa Suleyman, Inflection AI’s co-founder and CEO, joined Microsoft earlier this year along with several top engineers and researchers. Suleyman, also a co-founder of the AI research lab DeepMind, is a prominent figure in the AI industry.

The UK’s Competition and Markets Authority (CMA) is scrutinising whether these hirings might lead to a significant reduction in competition within the UK’s AI sector, potentially breaching antitrust regulations. Microsoft, however, maintains that the recruitment of talent fosters competition and should not be regarded as a merger. The company has pledged to cooperate with the CMA’s inquiry.

The CMA has a deadline of 11 September to decide whether to approve the hirings or escalate the investigation. The authority has the power to reverse deals or impose conditions to address any competition concerns. This investigation highlights the growing regulatory scrutiny over how major tech companies are acquiring talent and technology from innovative AI startups.

Across the Atlantic, US senators have urged antitrust enforcers to investigate Amazon’s deal with AI startup Adept. The senators noted similarities to the Microsoft-Inflection case, emphasising concerns over the potential elimination of major competitors in the AI market. These developments reflect a broader regulatory focus on maintaining competitive balance in the rapidly evolving AI industry.