Elon Musk under fire as social media giant X implicated in fuelling UK riots

Elon Musk is under fire for his social media posts, which many believe have exacerbated the ongoing riots in Britain. Musk, known for his provocative online presence, has shared riot footage on his platform, X, and made controversial remarks, including predicting a ‘civil war’ and criticising Prime Minister Keir Starmer and the British government for prioritising speech policing over community safety.

The unrest began after a stabbing at a Taylor Swift-themed dance class in Southport, England, resulted in the deaths of three young girls. False information spread online alleged that the attacker was a Muslim immigrant who had entered the country illegally. In fact, the suspect, Axel Rudakubana, is a 17-year-old born in Cardiff, Wales; his religious affiliation is unknown, though his parents are from predominantly Christian Rwanda.

Despite the facts, anti-immigrant protests have erupted in at least 15 cities across Britain, leading to the most significant civil disorder since 2011. Rioters have targeted mosques and hotels housing asylum seekers, with much violence directed at the police.

Prime Minister Starmer has criticised social media companies for allowing violent disinformation to spread. He specifically called out Musk for reinstating banned far-right figures, including activist Tommy Robinson. Technology Secretary Peter Kyle has met with representatives from major tech companies like TikTok, Meta, Google, and X to stress their duty to curb the spread of harmful misinformation.

Publicly, Musk has argued that the government should focus on its duties, mocking Starmer and questioning the UK’s approach to policing speech.

Home Secretary Yvette Cooper has stated that social media has amplified disinformation, promising government action against tech giants and online criminality. However, Britain’s Online Safety Act, which requires platforms to address illegal content, will not take full effect until next year. Meanwhile, the EU’s Digital Services Act, which no longer applies to Britain, is already in force.

UK scrutinises Google-Alphabet AI deal

Britain’s antitrust watchdog is examining Google-parent Alphabet’s partnership with AI startup Anthropic to assess its impact on market competition. The scrutiny comes amid growing global concerns about the influence of major tech companies on the AI industry following the AI boom sparked by Microsoft-backed OpenAI’s release of ChatGPT.

Regulators are scrutinising deals between Big Tech and AI startups, including Microsoft’s collaborations with OpenAI, Inflection AI, and Mistral AI, as well as Alphabet’s investments in companies like Anthropic and Cohere. Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, develops AI models that compete with OpenAI’s GPT series.

Last week, the UK’s Competition and Markets Authority (CMA) joined forces with US and EU regulators to ensure fair competition in the AI sector. The CMA is now inviting public comments on the Alphabet-Anthropic partnership until 13 August before deciding whether to initiate a formal investigation. The CMA’s decision will be based on feedback received during this initial consultation.

Personal data of 40 million voters exposed in UK hack

The UK’s Electoral Commission has faced criticism for failing to safeguard the personal data of 40 million voters following an extensive breach that occurred in August 2021 but was only discovered in October 2022. The Information Commissioner’s Office (ICO) attributed the breach to the Electoral Commission’s outdated security systems, including unpatched servers and inadequate password management.

The previous Conservative government attributed the breach to Chinese hackers, prompting diplomatic tensions and sanctions from the US and its allies, including the UK and New Zealand. Despite these allegations, there is no confirmed evidence that the stolen data has been misused.

In response to the incident, the Electoral Commission has overhauled its security measures, updating its infrastructure and implementing stricter password controls and multi-factor authentication. The Commission says cybersecurity experts have validated these new measures.

China has consistently denied any wrongdoing, and the UK’s Labour Party has vowed to take a stronger stance on cyber threats and interference in British democracy. Labour plans to audit UK-China relations and introduce new cybersecurity legislation to enhance national resilience against future attacks.

US, EU, UK pledge to protect generative AI market fairness

Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.

The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.

The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.

UK government to introduce new cyber security bill

The UK government plans to introduce a Cyber Security and Resilience Bill to enhance national cyber-resilience, as announced in the King’s Speech on 17 July 2024. The bill aims to strengthen defences and protect essential digital services, focusing on critical infrastructure providers and expanding the scope of current regulations.

The new legislation will introduce mandatory ransomware reporting, helping authorities better understand the scale of the threat and alert them to potential attacks. It also grants new powers to regulators and extends the scope of existing regulations to include more digital services and supply chains. This initiative responds to heightened cyber threats, such as recent high-profile cyber-attacks on the NHS and the Ministry of Defence.

According to Stuart Davey of Pinsent Masons, the bill builds on previous efforts to reform the UK’s NIS regime. Dominic Trott of Orange Cyberdefense emphasised the importance of updating the regulatory framework to protect supply chains, a significant threat vector for attackers. Martin Greenfield of Quod Orbis added that the bill would help the Labour government deliver on its promise to boost economic growth.

A separate Digital Information and Smart Data Bill will be introduced, incorporating many measures from the Data Protection and Digital Information Bill, which failed to pass in the last parliament. This move aims to create a more secure and prosperous digital economy.

New UK government considers AI regulation

Britain’s new Labour government plans to investigate how to regulate the most powerful AI models but hasn’t proposed specific legislation yet. King Charles outlined Prime Minister Keir Starmer’s program for government, which includes over 35 new bills covering various areas, including cybersecurity.

The government aims to establish appropriate laws for developing advanced AI models. Former Prime Minister Rishi Sunak positioned the UK as a leader in AI safety, hosting a summit at Bletchley Park and launching the world’s first AI Safety Institute. However, Sunak’s administration avoided targeted AI regulation, preferring a sector-by-sector approach.

Nathan Benaich from Air Street Capital noted that AI labs are relieved by the government’s cautious approach. Nevertheless, some experts, like Gaia Marcus from the Ada Lovelace Institute, argue that the rapid development of AI tools necessitates urgent legislation.

The UK’s careful approach to AI regulation contrasts with the EU’s more proactive stance, potentially offering a competitive advantage. Starmer’s government remains committed to introducing new AI laws but is proceeding with caution.

UK investigates Microsoft over AI hiring concerns

British regulators have launched a preliminary investigation into Microsoft’s recent hiring spree from AI startup Inflection AI and its associated arrangements with the startup, over concerns that these could hinder competition in the burgeoning AI market. Mustafa Suleyman, Inflection AI’s co-founder and CEO, along with several top engineers and researchers, joined Microsoft earlier this year. Suleyman, a co-founder of the AI research lab DeepMind, is a prominent figure in the AI industry.

The UK’s Competition and Markets Authority (CMA) is scrutinising whether these hirings might lead to a significant reduction in competition within the UK’s AI sector, potentially breaching antitrust regulations. Microsoft, however, maintains that the recruitment of talent fosters competition and should not be regarded as a merger. The company has pledged to cooperate with the CMA’s inquiry.

The CMA has a deadline of 11 September to decide whether to approve the hirings or escalate the investigation. The authority has the power to reverse deals or impose conditions to address any competition concerns. This investigation highlights the growing regulatory scrutiny over how major tech companies are acquiring talent and technology from innovative AI startups.

Across the Atlantic, US senators have urged antitrust enforcers to investigate Amazon’s deal with AI startup Adept. The senators noted similarities to the Microsoft-Inflection case, emphasizing concerns over the potential elimination of major competitors in the AI market. These developments reflect a broader regulatory focus on maintaining competitive balance in the rapidly evolving AI industry.

UK High Court dismisses Tesla’s 5G patent licence lawsuit

Tesla’s attempt to secure a 5G patent licence in the UK has been dismissed by the High Court. The automaker sought the licence before its planned launch of 5G vehicles in Britain.

The lawsuit, filed against US technology firm InterDigital and the patent licensing platform Avanci, was thrown out on Monday. Tesla wanted the court to determine fair, reasonable, and non-discriminatory (FRAND) terms for using patents owned by InterDigital and licensed by Avanci.

Judge Timothy Fancourt ruled that Tesla’s bid for a licence must be dismissed. However, Tesla’s separate claim to revoke three of InterDigital’s patents will continue.

UK debates digital ID vs national ID cards

Tony Blair, former UK Prime Minister, is advocating for digital identity as a solution to manage irregular migration, a pressing issue in the recent UK elections. In a piece for The Times addressed to Prime Minister Keir Starmer, Blair proposes leveraging AI and digital ID systems to enhance border controls and immigration management.

Blair emphasises the need for a robust digital identity framework, suggesting it could replace traditional national ID cards. This approach, he argues, could ensure accurate identification without the need for centralised databases or government-issued cards, which have sparked controversy in the past.

Despite Blair’s advocacy, UK government officials, including Business Secretary Jonathan Reynolds, have been hesitant to reintroduce national ID cards. Instead, the government plans to establish a new enforcement and return unit to tackle illegal migration and smuggling rings.

The debate over digital ID versus national ID cards has historical roots, dating back to Blair’s earlier proposals in the 2000s. The issue resurfaced recently amidst concerns over illegal migration and the small boat crisis in the English Channel, prompting renewed discussions about the role of ID documents in modern immigration policies.

Why does this matter?

Advocates like the Open Identity Exchange stress that if implemented adequately through frameworks like the Digital Verification Service, digital ID systems could drive economic growth and improve service delivery in sectors beyond immigration, such as healthcare and education. Despite challenges, proponents argue that a secure, decentralised digital ID system could substantially benefit the UK’s digital economy and public services.

Examiners fooled as AI students outperform real students in the UK

In a groundbreaking study published in PLOS One, the University of Reading has unveiled startling findings from a real-world Turing test involving AI in university exams, raising profound implications for education.

The study, led by the university’s tech team, involved 33 fictitious student profiles that used OpenAI’s GPT-4 to complete psychology assignments and exams online. Astonishingly, 94% of the AI-generated submissions went undetected by examiners, and they achieved higher grades on average than those of real students.

Associate Professor Peter Scarfe, a co-author of the study, emphasised the urgent need for educational institutions to address the impact of AI on academic integrity. He highlighted a recent UNESCO survey revealing minimal global preparation for the use of generative AI in education, calling for a reassessment of assessment practices worldwide.

Professor Etienne Roesch, another co-author, underscored the importance of establishing clear guidelines on AI usage to maintain trust in educational assessments and beyond. Roesch stressed the responsibility of both creators and consumers of information to uphold academic integrity amid AI advancements.

The study also pointed to ongoing challenges for educators in combating AI-driven academic misconduct, even as tools like Turnitin adapt to detect AI-authored work. Despite these challenges, educators like Professor Elizabeth McCrum, the University of Reading’s pro-vice chancellor of education, advocate for embracing AI as a tool for enhancing student learning and employability skills.

Looking ahead, Professor McCrum expressed confidence in the university’s proactive stance in integrating AI responsibly into educational practices, preparing students for a future shaped by rapid technological change.
