US, EU, UK pledge to protect generative AI market fairness

Top competition authorities from the EU, UK, and US have issued a joint statement emphasising the importance of fair, open, and competitive markets in developing and deploying generative AI. Leaders from these regions, including Margrethe Vestager of the European Commission, Sarah Cardell of the UK Competition and Markets Authority, Jonathan Kanter of the US Department of Justice, and Lina M. Khan of the US Federal Trade Commission, highlighted their commitment to ensuring effective competition and protecting consumers and businesses from potential market abuses.

The officials recognise the transformational potential of AI technologies but stress the need to safeguard against risks that could undermine fair competition. These risks include the concentration of control over essential AI development inputs, such as specialised chips and vast amounts of data, and the possibility of large firms using their existing market power to entrench or extend their dominance in AI-related markets. The statement also warns against partnerships and investments that could stifle competition by allowing major firms to co-opt competitive threats.

The joint statement outlines several principles for protecting competition within the AI ecosystem, including fair dealing, interoperability, and maintaining choices for consumers and businesses. The authorities are particularly vigilant about the potential for AI to facilitate anti-competitive behaviours, such as price fixing or unfair exclusion. Additionally, they underscore the importance of consumer protection, ensuring that AI applications do not compromise privacy, security, or autonomy through deceptive or unfair practices.

UK government to introduce new cyber security bill

The UK government plans to introduce a Cyber Security and Resilience Bill to enhance national cyber-resilience, as announced in the King’s Speech on 17 July 2024. The bill aims to strengthen defences and protect essential digital services, focusing on critical infrastructure providers and expanding the scope of current regulations.

The new legislation will introduce mandatory ransomware reporting, helping authorities better understand the scale of the threat and alert them to potential attacks. It also grants new powers to regulators and extends the scope of existing regulations to include more digital services and supply chains. This initiative responds to heightened cyber threats, such as recent high-profile cyber-attacks on the NHS and the Ministry of Defence.

According to Stuart Davey of Pinsent Masons, the bill builds on previous efforts to reform the UK’s NIS regime. Dominic Trott of Orange Cyberdefense emphasised the importance of updating the regulatory framework to protect supply chains, a significant threat vector for attackers. Martin Greenfield of Quod Orbis added that the bill would help the Labour government deliver on its promise to boost economic growth.

A separate Digital Information and Smart Data Bill will be introduced, incorporating many measures from the Data Protection and Digital Information Bill, which failed to pass in the last parliament. This move aims to create a more secure and prosperous digital economy.

New UK government considers AI regulation

Britain’s new Labour government plans to investigate how to regulate the most powerful AI models but hasn’t proposed specific legislation yet. King Charles outlined Prime Minister Keir Starmer’s program for government, which includes over 35 new bills covering various areas, including cybersecurity.

The government aims to establish appropriate laws for developing advanced AI models. Former Prime Minister Rishi Sunak positioned the UK as a leader in AI safety, hosting a summit at Bletchley Park and launching the world’s first AI Safety Institute. However, Sunak’s administration avoided targeted AI regulation, preferring a sector-by-sector approach.

Nathan Benaich from Air Street Capital noted that AI labs are relieved by the government’s cautious approach. Nevertheless, some experts, like Gaia Marcus from the Ada Lovelace Institute, argue that the rapid development of AI tools necessitates urgent legislation.

The UK’s careful approach to AI regulation contrasts with the EU’s more proactive stance, potentially offering a competitive advantage. Starmer’s government remains committed to introducing new AI laws but is proceeding with caution.

UK investigates Microsoft over AI hiring concerns

British regulators have launched a preliminary investigation into Microsoft’s recent hiring spree from AI startup Inflection AI, and into its associated arrangements with the company, over concerns that this could hinder competition in the burgeoning AI market. Mustafa Suleyman, Inflection AI’s co-founder and CEO, along with several top engineers and researchers, joined Microsoft earlier this year. Suleyman, a co-founder of the AI research lab DeepMind, is a prominent figure in the AI industry.

The UK’s Competition and Markets Authority (CMA) is scrutinising whether these hirings might lead to a significant reduction in competition within the UK’s AI sector, potentially breaching antitrust regulations. Microsoft, however, maintains that the recruitment of talent fosters competition and should not be regarded as a merger. The company has pledged to cooperate with the CMA’s inquiry.

The CMA has a deadline of 11 September to decide whether to approve the hirings or escalate the investigation. The authority has the power to reverse deals or impose conditions to address any competition concerns. This investigation highlights the growing regulatory scrutiny over how major tech companies are acquiring talent and technology from innovative AI startups.

Across the Atlantic, US senators have urged antitrust enforcers to investigate Amazon’s deal with AI startup Adept. The senators noted similarities to the Microsoft-Inflection case, emphasizing concerns over the potential elimination of major competitors in the AI market. These developments reflect a broader regulatory focus on maintaining competitive balance in the rapidly evolving AI industry.

UK High Court dismisses Tesla lawsuit over 5G patent licence

Tesla’s attempt to secure a 5G patent licence in the UK has been dismissed by the High Court. The automaker sought the licence before its planned launch of 5G vehicles in Britain.

The lawsuit, filed against US technology firm InterDigital and the patent licensing platform Avanci, was thrown out on Monday. Tesla wanted the court to determine fair, reasonable, and non-discriminatory (FRAND) terms for using patents owned by InterDigital and licensed by Avanci.

Judge Timothy Fancourt ruled that Tesla’s bid for a licence must be dismissed. However, Tesla’s separate claim to revoke three of InterDigital’s patents will continue.

UK debates digital ID vs national ID cards

Tony Blair, former UK Prime Minister, is advocating for digital identity as a solution to manage irregular migration, a pressing issue in the recent UK elections. In a piece for The Times addressed to Prime Minister Keir Starmer, Blair proposes leveraging AI and digital ID systems to enhance border controls and immigration management.

Blair emphasises the need for a robust digital identity framework, suggesting it could replace traditional national ID cards. This approach, he argues, could ensure accurate identification without the need for centralised databases or government-issued cards, both of which have sparked controversy in the past.

Despite Blair’s advocacy, UK government officials, including Business Secretary Jonathan Reynolds, have been hesitant to reintroduce national ID cards. Instead, the government plans to establish a new enforcement and return unit to tackle illegal migration and smuggling rings.

The debate over digital ID versus national ID cards has historical roots, dating back to Blair’s earlier proposals in the 2000s. The issue resurfaced recently amidst concerns over illegal migration and the small boat crisis in the English Channel, prompting renewed discussions about the role of ID documents in modern immigration policies.

Why does this matter?

Advocates like the Open Identity Exchange stress that if implemented adequately through frameworks like the Digital Verification Service, digital ID systems could drive economic growth and improve service delivery in sectors beyond immigration, such as healthcare and education. Despite challenges, proponents argue that a secure, decentralised digital ID system could substantially benefit the UK’s digital economy and public services.

Examiners fooled as AI students outperform real students in the UK

In a groundbreaking study published in PLOS One, the University of Reading has unveiled startling findings from a real-world Turing test involving AI in university exams, raising profound implications for education.

The study, led by the university’s tech team, involved 33 fictitious student profiles using OpenAI’s GPT-4 to complete psychology assignments and exams online. Astonishingly, 94% of AI-generated submissions went undetected by examiners, outperforming their human counterparts by achieving higher grades on average.

Associate Professor Peter Scarfe, a co-author of the study, emphasised the urgent need for educational institutions to address the impact of AI on academic integrity. He highlighted a recent UNESCO survey revealing minimal global preparation for the use of generative AI in education, calling for a reassessment of assessment practices worldwide.

Professor Etienne Roesch, another co-author, underscored the importance of establishing clear guidelines on AI usage to maintain trust in educational assessments and beyond. She stressed the responsibility of both creators and consumers of information to uphold academic integrity amid AI advancements.

The study also pointed to ongoing challenges for educators in combating AI-driven academic misconduct, even as tools like Turnitin adapt to detect AI-authored work. Despite these challenges, educators like Professor Elizabeth McCrum, the University of Reading’s pro-vice chancellor of education, advocate for embracing AI as a tool for enhancing student learning and employability skills.

Looking ahead, Professor McCrum expressed confidence in the university’s proactive stance in integrating AI responsibly into educational practices, preparing students for a future shaped by rapid technological change.


AI and the UK election: Can ChatGPT influence the outcome?

With the UK heading to the polls, the role of AI in guiding voter decisions is under scrutiny. ChatGPT, a generative AI tool, has been tested on its ability to provide insights into the upcoming general election. Despite its powerful pattern-matching capabilities, experts emphasise its limitations and potential biases, given that AI tools rely on their training data and accessible online content.

When prompted about the likely outcome of the election, ChatGPT suggested a strong chance of a Labour victory based on current polling. However, AI predictions can be flawed, as demonstrated when a glitch led ChatGPT to incorrectly declare Labour the election winner before the vote had taken place. This incident prompted OpenAI to refine ChatGPT’s responses, ensuring more cautious and accurate outputs.

ChatGPT can help voters navigate party manifestos, outlining the priorities of major parties like Labour and the Conservatives. By summarising key points from multiple sources, the AI aims to provide balanced insights. Nevertheless, the psychological impact of AI-generated single answers remains a concern, as it could influence voter behaviour and election outcomes.

Why does it matter?

The use of AI for election guidance has sparked debates about its appropriateness and reliability. While AI can offer valuable information, it must be balanced with critical thinking and informed decision-making. As the election date approaches, voters are reminded that their choices hold significant weight, and participation in the democratic process is crucial.

UK’s CMA investigates Hewlett Packard over $14 billion acquisition of Juniper Networks

The UK’s Competition and Markets Authority (CMA) has opened an investigation into Hewlett Packard Enterprise’s (HPE) proposed $14 billion acquisition of Juniper Networks. The inquiry seeks to determine whether the acquisition might lead to competition issues within the UK market, with a deadline of 14 August to decide whether a more comprehensive probe is warranted.

In January, HPE, a US-based technology firm, announced its intention to purchase Juniper Networks to enhance HPE’s AI capabilities and expand its networking business. HPE anticipates doubling its networking operations through this acquisition, aligning with the broader industry trend known as the AI gold rush, where companies invest heavily to advance their technological offerings.

Why does it matter?

The CMA’s preliminary investigation points to potential regulatory concerns about reducing competition, focusing on UK market dynamics and consumer choices. If significant issues are identified by the August deadline, the CMA may thoroughly examine the merger.

The inquiry underscores the CMA’s role in maintaining fair competition and monitoring significant market transactions, especially in the rapidly evolving AI sector, to prevent monopolistic practices and ensure a balanced market environment.

Neither HPE nor Juniper Networks has commented, with US market operations closed for the Juneteenth holiday.

CMA accepts Meta’s updated UK privacy compliance proposals

Meta Platforms has agreed to limit the use of certain data from advertisers on its Facebook Marketplace as part of an updated proposal accepted by the UK’s Competition and Markets Authority (CMA). The commitments aim to prevent Meta from exploiting its advertising customers’ data. The initial commitments, accepted by the CMA in November, included allowing competitors to opt out of having their data used to enhance Facebook Marketplace.

The British competition regulator has provisionally accepted Meta’s updated changes and is now seeking feedback from interested parties, with the consultation period closing on 14 June. Details of any further amendments to Meta’s initial proposals in the UK have yet to be disclosed. The decision reflects a broader effort by regulators to ensure fair competition and prevent dominant platforms from misusing data.

In November, Amazon committed to avoiding the use of marketplace data from rival sellers, thereby promoting an even playing field for third-party sellers. Both cases highlight the increasing scrutiny of major tech companies regarding their data practices and market power, aiming to foster a more competitive and transparent digital marketplace.