The UK High Court dismissed Tesla’s lawsuit for a 5G patent licence

Tesla’s attempt to secure a 5G patent licence in the UK has been dismissed by the High Court. The automaker sought the licence before its planned launch of 5G vehicles in Britain.

The lawsuit, filed against US technology firm InterDigital and the patent licensing platform Avanci, was thrown out on Monday. Tesla wanted the court to determine fair, reasonable, and non-discriminatory (FRAND) terms for using patents owned by InterDigital and licensed by Avanci.

Judge Timothy Fancourt ruled that Tesla’s bid for a licence must be dismissed. However, Tesla’s separate claim to revoke three of InterDigital’s patents will continue.

UK debates digital ID vs national ID cards

Tony Blair, former UK Prime Minister, is advocating for digital identity as a solution to manage irregular migration, a pressing issue in the recent UK elections. In a piece for The Times addressed to Prime Minister Keir Starmer, Blair proposes leveraging AI and digital ID systems to enhance border controls and immigration management.

Blair emphasises the need for a robust digital identity framework, suggesting it could replace traditional national ID cards. This approach, he argues, could ensure accurate identification without the need for centralised databases or government-issued cards, both of which have sparked controversy in the past.

Despite Blair’s advocacy, UK government officials, including Business Secretary Jonathan Reynolds, have been hesitant to reintroduce national ID cards. Instead, the government plans to establish a new enforcement and return unit to tackle illegal migration and smuggling rings.

The debate over digital ID versus national ID cards has historical roots, dating back to Blair’s earlier proposals in the 2000s. The issue resurfaced recently amidst concerns over illegal migration and the small boat crisis in the English Channel, prompting renewed discussions about the role of ID documents in modern immigration policies.

Why does this matter?

Advocates like the Open Identity Exchange stress that if implemented adequately through frameworks like the Digital Verification Service, digital ID systems could drive economic growth and improve service delivery in sectors beyond immigration, such as healthcare and education. Despite challenges, proponents argue that a secure, decentralised digital ID system could substantially benefit the UK’s digital economy and public services.

Examiners fooled as AI students outperform real students in the UK

In a groundbreaking study published in PLOS One, the University of Reading has unveiled startling findings from a real-world Turing test involving AI in university exams, raising profound implications for education.

The study, led by the university’s tech team, involved 33 fictitious student profiles using OpenAI’s GPT-4 to complete psychology assignments and exams online. Astonishingly, 94% of the AI-generated submissions went undetected by examiners, and on average they achieved higher grades than those of real students.

Associate Professor Peter Scarfe, a co-author of the study, emphasised the urgent need for educational institutions to address the impact of AI on academic integrity. He highlighted a recent UNESCO survey revealing minimal global preparation for the use of generative AI in education, calling for a reassessment of assessment practices worldwide.

Professor Etienne Roesch, another co-author, underscored the importance of establishing clear guidelines on AI usage to maintain trust in educational assessments and beyond. She stressed the responsibility of both creators and consumers of information to uphold academic integrity amid AI advancements.

The study also pointed to ongoing challenges for educators in combating AI-driven academic misconduct, even as tools like Turnitin adapt to detect AI-authored work. Despite these challenges, educators like Professor Elizabeth McCrum, the University of Reading’s pro-vice chancellor of education, advocate for embracing AI as a tool for enhancing student learning and employability skills.

Looking ahead, Professor McCrum expressed confidence in the university’s proactive stance in integrating AI responsibly into educational practices, preparing students for a future shaped by rapid technological change.


AI and the UK election: Can ChatGPT influence the outcome?

With the UK heading to the polls, the role of AI in guiding voter decisions is under scrutiny. ChatGPT, a generative AI tool, has been tested on its ability to provide insights into the upcoming general election. Despite its powerful pattern-matching capabilities, experts emphasise its limitations and potential biases, given that AI tools rely on their training data and accessible online content.

ChatGPT suggested a strong chance of a Labour victory in the UK based on current polling when prompted about the likely outcomes of the election. However, AI’s predictions can be flawed, as demonstrated when a glitch led ChatGPT to prematurely and incorrectly declare Labour the winner of the election. The incident prompted OpenAI to refine ChatGPT’s responses, ensuring more cautious and accurate outputs.

ChatGPT can help voters navigate party manifestos, outlining the priorities of major parties like Labour and the Conservatives. By summarising key points from multiple sources, the AI aims to provide balanced insights. Nevertheless, the psychological impact of AI-generated single answers remains a concern, as it could influence voter behaviour and election outcomes.

Why does it matter?

The use of AI for election guidance has sparked debates about its appropriateness and reliability. While AI can offer valuable information, its use must be balanced with critical thinking and informed decision-making. As the election date approaches, voters are reminded that their choices carry significant weight and that participation in the democratic process is crucial.

UK’s CMA investigates Hewlett Packard over $14 billion acquisition of Juniper Networks

The UK’s Competition and Markets Authority (CMA) has opened an investigation into Hewlett Packard Enterprise’s (HPE) proposed $14 billion acquisition of Juniper Networks. The inquiry seeks to determine whether the acquisition might lead to competition issues within the UK market, with a deadline of 14 August to decide whether a more comprehensive probe is warranted.

In January, HPE, a US-based technology firm, announced its intention to purchase Juniper Networks to enhance HPE’s AI capabilities and expand its networking business. HPE anticipates doubling its networking operations through this acquisition, aligning with the broader industry trend known as the AI gold rush, where companies invest heavily to advance their technological offerings.

Why does it matter?

The CMA’s preliminary investigation points to potential regulatory concerns about reduced competition, focusing on UK market dynamics and consumer choice. If significant issues are identified by the August deadline, the CMA may open an in-depth examination of the merger.

The investigation underscores the CMA’s role in maintaining fair competition and monitoring significant market transactions, especially in the rapidly evolving AI sector, to prevent monopolistic practices and ensure a balanced market environment.

Neither HPE nor Juniper Networks has commented; requests for comment coincided with the Juneteenth holiday, which affected market operations in the US.

CMA accepts Meta’s updated UK privacy compliance proposals

Meta Platforms has agreed to limit its use of certain data from advertisers on its Facebook Marketplace as part of an updated proposal accepted by the UK’s Competition and Markets Authority (CMA). The proposal aims to prevent Meta from exploiting its advertising customers’ data. The initial commitments, accepted by the CMA in November, included allowing competitors to opt out of having their data used to enhance Facebook Marketplace.

The British competition regulator has provisionally accepted Meta’s updated changes and is now seeking feedback from interested parties, with the consultation period closing on 14 June. Details of any further amendments to Meta’s initial proposals in the UK have yet to be disclosed. The decision reflects a broader effort by regulators to ensure fair competition and prevent dominant platforms from misusing data.

In November, Amazon committed to avoiding the use of marketplace data from rival sellers, thereby promoting a level playing field for third-party sellers. Both cases highlight the increasing scrutiny of major tech companies’ data practices and market power, aiming to foster a more competitive and transparent digital marketplace.

AI drives productivity surge in certain industries, report shows

A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high exposure to AI are experiencing significant productivity gains and wage increases. The study found that productivity growth in AI-intensive industries is nearly five times faster than in sectors with less AI integration. In the UK, job postings requiring AI skills are growing 3.6 times faster than other listings, with employers offering a 14% wage premium for these roles, particularly in legal and IT sectors.

Since the launch of ChatGPT in late 2022, AI’s impact on employment has been widely debated. However, PwC’s findings indicate that AI has influenced the job market for over a decade. Job postings for AI specialists have increased sevenfold since 2012, far outpacing the growth for other roles. The report suggests that AI is being used to address labour shortages, which could benefit countries with ageing populations and high worker demand.

PwC’s 2024 global AI jobs barometer reveals that the growth in AI-related employment contradicts fears of widespread job losses due to automation. Despite predictions of significant job reductions, the continued rise in AI-exposed occupations suggests that AI is creating new industries and transforming the job market. According to PwC UK’s chief economist, Barret Kupelian, as AI technology advances and spreads across more sectors, its potential economic impact could be transformative, marking only the beginning of its influence on productivity and employment.

UK launches cybersecurity law for smart devices to prevent hacking

Starting today, the UK is implementing consumer protection laws targeting cyber-attacks and hacking vulnerabilities in smart devices. This legislation, part of the Product Security and Telecommunications Infrastructure (PSTI) regime, mandates that all internet-connected devices—from smartphones to gaming consoles and smart fridges—adhere to strict security standards.

Manufacturers must eliminate weak default passwords like ‘admin’ or ‘12345’ and prompt users to change them upon device setup. The legal move aims to enhance the UK’s cyber-resilience at a time when 99% of UK adults own at least one smart device and the average household possesses nine.

Other key elements of the new legislation include banning common weak passwords, requiring manufacturers to provide clear contact information for reporting security issues and ensuring transparency about the duration of product security updates. By implementing these standards, the UK seeks to enhance consumer confidence, stimulate economic growth, and position itself as a leader in online safety.
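The ban on universal weak defaults can be illustrated with a short sketch. The denylist and length threshold below are hypothetical assumptions for illustration, not values drawn from the PSTI legislation itself:

```python
# Hypothetical sketch of the kind of default-credential check the new rules
# imply: reject universal weak defaults shipped identically on every unit.
# The denylist and the 8-character minimum are illustrative assumptions.
WEAK_DEFAULTS = {"admin", "password", "12345", "123456", "default", "root"}

def acceptable_default_password(password: str) -> bool:
    """Return True if a factory-set password is not an obvious weak default."""
    candidate = password.strip().lower()
    return candidate not in WEAK_DEFAULTS and len(candidate) >= 8

print(acceptable_default_password("admin"))        # False: on the denylist
print(acceptable_default_password("Zq7!pX2#kL"))   # True: unique and long enough
```

In practice, manufacturers would also need per-device unique passwords and a forced change at first setup, as the legislation describes.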

Why does it matter?

The legislation responds to vulnerabilities exposed by significant cyber incidents, such as the 2016 Mirai attack, which compromised 300,000 smart products and disrupted internet services across the US East Coast. Similar incidents have since affected major UK banks such as Lloyds and RBS, which prompted the government to work on robust cybersecurity measures.

UK draft report questions Google’s Privacy Sandbox

A draft report from the UK Information Commissioner’s Office (ICO) raises concerns about Google’s Privacy Sandbox, which is aimed at preserving privacy in online ad targeting and analytics. The report highlights gaps that could be exploited to compromise privacy and track individuals online. This technology seeks to replace current tracking methods with more privacy-conscious alternatives, but its credibility hinges on its ability to deliver privacy assurances.

If Google’s Privacy Sandbox fails to address regulatory, community, and competitive challenges, it could collapse, leaving adtech rivals to continue tracking users through existing or alternative methods. The ICO report represents another setback for Google’s attempts to reconcile ad targeting with privacy laws like GDPR. Google’s strategy involves moving ad auction mechanics to users’ local devices through web APIs, such as the Topics API in Chrome, which aims to convey user interests to advertisers without identifying individuals.

Critics, including the Electronic Frontier Foundation and rival browser maker Vivaldi, have raised concerns about the Privacy Sandbox’s support for behavioural advertising and its reliance on advertisers’ good behaviour rather than technical guarantees for privacy. Given Google’s market dominance and significant revenue tied to online advertising, scepticism persists about rebuilding ad architecture on its platforms. Both regulators and industry groups like the IAB have expressed concerns about the Privacy Sandbox’s potential competitive disadvantages and limitations, suggesting that Google may need to address these issues before proceeding.

Despite challenges and criticism, Google remains committed to Privacy Sandbox technologies, emphasising their aim to enhance privacy while maintaining targeted advertising. The company continues to engage with regulators and stakeholders to address concerns and ensure a solution that benefits users and the entire advertising ecosystem.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.