AI and the UK election: Can ChatGPT influence the outcome?

With the UK heading to the polls, the role of AI in guiding voter decisions is under scrutiny. ChatGPT, a generative AI tool, has been tested on its ability to provide insights into the upcoming general election. Despite its powerful pattern-matching capabilities, experts emphasise its limitations and potential biases, given that AI tools rely on their training data and accessible online content.

When prompted about the likely outcome of the election, ChatGPT suggested a strong chance of a Labour victory in the UK based on current polling. However, AI predictions can be flawed, as demonstrated when a glitch led ChatGPT to prematurely and incorrectly declare Labour the election winner. The incident prompted OpenAI to refine ChatGPT’s responses, ensuring more cautious and accurate outputs.

ChatGPT can help voters navigate party manifestos, outlining the priorities of major parties like Labour and the Conservatives. By summarising key points from multiple sources, the AI aims to provide balanced insights. Nevertheless, the psychological impact of receiving a single, authoritative-sounding AI-generated answer remains a concern, as it could influence voter behaviour and election outcomes.

Why does it matter?

The use of AI for election guidance has sparked debates about its appropriateness and reliability. While AI can offer valuable information, it must be balanced with critical thinking and informed decision-making. As the election date approaches, voters are reminded that their choices hold significant weight, and participation in the democratic process is crucial.

UK’s CMA investigates Hewlett Packard over $14 billion acquisition of Juniper Networks

The UK’s Competition and Markets Authority (CMA) has opened an investigation into Hewlett Packard Enterprise’s (HPE) proposed $14 billion acquisition of Juniper Networks. The inquiry seeks to determine whether the acquisition might lead to competition issues within the UK market, with a deadline set for 14 August to decide whether a more comprehensive probe is warranted.

In January, HPE, a US-based technology firm, announced its intention to purchase Juniper Networks to enhance HPE’s AI capabilities and expand its networking business. HPE anticipates doubling its networking operations through this acquisition, aligning with the broader industry trend known as the AI gold rush, where companies invest heavily to advance their technological offerings.

Why does it matter?

The CMA’s preliminary investigation points to potential regulatory concerns about reduced competition, focusing on UK market dynamics and consumer choice. If significant issues are identified by the August deadline, the CMA may open an in-depth examination of the merger.

The inquiry underscores the CMA’s role in maintaining fair competition and monitoring significant market transactions, especially in the rapidly evolving AI sector, to prevent monopolistic practices and ensure a balanced market environment.

Neither HPE nor Juniper Networks has commented, reportedly owing to the Juneteenth holiday affecting market operations in the US.

CMA accepts Meta’s updated UK privacy compliance proposals

Meta Platforms has agreed to limit the use of certain data from advertisers on its Facebook Marketplace as part of an updated proposal accepted by the UK’s Competition and Markets Authority (CMA). The commitments aim to prevent Meta from exploiting its advertising customers’ data. The initial commitments, accepted by the CMA in November, included allowing competitors to opt out of having their data used to enhance Facebook Marketplace.

The British competition regulator has provisionally accepted Meta’s updated changes and is now seeking feedback from interested parties, with the consultation period closing on 14 June. Details of any further amendments to Meta’s initial proposals in the UK have yet to be disclosed. The decision reflects a broader effort by regulators to ensure fair competition and prevent dominant platforms from misusing data.

In November, Amazon committed to avoiding the use of marketplace data from rival sellers, thereby promoting a level playing field for third-party sellers. Both cases highlight the increasing scrutiny of major tech companies regarding their data practices and market power, aiming to foster a more competitive and transparent digital marketplace.

AI drives productivity surge in certain industries, report shows

A recent PwC (PricewaterhouseCoopers International Limited) report highlights that sectors of the global economy with high exposure to AI are experiencing significant productivity gains and wage increases. The study found that productivity growth in AI-intensive industries is nearly five times faster than in sectors with less AI integration. In the UK, job postings requiring AI skills are growing 3.6 times faster than other listings, with employers offering a 14% wage premium for these roles, particularly in legal and IT sectors.

Since the launch of ChatGPT in late 2022, AI’s impact on employment has been widely debated. However, PwC’s findings indicate that AI has influenced the job market for over a decade. Job postings for AI specialists have increased sevenfold since 2012, far outpacing the growth for other roles. The report suggests that AI is being used to address labour shortages, which could benefit countries with ageing populations and high worker demand.

PwC’s 2024 global AI jobs barometer reveals that the growth in AI-related employment contradicts fears of widespread job losses due to automation. Despite predictions of significant job reductions, the continued rise in AI-exposed occupations suggests that AI is creating new industries and transforming the job market. According to PwC UK’s chief economist, Barret Kupelian, as AI technology advances and spreads across more sectors, its potential economic impact could be transformative, marking only the beginning of its influence on productivity and employment.

UK launches cybersecurity law for smart devices to prevent hacking

Starting today, the UK is implementing consumer protection laws targeting cyber-attacks and hacking vulnerabilities in smart devices. This legislation, part of the Product Security and Telecommunications Infrastructure (PSTI) regime, mandates that all internet-connected devices—from smartphones to gaming consoles and smart fridges—adhere to strict security standards.

Manufacturers must eliminate weak default passwords like ‘admin’ or ‘12345’ and prompt users to change them upon device setup. The legal move aims to enhance the UK’s cyber-resilience, reflecting that 99% of UK adults now own at least one smart device, with the average household possessing nine.

Other key elements of the new legislation include banning common weak passwords, requiring manufacturers to provide clear contact information for reporting security issues and ensuring transparency about the duration of product security updates. By implementing these standards, the UK seeks to enhance consumer confidence, stimulate economic growth, and position itself as a leader in online safety.
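To illustrate what the default-password requirement implies in practice, the sketch below shows how a device’s first-run setup might reject banned defaults and force the user to pick a stronger password. The blocklist and length rule here are hypothetical examples for illustration only, not the statutory criteria set out in the legislation.

```python
# Illustrative sketch of a PSTI-style first-boot password check.
# The banned list and minimum length are hypothetical examples,
# not the actual legal requirements.

BANNED_DEFAULTS = {"admin", "password", "default", "12345", "123456"}

def is_acceptable_password(candidate: str) -> bool:
    """Return True if the candidate clears basic setup-time checks."""
    if candidate.lower() in BANNED_DEFAULTS:
        return False   # rejected: on the common-defaults blocklist
    if len(candidate) < 8:
        return False   # rejected: too short to resist guessing
    return True

# A setup flow would loop until the user supplies a passing password:
assert not is_acceptable_password("admin")
assert not is_acceptable_password("12345")
assert is_acceptable_password("correct-horse-battery")
```

A real manufacturer would layer further checks (rate limiting, update transparency, a vulnerability-reporting contact) on top of this, per the other elements of the legislation described above.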

Why does it matter?

The legislation responds to vulnerabilities exposed by significant cyber incidents, such as the 2016 Mirai attack, which compromised 300,000 smart products and disrupted internet services across the US East Coast. Similar incidents have since affected major UK banks such as Lloyds and RBS, which prompted the government to work on robust cybersecurity measures.

UK draft report questions Google’s Privacy Sandbox

A draft report from the UK Information Commissioner’s Office (ICO) raises concerns about Google’s Privacy Sandbox, which is aimed at preserving privacy in online ad targeting and analytics. The report highlights gaps that could be exploited to compromise privacy and track individuals online. This technology seeks to replace current tracking methods with more privacy-conscious alternatives, but its credibility hinges on its ability to deliver privacy assurances.

If Google’s Privacy Sandbox fails to address regulatory, community, and competitive challenges, it could collapse, leaving adtech rivals to continue tracking users through existing or alternative methods. The ICO report represents another setback for Google’s attempts to reconcile ad targeting with privacy laws like GDPR. Google’s strategy involves moving ad auction mechanics to users’ local devices through web APIs, such as the Topics API in Chrome, which aims to convey user interests to advertisers without identifying individuals.

Critics, including the Electronic Frontier Foundation and rival browser maker Vivaldi, have raised concerns about the Privacy Sandbox’s support for behavioural advertising and its reliance on advertisers’ good behaviour rather than technical guarantees for privacy. Given Google’s market dominance and significant revenue tied to online advertising, scepticism persists about rebuilding ad architecture on its platforms. Both regulators and industry groups like the IAB have expressed concerns about the Privacy Sandbox’s potential competitive disadvantages and limitations, suggesting that Google may need to address these issues before proceeding.

Despite challenges and criticism, Google remains committed to Privacy Sandbox technologies, emphasising their aim to enhance privacy while maintaining targeted advertising. The company continues to engage with regulators and stakeholders to address concerns and ensure a solution that benefits users and the entire advertising ecosystem.

UK bans sex offender from AI tools after child abuse conviction

A convicted sex offender in the UK has been banned from using ‘AI-creating tools’ for five years, marking the first known case of its kind. Anthony Dover, 48, received the prohibition as part of a sexual harm prevention order, preventing him from accessing AI generation tools without prior police permission. This includes text-to-image generators and ‘nudifying’ websites used to produce explicit deepfake content.

Dover’s case highlights the increasing concern over the proliferation of AI-generated sexual abuse imagery, prompting government action. The UK recently introduced a new offence making it illegal to create sexually explicit deepfakes of adults without consent, with penalties including prosecution and unlimited fines. The move aims to address the evolving landscape of digital exploitation and safeguard individuals from the misuse of advanced technology.

Charities and law enforcement agencies emphasise the urgent need for collaboration to combat the spread of AI-generated abuse material. Recent prosecutions reveal a growing trend of offenders exploiting AI tools to create highly realistic and harmful content. The Internet Watch Foundation (IWF) and the Lucy Faithfull Foundation (LFF) stress the importance of targeting both offenders and tech companies to prevent the production and dissemination of such material.

Why does it matter?

The decision to restrict an adult sex offender’s access to AI tools sets a precedent for future monitoring and prevention measures. While the specific reasons for Dover’s ban remain unclear, it underscores the broader effort to mitigate the risks posed by digital advancements in sexual exploitation. Law enforcement agencies are increasingly adopting proactive measures to address emerging threats and protect vulnerable individuals from harm in the digital age.

Companies in UK reconsider facial recognition amid regulatory clampdown

A wave of reconsideration is sweeping across UK businesses as they reassess the use of facial recognition technology and fingerprint scanning for staff attendance monitoring. This shift comes in response to a clampdown by the Information Commissioner’s Office (ICO), which recently ordered a Serco subsidiary to cease using biometrics for attendance tracking at leisure centres it manages.

The ICO’s directive followed its discovery that over 2,000 employees’ biometric data had been unlawfully processed across 38 Serco-managed leisure centres. As a result, Serco has been granted a three-month window to align its systems with the ICO’s compliance standards.

In the wake of the ICO’s ruling, various leisure centre operators and corporations are either reviewing or halting the use of similar biometric technologies. Notable among them is Virgin Active, which has removed biometric scanners from 32 sites and is actively seeking alternative attendance monitoring solutions for its staff.

Why does it matter?

The ICO’s intervention underscores broader concerns regarding the increasing prevalence of facial recognition and surveillance tools in employment contexts. The scrutiny extends beyond leisure centres, as highlighted by a recent case involving an Uber Eats driver who received a financial settlement over allegations of racially discriminatory facial recognition checks. These developments highlight the urgent need for robust regulations to safeguard workers’ rights in the age of AI and automated processes.

UK treasury to introduce regulatory framework for crypto assets and stablecoins in July

The United Kingdom Treasury is set to introduce a regulatory framework for crypto assets and stablecoins by July, with the aim of promoting local innovation in digital assets and blockchain technology. Bim Afolami, the UK’s economic secretary to the Treasury, emphasised the importance of crypto regulations in maintaining global competitiveness in fintech. The regulatory framework seeks to strike a balance between fostering innovation and safeguarding consumers.

The Treasury is currently finalising proposals for regulations on stablecoins and crypto staking, which are expected to be delivered in June or July. Once implemented, various activities involving crypto assets, such as operating exchanges and taking custody of customer assets, will come under regulatory oversight for the first time.

During the Innovate Finance Global Summit, Afolami also announced the establishment of an open finance task force. This task force will provide specific recommendations on the necessary data sets and commercial incentives to drive the use case for open finance, with a particular focus on SME lending.

In addition to the regulatory framework, the UK is enacting a new law that grants authorities the power to confiscate crypto assets directly from exchanges and custodian wallet providers. The measure, effective from 26 April, aims to address economic crime and illicit activities and amends the Economic Crime and Corporate Transparency Act 2023. Authorities may also destroy seized tokens; burning a token, which permanently removes it from circulation, is one common method.

UK invests £55.5 million in facial recognition to combat retail crime

UK Prime Minister Rishi Sunak has announced a substantial investment of £55.5 million over four years in facial recognition technology, which aims to combat retail crime by identifying repeated shoplifters.

The initiative, part of a broader crackdown on theft, includes deploying bespoke mobile units equipped with live facial recognition capabilities across high streets nationwide. While controversial, its deployment has resulted in numerous arrests, primarily for offences ranging from theft to assault. However, concerns persist regarding privacy and false positives.

Despite criticism from privacy advocates like Big Brother Watch, Home Secretary James Cleverly emphasises the technology’s preventative nature, while the Metropolitan Police views it as a transformative tool in law enforcement. The Office of the Scottish Biometrics Commissioner noted that careful deployment is needed to maintain public confidence.

Why does it matter?

The development emerged months after Scotland’s biometrics commissioner, Brian Plastow, raised concerns about the trajectory towards autocracy driven by inappropriate use of biometric surveillance in the UK. While supporting specific biometric surveillance applications, like live facial recognition, he critiques government overreach and highlights risks such as database misuse and privacy erosion. Plastow’s concerns are exemplified by incidents like the arrest of an eight-months-pregnant woman for failing to report for community service. While Scotland may resist England’s path towards a surveillance state, the stance of Wales remains uncertain.