Study warns of AI’s role in fueling bank runs

A new study from the UK has raised concerns about the risks of bank runs fueled by AI-generated fake news spread on social media. The research, published by Say No to Disinfo and Fenimore Harper, highlights how generative AI can create false stories or memes suggesting that bank deposits are at risk, leading to panic withdrawals. The study found that a significant portion of UK bank customers would consider moving their money after seeing such disinformation, especially with the speed at which funds can be transferred through online banking.

The issue is gaining traction globally, with regulators and banks worried about the growing role of AI in spreading malicious content. Following the collapse of Silicon Valley Bank in 2023, which saw $42 billion in withdrawals within a day, financial institutions are increasingly focused on detecting disinformation that could trigger similar crises. The study estimates that a small investment in social media ads promoting fake content could cause millions in deposit withdrawals.

The report calls for banks to enhance their monitoring systems, integrating social media tracking with withdrawal monitoring to better identify when disinformation is affecting customer behaviour. Revolut, a UK fintech, has already implemented real-time monitoring for emerging threats and has urged other financial institutions to prepare for such risks. While banks remain optimistic about AI’s potential, the financial stability challenges it poses remain a growing concern for regulators.
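
The integration the report describes can be pictured as a simple correlation check: flag moments when a surge in bank-related social media chatter coincides with a surge in outflows. The sketch below is purely illustrative (the function names, window data, and threshold are assumptions, not anything from the study), but it shows the basic idea of scoring both signals against their recent baselines.

```python
from statistics import mean, stdev

def zscore(history, value):
    """Standard score of the latest value against a trailing window."""
    return (value - mean(history)) / stdev(history)

def disinfo_alert(mention_history, withdrawal_history,
                  mentions_now, withdrawals_now, threshold=3.0):
    """Flag when a spike in bank-related social media mentions
    coincides with a spike in withdrawal volume (both > threshold
    standard deviations above their trailing baselines)."""
    return (zscore(mention_history, mentions_now) > threshold and
            zscore(withdrawal_history, withdrawals_now) > threshold)

# Quiet baseline vs. a simultaneous surge in both signals
mentions = [100, 110, 95, 105, 90, 102, 98]   # posts per hour
outflows = [5.0, 5.2, 4.9, 5.1, 4.8, 5.0, 5.1]  # £m per hour
print(disinfo_alert(mentions, outflows, 104, 5.05))  # False: no alert
print(disinfo_alert(mentions, outflows, 900, 25.0))  # True: alert
```

A production system would need far more than a z-score (deduplication, bot detection, per-channel baselines), but the design point stands: neither signal alone is conclusive, while the combination is a strong early indicator.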

As financial institutions work to mitigate AI-related risks, the broader industry is also grappling with how to balance the benefits of AI with the threats it may pose. UK Finance, the industry body, emphasised that banks are making efforts to manage these risks, while regulators continue to monitor the situation closely.

For more information on these topics, visit diplomacy.edu.

Anthropic’s Claude tested as UK explores AI chatbot for public services

The UK government has partnered with AI startup Anthropic to explore the use of its chatbot, Claude, in public services. The collaboration aims to improve access to public information and streamline interactions for citizens.

Anthropic, a competitor of ChatGPT creator OpenAI and supported by tech giants Google and Amazon, signed a memorandum of understanding with the government.

The initiative aligns with Prime Minister Keir Starmer’s ambition to establish the UK as a leader in AI and enhance public service efficiency through innovative technologies.

Technology minister Peter Kyle highlighted the importance of this partnership, emphasising its role in positioning the UK as a hub for advanced AI development.

Claude has already been employed by the European Parliament to simplify access to its archives, demonstrating its potential to reduce the time needed for document retrieval and analysis.

This step underscores Britain’s commitment to leveraging cutting-edge AI for the benefit of individuals and businesses nationwide.

AI development is outpacing our understanding, says expert

Dario Amodei, CEO of AI firm Anthropic, has warned that the race to develop AI is moving faster than efforts to fully understand it. Speaking at an event in Paris, he stressed the need for deeper research into AI models, describing it as a race between expanding capabilities and improving transparency. ‘We can’t slow down development, but our understanding must match our ability to build,’ he said.

Amodei rejected the notion that AI safety measures hinder progress, arguing instead that they help refine and improve models. He pointed to earlier discussions at the UK’s Bletchley Summit, where risk assessment strategies were introduced, and insisted they had not slowed technological growth. ‘Better testing and measurement actually lead to better models,’ he said.

The Anthropic CEO also discussed the evolving AI market, including competition from Chinese firm DeepSeek, whose claims of dramatically lower training costs he dismissed as ‘not based on facts.’ Looking ahead, he hinted at upcoming improvements in AI reasoning, with a focus on creating more seamless transitions between different types of models. He remains optimistic, predicting that AI will drive innovation across industries, from healthcare to finance and energy.

Apple granted UK authorities iCloud data in just 4 of 6,000 requests since 2020—excluding Investigatory Powers Act cases

Since 2020, Apple has provided iCloud data to UK authorities in response to four of more than 6,000 legal requests for customer information under non-IPA laws. This data excludes requests made under the Investigatory Powers Act (IPA), the UK’s primary law for accessing tech company data.

Apple’s transparency reports, covering January 2020 to June 2023, disclose IPA-related requests only in bands of 500; for the first half of 2023, Apple received between 0 and 499 such requests. Due to legal restrictions, Apple cannot disclose further details about these requests.

Earlier reporting linked the low number of content disclosures to efforts by the UK government to force Apple to provide encrypted iCloud data. However, given the limited detail in the published data, no direct connection can be established.

The UK government previously stated that it has made over 10,000 requests to US companies since the US-UK Data Access Agreement began, providing crucial data for law enforcement in cases related to terrorism, organized crime, and other serious offenses.

Apple’s transparency reports suggest that content data is shared more frequently in other countries, such as the US, where it responded to 22,306 requests in 2020-2023. In comparison, most countries see lower content disclosures due to restrictions on sharing with foreign governments.

The British government’s Technical Capability Notice (TCN), revealed by The Washington Post, follows Apple’s 2022 introduction of optional end-to-end encryption (E2EE) for iCloud. While the UK government did not characterise it as such, critics see the TCN as a potential ‘back door’ to Apple’s encrypted data. Apple has declined comment, while the UK government refrains from discussing operational matters.

The controversy reflects ongoing debates about the balance between encryption, privacy, and law enforcement access to encrypted data.

Motorola loses appeal over UK emergency services contract

Motorola has been denied permission to appeal against the UK competition regulator’s ruling that it was making excessive profits from its contract to provide communications for Britain’s emergency services. The Court of Appeal unanimously dismissed the company’s application, upholding the Competition and Markets Authority’s (CMA) decision to impose a price cap on Motorola’s Airwave network.

The CMA introduced the cap in July 2023, reducing the cost of the Airwave service to reflect a competitive market, cutting an estimated £200 million in annual charges. Motorola had previously challenged the regulator’s findings at a tribunal but was unsuccessful. CMA Executive Director George Lusty welcomed the court’s decision, stating it ensures fair pricing for emergency services and marks the end of the legal dispute.

A Motorola spokesperson defended the company’s role, emphasising that Airwave remains essential for UK public safety communications. Despite disagreeing with the CMA’s ruling, Motorola said it is focused on continuing to provide high-quality emergency communication services.

UK gambling websites breach data protection laws

Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, without proper consent, breaching data protection laws. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which then uses this information to profile individuals as gamblers. This data is then used to target users with gambling-related ads, violating the legal requirement for explicit consent before sharing such information.

Testing of 150 gambling websites revealed that 52 automatically transmitted user data to Meta, including large brands like Hollywoodbets, Sporting Index, and Bet442. This data sharing occurred without users having the opportunity to consent, resulting in targeted ads for gambling websites shortly after visiting these sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.
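
The tracking tool at issue is typically embedded as a small JavaScript snippet that loads Meta’s event library and reports page views and clicks back to Meta. The sketch below is a simplified illustration of how a page could be checked for such markers; it is not the methodology the testers used, and the sample HTML is invented for the example. The two markers shown (the `fbevents.js` loader and the `fbq()` tracking call) are the publicly documented components of the Meta Pixel.

```python
import re

# Markers associated with the Meta Pixel: the fbevents.js loader
# script and the fbq() tracking function it defines.
PIXEL_MARKERS = (
    re.compile(r"connect\.facebook\.net/[^\"']*fbevents\.js"),
    re.compile(r"\bfbq\(\s*['\"](init|track)['\"]"),
)

def has_meta_pixel(html: str) -> bool:
    """Return True if the page source contains Meta Pixel markers."""
    return any(p.search(html) for p in PIXEL_MARKERS)

sample = '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
print(has_meta_pixel(sample))               # True
print(has_meta_pixel("<p>no tracker</p>"))  # False
```

Because the pixel fires as soon as the page loads, data reaches Meta before any consent banner can be answered, which is precisely the consent gap the investigation highlights.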

The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.

The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.

Sony extends PlayStation Plus after global network disruption

PlayStation Plus subscribers will receive an automatic five-day extension after a global outage disrupted the PlayStation Network for around 18 hours on Friday and Saturday. Sony confirmed on Sunday that network services had been fully restored and apologised for the inconvenience but did not specify the cause of the disruption.

The outage, which started late on Friday, left users unable to sign in, play online games or access the PlayStation Store. By Saturday evening, Sony announced that services were back online. At its peak, Downdetector.com recorded nearly 8,000 affected users in the US and over 7,300 in the UK.

PlayStation Network plays a vital role in Sony’s gaming division, supporting millions of users worldwide. Previous disruptions have been more severe, including a cyberattack in 2014 that shut down services for several days and a major 2011 data breach affecting 77 million users, leading to a month-long shutdown and regulatory scrutiny.

UK officials push Apple to unlock cloud data, The Washington Post reports

Britain’s security officials have reportedly ordered Apple to create a so-called ‘back door’ to access all content uploaded to the cloud by its users worldwide. The demand, revealed by The Washington Post, could force Apple to compromise its security promises to customers. Sources suggest the company may opt to stop offering encrypted storage in the UK rather than comply with the order.

Apple has not yet responded to requests for comment outside of regular business hours. The Home Office has served Apple with a technical capability notice, which would require the company to grant access to the requested data. However, a spokesperson from the Home Office declined to confirm or deny the existence of such a notice.

In January, Britain initiated an investigation into the operating systems of Apple and Google, as well as their app stores and browsers. The ongoing regulatory scrutiny highlights growing tensions between tech giants and governments over privacy and security concerns.

UK announces AI cyber code for companies developing and managing AI systems

The UK government has launched its Code of Practice for the Cyber Security of AI, a voluntary framework designed to enhance security in AI development. The code sets out 13 principles aimed at reducing risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.

The guidelines apply to developers, system operators, and data custodians (any business, organisation, or individual that controls data permissions and the integrity of the data an AI model or system needs to function) responsible for creating, deploying, or managing AI systems. Companies that solely sell AI models or components fall under separate regulations. According to the Department for Science, Innovation and Technology, the code will help ensure AI is developed and deployed securely while fostering innovation and economic growth.

Key recommendations include implementing AI security training, establishing recovery plans, conducting risk assessments, maintaining system inventories, and ensuring transparency about data usage. One of the principles calls for enabling human responsibility for AI systems, requiring that AI decisions be explainable and that users understand their responsibilities.
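
As a concrete illustration of the inventory and risk-assessment recommendations, an organisation might keep a record per AI system and query it for gaps. The record fields and names below are assumptions for the sake of the example, not taken from the code of practice itself.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for one AI system; fields mirror the
# code's themes (accountable owner, data sources, risk assessment,
# recovery plan) but their exact shape is an assumption.
@dataclass
class AISystemRecord:
    name: str
    owner: str                      # human accountable for the system
    data_sources: list = field(default_factory=list)
    risk_assessed: bool = False
    recovery_plan: str = ""

def inventory_gaps(records):
    """List systems still missing a risk assessment or recovery plan."""
    return [r.name for r in records
            if not r.risk_assessed or not r.recovery_plan]

fleet = [
    AISystemRecord("support-chatbot", "cx-team",
                   ["faq-corpus"], True, "restore-from-backup"),
    AISystemRecord("fraud-model", "risk-team", ["transactions"]),
]
print(inventory_gaps(fleet))  # ['fraud-model']
```

Keeping the inventory machine-readable makes the other recommendations (recovery planning, transparency about data usage) auditable rather than aspirational.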

The code references existing standards and best practices for secure software development and security by design, and provides useful definitions.

The release of the code follows the UK’s AI Opportunities Action Plan, which outlines strategies to expand the nation’s AI sector and establish global leadership in the field. It also coincides with a call from the National Cyber Security Centre urging software vendors to eliminate ‘unforgivable vulnerabilities’: security flaws that are easy and cost-effective to fix but are often overlooked in favour of speed and new features.

The code also builds on the NCSC’s Guidelines for Secure AI Development, published in November 2023 and endorsed by 19 international partners.

UK course aims to equip young people with important AI skills

Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.

Jenny de la Mare from Digital Greenhouse said the course was designed to “inform and inspire” participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.

Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.