AI development is outpacing our understanding, says expert

Dario Amodei, CEO of AI firm Anthropic, has warned that the race to develop AI is moving faster than efforts to fully understand it. Speaking at an event in Paris, he stressed the need for deeper research into AI models, describing it as a race between expanding capabilities and improving transparency. ‘We can’t slow down development, but our understanding must match our ability to build,’ he said.

Amodei rejected the notion that AI safety measures hinder progress, arguing instead that they help refine and improve models. He pointed to earlier discussions at the UK’s Bletchley Summit, where risk assessment strategies were introduced, and insisted they had not slowed technological growth. ‘Better testing and measurement actually lead to better models,’ he said.

The Anthropic CEO also discussed the evolving AI market, including competition from Chinese firm DeepSeek, whose claims of dramatically lower training costs he dismissed as ‘not based on facts.’ Looking ahead, he hinted at upcoming improvements in AI reasoning, with a focus on creating more seamless transitions between different types of models. He remains optimistic, predicting that AI will drive innovation across industries, from healthcare to finance and energy.


Apple granted UK authorities iCloud data in just 4 of 6,000 requests since 2020—excluding Investigatory Powers Act cases

Since 2020, Apple has provided iCloud data to UK authorities in response to four of more than 6,000 legal requests for customer information made under non-IPA laws. The figure excludes requests made under the Investigatory Powers Act (IPA), the UK’s primary law for accessing tech company data.

For each reporting period from January 2020 to June 2023, Apple received between 0 and 499 IPA-related requests, as these figures are disclosed only in bands of 500. Due to legal limitations, Apple cannot disclose further details about these requests.

Earlier reporting linked the low number of content disclosures to efforts by the UK government to force Apple to provide encrypted iCloud data. However, given the lack of detail in the data, no direct connection can be established.

The UK government previously stated that it has made over 10,000 requests to US companies since the US-UK Data Access Agreement began, providing crucial data for law enforcement in cases related to terrorism, organized crime, and other serious offenses.

Apple’s transparency reports suggest that content data is shared more frequently in other countries, such as the US, where it responded to 22,306 requests in 2020-2023. In comparison, most countries see lower content disclosures due to restrictions on sharing with foreign governments.

The British government’s Technical Capability Notice (TCN), revealed by The Washington Post, follows Apple’s 2022 introduction of optional end-to-end encryption (E2EE) for iCloud. While the UK government did not characterise it as such, critics see the TCN as a potential ‘back door’ to Apple’s encrypted data. Apple has declined to comment, while the UK government says it does not discuss operational matters.

The controversy reflects ongoing debates about the balance between encryption, privacy, and law enforcement access to encrypted data.

Motorola loses appeal over UK emergency services contract

Motorola has been denied permission to appeal against the UK competition regulator’s ruling that it was making excessive profits from its contract to provide communications for Britain’s emergency services. The Court of Appeal unanimously dismissed the company’s application, upholding the Competition and Markets Authority’s (CMA) decision to impose a price cap on Motorola’s Airwave network.

The CMA introduced the cap in July 2023, reducing the cost of the Airwave service to reflect a competitive market, cutting an estimated £200 million in annual charges. Motorola had previously challenged the regulator’s findings at a tribunal but was unsuccessful. CMA Executive Director George Lusty welcomed the court’s decision, stating it ensures fair pricing for emergency services and marks the end of the legal dispute.

A Motorola spokesperson defended the company’s role, emphasising that Airwave remains essential for UK public safety communications. Despite disagreeing with the CMA’s ruling, Motorola said it is focused on continuing to provide high-quality emergency communication services.

UK gambling websites breach data protection laws

Gambling companies are under investigation for covertly sharing visitors’ data with Facebook’s parent company, Meta, without proper consent, breaching data protection laws. A hidden tracking tool embedded in numerous UK gambling websites has been sending data, such as the web pages users visit and the buttons they click, to Meta, which then uses this information to profile individuals as gamblers. This data is then used to target users with gambling-related ads, violating the legal requirement for explicit consent before sharing such information.
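To illustrate the mechanism described above, here is a minimal Python sketch of how a pixel-style tracker reports a page visit. The endpoint shape follows the Meta Pixel’s image request (a GET to facebook.com/tr carrying the pixel ID, event name, and page URL), but the pixel ID and page URL used here are hypothetical, and this sketch only builds the request rather than sending it.

```python
from urllib.parse import urlencode

def build_pixel_request(pixel_id: str, event: str, page_url: str) -> str:
    """Construct the GET URL a pixel-style tracker would fire.

    The Meta Pixel reports events via an image request to
    https://www.facebook.com/tr, with the site's pixel ID, the event
    name, and the visited page's URL passed as query parameters.
    """
    params = {
        "id": pixel_id,   # the site's pixel ID (hypothetical value below)
        "ev": event,      # event name, e.g. "PageView"
        "dl": page_url,   # "document location": the page the user visited
    }
    return "https://www.facebook.com/tr?" + urlencode(params)

# Example: a visit to a (hypothetical) gambling site's sign-up page
url = build_pixel_request("1234567890", "PageView",
                          "https://example-casino.test/signup")
print(url)
```

Because the page URL itself is sent as a parameter, merely loading a page on a gambling site is enough to associate the visitor’s browser with gambling activity, which is why regulators treat this as data sharing requiring prior consent.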

Testing of 150 gambling websites revealed that 52, including large brands such as Hollywoodbets, Sporting Index, and Bet442, automatically transmitted user data to Meta. This data sharing occurred without users having the opportunity to consent, resulting in targeted ads for gambling websites appearing shortly after users visited these sites. Experts have raised concerns about the industry’s unlawful practices and called for immediate regulatory action.

The Information Commissioner’s Office (ICO) is reviewing the use of tracking tools like Meta Pixel and has warned that enforcement action could be taken, including significant fines. Some gambling companies have updated their websites to prevent automatic data sharing, while others have removed the tracking tool altogether in response to the findings. However, the Gambling Commission has yet to address the issue of third-party profiling used to recruit new customers.

The misuse of data in this way highlights the risks of unregulated marketing, particularly for vulnerable individuals. Data privacy experts have stressed that these practices not only breach privacy laws but could also exacerbate gambling problems by targeting individuals who may already be at risk.

Sony extends PlayStation Plus after global network disruption

PlayStation Plus subscribers will receive an automatic five-day extension after a global outage disrupted the PlayStation Network for around 18 hours on Friday and Saturday. Sony confirmed on Sunday that network services had been fully restored and apologised for the inconvenience but did not specify the cause of the disruption.

The outage, which started late on Friday, left users unable to sign in, play online games or access the PlayStation Store. By Saturday evening, Sony announced that services were back online. At the outage’s peak, Downdetector.com recorded nearly 8,000 affected users in the US and over 7,300 in the UK.

PlayStation Network plays a vital role in Sony’s gaming division, supporting millions of users worldwide. Previous disruptions have been more severe, including a cyberattack in 2014 that shut down services for several days and a major 2011 data breach affecting 77 million users, leading to a month-long shutdown and regulatory scrutiny.

UK officials push Apple to unlock cloud data, The Washington Post reports

Britain’s security officials have reportedly ordered Apple to create a so-called ‘back door’ to access all content uploaded to the cloud by its users worldwide. The demand, revealed by The Washington Post, could force Apple to compromise its security promises to customers. Sources suggest the company may opt to stop offering encrypted storage in the UK rather than comply with the order.

Apple did not immediately respond to a request for comment made outside regular business hours. The Home Office has served Apple with a technical capability notice, which would require the company to grant access to the requested data. However, a spokesperson for the Home Office declined to confirm or deny the existence of such a notice.

In January, Britain initiated an investigation into the operating systems of Apple and Google, as well as their app stores and browsers. The ongoing regulatory scrutiny highlights growing tensions between tech giants and governments over privacy and security concerns.

UK announces AI cyber code for companies developing and managing AI systems

The UK government has launched its Code of Practice for the Cyber Security of AI, a voluntary framework designed to enhance security in AI development. The code sets out 13 principles aimed at reducing risks such as AI-driven cyberattacks, system failures, and data vulnerabilities.

The guidelines apply to developers, system operators, and data custodians (any business, organisation, or individual that controls the permissions and integrity of data used by an AI model or system) responsible for creating, deploying, or managing AI systems. Companies that solely sell AI models or components fall under separate regulations. According to the Department for Science, Innovation, and Technology, the code will help ensure AI is developed and deployed securely while fostering innovation and economic growth.

Key recommendations include implementing AI security training, establishing recovery plans, conducting risk assessments, maintaining system inventories, and ensuring transparency about data usage. One of the principles calls for enabling human responsibility for AI systems, requiring that AI decisions be explainable and that users understand their responsibilities.

The code references existing standards and best practices for secure software development and security by design, and provides useful definitions.

The release of the code follows the UK’s AI Opportunities Action Plan, which outlines strategies to expand the nation’s AI sector and establish global leadership in the field. It also coincides with a call from the National Cyber Security Centre urging software vendors to eliminate ‘unforgivable vulnerabilities’, security flaws that are easy and cost-effective to fix but are often overlooked in favour of speed and new features.

The code also builds on the NCSC’s Guidelines for Secure AI Development, which were published in November 2023 and endorsed by 19 international partners.

UK course aims to equip young people with important AI skills

Young people in Guernsey are being offered a free six-week course on AI to help them understand both the opportunities and challenges of the technology. Run by Digital Greenhouse in St Peter Port, the programme is open to students and graduates over the age of 16, regardless of their academic background. Experts from University College London (UCL) deliver the lessons remotely each week.

Jenny de la Mare from Digital Greenhouse said the course was designed to ‘inform and inspire’ participants while helping them stand out in job and university applications. She emphasised that the programme was not limited to STEM students and could serve as a strong introduction to AI for anyone interested in the field.

Recognising that young people in Guernsey may have fewer opportunities to attend major tech events in the UK, organisers hope the course will give them a competitive edge. The programme has already started but is still open for registrations, with interested individuals encouraged to contact Digital Greenhouse.

Britain to outlaw AI tools used for child abuse images

The United Kingdom is set to become the first country to criminalise the use of AI to create child sexual abuse images. New offences will target AI-generated explicit content, including tools that ‘nudeify’ real-life images of children. The move follows a sharp rise in AI-generated abuse material, with reports increasing nearly five-fold in 2024, according to the Internet Watch Foundation.

The government warns that predators are using AI to disguise their identities and blackmail children into further exploitation. New laws will criminalise the possession, creation, or distribution of AI tools designed for child abuse material, as well as so-called ‘paedophile manuals’ that provide instructions on using such technology. Websites hosting AI-generated child abuse content will also be targeted, and authorities will gain powers to unlock digital devices for inspection.

The measures will be included in the upcoming Crime and Policing Bill. Earlier this month, Britain also announced plans to outlaw AI-generated ‘deepfake’ pornography, making it illegal to create or share sexually explicit deepfakes. Officials say the new laws will help protect children from emerging online threats.

Siri upgrade brings expanded language support

Apple has announced that its AI suite, Apple Intelligence, will support additional languages starting in April, including French, German, Italian, Portuguese, Spanish, Japanese, Korean, and Simplified Chinese. The update will also introduce localised English versions for India and Singapore, broadening access to the technology beyond its initial US English release.

The expansion follows a December update that brought support for various English dialects, including those used in Australia, Canada, New Zealand, South Africa, and the UK. However, Apple has yet to confirm when its AI suite will be available in the EU or mainland China.

CEO Tim Cook also revealed that the next version of Siri, which will feature improved on-screen contextual understanding, is expected to launch in the coming months. The update marks Apple’s latest effort to strengthen its AI ecosystem and compete with rivals in the artificial intelligence space.