Vodafone collaborates with IBM on quantum-safe cryptography

Vodafone UK has teamed up with IBM to explore quantum-safe cryptography in a new proof-of-concept (PoC) trial for its mobile and broadband services, particularly for users of its ‘Secure Net’ anti-malware service. While quantum computers are still in the early stages of development, they could eventually break the public-key encryption that secures much of today’s internet traffic. In anticipation of this, Vodafone and IBM are testing how to integrate the new post-quantum cryptographic standards into Vodafone’s existing Secure Net service, which already protects millions of users from threats such as phishing and malware.

IBM’s cryptography experts have co-developed two algorithms now recognised in the US National Institute of Standards and Technology’s first post-quantum cryptography standards. This collaboration, supported by Akamai Technologies, aims to make Vodafone’s services more resilient against future quantum computing risks. Vodafone’s Head of R&D, Luke Ibbetson, stressed the importance of future-proofing digital security to ensure customers can continue enjoying safe internet experiences.
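The NIST standards in question (ML-KEM for key establishment, ML-DSA for digital signatures) are typically deployed in a ‘hybrid’ mode alongside classical algorithms during the migration period. The article does not describe Vodafone’s actual implementation; as a rough illustration only, the sketch below derives a session key from a classical and a post-quantum shared secret (both stand-in random bytes here), so the result stays safe as long as either algorithm remains unbroken:

```python
import hashlib
import hmac
import os

def hkdf_sha256(key_material: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) using SHA-256: extract, then expand."""
    prk = hmac.new(b"\x00" * 32, key_material, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets: in a real deployment the classical secret would
# come from an ECDH exchange and the post-quantum secret from ML-KEM (FIPS 203).
classical_secret = os.urandom(32)
post_quantum_secret = os.urandom(32)

# Hybrid derivation: concatenating both secrets before key derivation means an
# attacker must break BOTH algorithms to recover the session key.
session_key = hkdf_sha256(classical_secret + post_quantum_secret,
                          info=b"hybrid-pq-demo")
print(len(session_key))  # 32
```

This mirrors the widely recommended migration pattern (run new and old algorithms side by side) rather than any Vodafone-specific design.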

Although the PoC is still in its feasibility phase, Vodafone hopes to implement quantum-safe cryptography across its networks and products soon, ensuring stronger protection for both business and consumer users.

For more information on these topics, visit diplomacy.edu.

Wayve expands with new testing hub in Germany

British startup Wayve has announced plans to open a new testing and development hub in Germany, deploying a fleet of test vehicles in the Stuttgart region. The self-driving technology firm aims to enhance features such as lane-change assistance at the new facility, which will focus on improving its ‘Embodied AI’ system that learns from human behaviour.

Wayve, which operates in the UK and the US, is expanding into Germany, the continent’s largest automotive hub, as part of its strategy to enter the European market. The company received a boost earlier this year, with Uber investing in August and SoftBank leading a $1 billion funding round in May, supported by Nvidia.

Despite the significant investments in autonomous vehicle technology, self-driving systems still face challenges in predicting and assessing risks as accurately as human drivers. Wayve’s technology is already integrated into six vehicle platforms, including electric models like the Jaguar I-PACE and Ford Mustang Mach-E, as part of advanced driver assistance systems (ADAS).

UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Both Reddit and Imgur have expressed a commitment to adhering to UK regulations. Reddit, for example, stated that it plans to roll out updates to meet new age-assurance requirements. Meanwhile, TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

UK students increase use of AI for academic work

British universities have been urged to rethink their assessment methods after new research revealed a significant rise in students using generative AI (genAI) for their work. A survey of 1,000 undergraduates found that 88% used AI tools such as ChatGPT for assessments in 2025, up from 53% the year before. Overall, 92% of students now use some form of AI, marking a substantial shift in academic behaviours in just a year.

The report, by the Higher Education Policy Institute and Kortext, highlights how AI is being used for tasks such as summarising articles, explaining concepts, and suggesting research ideas. While AI can enhance the quality of work and save time, some students admitted to directly including AI-generated content in their assignments, raising concerns about academic misconduct.

The research also found that concerns over AI’s potential impact on academic integrity vary across demographics. Women, wealthier students, and those studying STEM subjects were more likely to embrace AI, while others expressed fears about getting caught or receiving biased results. Despite these concerns, students generally feel that universities are addressing the issue of academic integrity, with many believing their institutions have clear policies on AI use.

Experts argue that universities need to adapt quickly to the changing landscape, with some suggesting that AI should be integrated into teaching rather than being seen solely as a threat to academic integrity. As AI tools become an essential part of education, institutions must find a balance between leveraging the technology and maintaining academic standards.

UK Home Office’s new vulnerability reporting policy creates legal risks for ethical researchers, experts warn

The UK Home Office has introduced a vulnerability reporting mechanism through the platform HackerOne, allowing cybersecurity researchers to report security issues in its systems. However, concerns have been raised that individuals who submit reports could still face legal risks under the UK’s Computer Misuse Act (CMA), even if they follow the department’s new guidance.

Unlike some private-sector initiatives, the Home Office programme does not offer financial rewards for reporting vulnerabilities. The new guidelines prohibit researchers from disrupting systems or accessing or modifying data. However, they also caution that individuals must not ‘break any applicable law or regulations’, a clause that some industry groups argue could discourage vulnerability disclosure due to the broad provisions of the CMA, which dates back to 1990.

The CyberUp Campaign, a coalition of industry professionals, academics, and cybersecurity experts, warns that the CMA’s definition of unauthorised access does not distinguish between malicious intent and ethical security research. While the Ministry of Defence has previously assured researchers they would not face prosecution, the Home Office provides no such assurances, leaving researchers uncertain about potential legal consequences.

A Home Office spokesperson declined to comment on the concerns.

The CyberUp Campaign acknowledged the growing adoption of vulnerability disclosure policies across the public and private sectors but highlighted the ongoing legal risks researchers face in the UK. The campaign noted that other countries, including Malta, Portugal, and Belgium, have updated their laws to provide legal protections for ethical security research, while the UK has yet to introduce similar reforms.

The Labour Party had previously proposed an amendment to the CMA that would introduce a public interest defence for cybersecurity researchers, but this was not passed. Last year, Labour’s security minister Dan Jarvis praised the contributions of cybersecurity professionals and stated that the government was considering CMA reforms, though no legislative changes have been introduced so far.

Silent album released to challenge UK AI copyright reforms

More than 1,000 musicians have joined forces to release a silent album as part of a protest against the UK government’s proposed changes to copyright laws. The changes would allow AI companies to use artists’ work to train models without needing permission, a move critics argue would undermine creators’ rights. The silent album, titled ‘Is This What We Want?’, features recordings of empty studios and performance spaces, symbolising the potential loss of control over their work.

The changes have sparked outrage from high-profile artists such as Kate Bush, who warned that this could lead to the exploitation of musicians by tech companies. The protest album, which includes contributions from other major artists like Ed Sheeran and Dua Lipa, aims to highlight the negative impact of such reforms on the livelihoods of creators.

The UK government argues that these changes will help boost the AI and creative industries, allowing them to reach their full potential. However, the controversy over copyright law is growing, with many in the music industry urging a rethink before any new regulations are finalised.

Bluesky teams up with IWF to tackle harmful content

Bluesky, the rapidly growing decentralised social media platform, has partnered with the UK-based Internet Watch Foundation (IWF) to combat the spread of child sexual abuse material (CSAM). As part of the collaboration, Bluesky will gain access to the IWF’s tools, which include a list of websites containing CSAM and a catalogue of digital fingerprints, or ‘hashes,’ that identify abusive images. This partnership aims to reduce the risk of users encountering illegal content while helping to keep the platform safe from such material.
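Hash-list matching of this kind is conceptually simple: compute a fingerprint of an uploaded file and block it if the fingerprint appears on the list. The IWF’s actual tooling relies on specialised image hashes resilient to re-encoding; the stdlib sketch below, with a made-up blocklist, shows only the exact-match (cryptographic hash) variant of the idea:

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints, standing in for the kind of
# hash list a provider such as the IWF supplies to platforms.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def is_blocked(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 fingerprint is on the blocklist."""
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKED_HASHES

print(is_blocked(b"known-bad-image-bytes"))   # True
print(is_blocked(b"harmless-holiday-photo"))  # False
```

In production such systems use perceptual hashes (PhotoDNA is a well-known example) so that resized or re-encoded copies still match; an exact cryptographic hash, as above, misses any modified copy.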

Bluesky’s head of trust and safety, Aaron Rodericks, welcomed the partnership as a significant step in protecting users from harmful content. With the platform’s rapid growth—reaching over 30 million users by the end of last month—the move comes at a crucial time. In November, Bluesky announced plans to expand its moderation team to address the rise in harmful material following the influx of new users.

The partnership also highlights the growing concern over online child sexual abuse material. The IWF reported record levels of harmful content last year, with over 291,000 web pages removed from the internet. The foundation’s CEO, Derek Ray-Hill, stressed the urgency of tackling the crisis, calling for a collective effort from governments, tech companies, and society.

UK users face reduced cloud security as Apple responds to government pressure

Apple has withdrawn its Advanced Data Protection (ADP) feature for cloud backups in Britain, citing government requirements.

Users attempting to enable the encryption service now receive an error message, while existing users will eventually have to deactivate it. The move weakens iCloud security in the country, allowing authorities access to data that would otherwise be encrypted.

Experts warn that the change compromises user privacy and exposes data to potential cyber threats. Apple has insisted it will not create a backdoor for encrypted services, as doing so would increase security risks.

The UK government has not confirmed whether it issued a Technical Capability Notice, which could mandate such access.

Apple’s decision highlights ongoing tensions between tech companies and governments over encryption policies. Similar legal frameworks exist in countries like Australia, raising concerns that other nations could follow suit.

Security advocates argue that strong encryption is essential for protecting user privacy and safeguarding sensitive information from cybercriminals.

Apple rejects UK plans for mobile browser controls

Apple has pushed back against proposed remedies from the UK’s competition watchdog, arguing they could hinder innovation in the mobile browser market. The Competition and Markets Authority (CMA) is investigating Apple and Google’s dominance in browser engines and cloud gaming distribution through app stores, with potential regulatory measures under consideration.

In its response, Apple stated that mandating free access to future WebKit updates or iOS features used by Safari would be unfair, given the significant resources required to develop them. The company warned this could lead to ‘free-riding’ by third parties and discourage further investment in browser technologies.

The UK CMA’s investigation aims to increase competition in the mobile browser space, where Apple’s WebKit engine is a key player. However, Apple insists that the proposed changes would harm its ability to innovate and could ultimately reduce the quality of browser experiences for users. The regulator is expected to continue assessing industry feedback before making a final decision.

Former GCHQ chief calls for transparency amid UK’s attempt to access encrypted iCloud accounts

A controversy has emerged over the British government’s reported attempt to compel Apple to grant authorities access to encrypted iCloud accounts, leading to calls for increased transparency from intelligence agencies. Sir Jeremy Fleming, the head of the UK’s GCHQ from 2017 to 2023, addressed this issue at the Munich Cyber Security Conference, highlighting the need for public understanding and trust in intelligence operations. He emphasised that an agency’s ‘licence to operate’ should be grounded in transparency.

The UK government has contested the description of a ‘back door’ in relation to the notice, clarifying that it seeks to ensure Apple maintains the capability to provide iCloud data in response to lawful warrants, a function that existed prior to the introduction of end-to-end encryption for iCloud in December 2022.

Since 2020, Apple has provided iCloud data to UK authorities in response to just four of the more than 6,000 legal requests for customer information made under non-IPA laws. However, this figure excludes requests made under the Investigatory Powers Act (IPA), the UK’s primary law for accessing tech company data.

Fleming emphasised the importance of intelligence agencies providing clear explanations of their operations, particularly in relation to new technologies. He pointed out the need for a better understanding of how intelligence agencies operate in practice, particularly as technological advancements change their methods.
