UK artists raise alarm over AI law proposals

A new proposal by the UK government to alter copyright law has sparked significant concern among artists, particularly in Devon. The changes would allow AI companies to use content found online, including artwork, to train their models unless creators opt out. Artists like Sarah McIntyre, an illustrator from Bovey Tracey, argue that such a shift could undermine their rights, making it harder for them to control the use of their work and potentially depriving them of income.

The Devon Artist Network has expressed strong opposition to these plans, warning that they could have a devastating impact on the creative industries. They believe that creators should retain control over their work by default, without needing to actively opt out of its use by AI. While some, like Mike Phillips of the University of Plymouth, suggest that AI could help artists track copyright violations, most artists remain wary of the proposed changes.

The Department for Science, Innovation and Technology has acknowledged the concerns and confirmed that no decisions have yet been made. However, it has stated that the current copyright framework is limiting the potential of both the creative and AI sectors. As consultations close, the future of the proposal remains uncertain.

For more information on these topics, visit diplomacy.edu.

Microsoft executive says firms are lagging in AI adoption

Microsoft’s UK boss has warned that many companies are ‘stuck in neutral’ when it comes to AI, with a significant number of private and public sector organisations lacking any formal AI strategy. According to a Microsoft survey of nearly 1,500 senior leaders and 1,440 employees in the UK, more than half of executives report that their organisations have no official AI plan. Additionally, many recognise a growing productivity gap between employees using AI and those who are not.

Darren Hardman, Microsoft’s UK chief executive, stated that some companies are caught in the experimentation phase rather than fully deploying AI. Microsoft, a major backer of OpenAI, has been promoting AI deployment in workplaces through autonomous AI agents designed to perform tasks without human intervention. Early adopters, like consulting giant McKinsey, are already using AI agents for tasks such as scheduling meetings.

Hardman also discussed AI’s potential impact on jobs, with the Tony Blair Institute estimating that AI could displace up to 3 million UK jobs, though the net job loss will likely be much lower as new roles are created. He compared AI’s transformative impact on the workplace to how the internet revolutionised retail, creating roles like data analysts and social media managers. Hardman also backed proposed UK copyright law reforms, which would allow tech companies to use copyright-protected work for training AI models, arguing that the changes could drive economic growth and support AI development.

UK regulator approves Synopsys’ $35 billion Ansys deal

Britain’s competition regulator has approved Synopsys’ $35 billion acquisition of Ansys after the companies addressed concerns about the potential negative impact on innovation and pricing.

In December, the regulator raised alarms that the deal could reduce competition in the chip design software market, possibly leading to higher prices and less innovation.

However, following negotiations and the companies’ offer of remedies to mitigate these concerns, the regulator decided not to refer the deal for an in-depth phase-2 investigation.

Synopsys, a major player in the chip design software industry, announced the acquisition in January. The deal, which will be a mix of cash and stock, aims to strengthen Synopsys’ portfolio and expand its offerings in the design and development of complex products.

Ansys is a well-established provider of simulation software used by a range of industries, from aerospace to sports equipment, to design and optimise products such as aeroplanes and tennis rackets.

The acquisition marks a significant move for Synopsys, bringing together the strengths of both companies and allowing them to offer a broader set of solutions to customers across sectors, from semiconductor manufacturing to engineering and consumer goods.

UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants now face stricter scrutiny to ensure they address potential risks on their platforms and protect users from harmful content.

US investigates UK over alleged backdoor demand for Apple data

United States officials are reviewing whether the UK breached a bilateral agreement by reportedly pressuring Apple to create a ‘backdoor’ for government access to encrypted iCloud backups.

Apple recently withdrew an encrypted storage feature for UK users following reports, first in The Washington Post, that it had refused to comply with a demand that could have affected users worldwide.

The US director of national intelligence, Tulsi Gabbard, confirmed in a letter to lawmakers that a legal review is underway to determine if the UK violated the CLOUD Act.

Under the agreement, neither the US nor the United Kingdom can demand data access for citizens or residents of the other country. Initial legal assessments suggest the UK’s reported demands may have overstepped its authority under the agreement.

Apple has long defended its encryption policies, arguing that creating a backdoor for government access would weaken security and leave user data vulnerable to hackers. Cybersecurity experts warn that any such backdoor, once created, would inevitably be exploited.

The tech giant has clashed with regulators over encryption before, notably in 2016 when it resisted US government efforts to unlock a terrorism suspect’s iPhone.

Vodafone collaborates with IBM on quantum-safe cryptography

Vodafone UK has teamed up with IBM to explore quantum-safe cryptography as part of a new Proof of Concept (PoC) test for its mobile and broadband services, particularly for users of its ‘Secure Net’ anti-malware service. While quantum computers are still in the early stages of development, they could eventually break current internet encryption methods. In anticipation of this, Vodafone and IBM are testing how to integrate new post-quantum cryptographic standards into Vodafone’s existing Secure Net service, which already protects millions of users from threats like phishing and malware.

IBM’s cryptography experts have co-developed two algorithms now recognised in the US National Institute of Standards and Technology’s first post-quantum cryptography standards. This collaboration, supported by Akamai Technologies, aims to make Vodafone’s services more resilient against future quantum computing risks. Vodafone’s Head of R&D, Luke Ibbetson, stressed the importance of future-proofing digital security to ensure customers can continue enjoying safe internet experiences.
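Post-quantum migrations of this kind typically take a ‘hybrid’ form: a session key is derived from both a classical key exchange and a post-quantum key encapsulation, so the key stays safe unless both schemes are broken. The following is a minimal, illustrative Python sketch of that combining step, using an HKDF helper (RFC 5869) built from the standard library; the placeholder secrets and the `info` label are assumptions for demonstration, not Vodafone’s or IBM’s actual implementation.

```python
import hashlib
import hmac
import os

def hkdf_extract_expand(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869) with SHA-256: extract a pseudorandom key, then expand it."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder secrets standing in for real key-exchange outputs:
classical_secret = os.urandom(32)      # e.g. from an ECDH exchange
post_quantum_secret = os.urandom(32)   # e.g. from an ML-KEM encapsulation

# Concatenating both secrets means an attacker must break BOTH
# the classical and the post-quantum scheme to recover the session key.
session_key = hkdf_extract_expand(
    salt=b"",
    ikm=classical_secret + post_quantum_secret,
    info=b"hybrid-handshake-demo",
)
```

In a real deployment the two input secrets would come from the negotiated handshake rather than `os.urandom`, and the derivation would follow the protocol’s own key schedule; the point of the sketch is only the combine-then-derive structure.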

Although the PoC is still in its feasibility phase, Vodafone hopes to implement quantum-safe cryptography across its networks and products soon, ensuring stronger protection for both business and consumer users.

Wayve expands with new testing hub in Germany

British startup Wayve has announced plans to open a new testing and development hub in Germany, deploying a fleet of test vehicles in the Stuttgart region. The self-driving technology firm aims to enhance features like lane change assistance at the new facility, which will focus on improving its ‘Embodied AI’ system that learns from human behaviour.

Wayve, which operates in the UK and the US, is expanding into Germany as part of its strategy to enter the European market, with Germany being the continent’s largest automotive hub. The company has received major backing this year: Uber invested in August, and SoftBank led a $1 billion funding round in May that also drew support from Nvidia.

Despite the significant investments in autonomous vehicle technology, self-driving systems still face challenges in predicting and assessing risks as accurately as human drivers. Wayve’s technology is already integrated into six vehicle platforms, including electric models like the Jaguar I-PACE and Ford Mustang Mach-E, as part of advanced driver assistance systems (ADAS).

UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Both Reddit and Imgur have expressed a commitment to adhering to UK regulations. Reddit, for example, stated that it plans to roll out updates to meet new age-assurance requirements. Meanwhile, TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

UK students increase use of AI for academic work

British universities have been urged to reassess their assessment methods after new research revealed a significant rise in students using generative AI for their work. A survey of 1,000 undergraduates found that 88% used AI tools such as ChatGPT for assessments in 2025, up from 53% the year before. Overall, 92% of students now use some form of AI, marking a substantial shift in academic behaviour in just a year.

The report, by the Higher Education Policy Institute and Kortext, highlights how AI is being used for tasks such as summarising articles, explaining concepts, and suggesting research ideas. While AI can enhance the quality of work and save time, some students admitted to directly including AI-generated content in their assignments, raising concerns about academic misconduct.

The research also found that concerns over AI’s potential impact on academic integrity vary across demographics. Women, wealthier students, and those studying STEM subjects were more likely to embrace AI, while others expressed fears about getting caught or receiving biased results. Despite these concerns, students generally feel that universities are addressing the issue of academic integrity, with many believing their institutions have clear policies on AI use.

Experts argue that universities need to adapt quickly to the changing landscape, with some suggesting that AI should be integrated into teaching rather than being seen solely as a threat to academic integrity. As AI tools become an essential part of education, institutions must find a balance between leveraging the technology and maintaining academic standards.

UK Home Office’s new vulnerability reporting policy creates legal risks for ethical researchers, experts warn

The UK Home Office has introduced a vulnerability reporting mechanism through the platform HackerOne, allowing cybersecurity researchers to report security issues in its systems. However, concerns have been raised that individuals who submit reports could still face legal risks under the UK’s Computer Misuse Act (CMA), even if they follow the department’s new guidance.

Unlike some private-sector initiatives, the Home Office programme does not offer financial rewards for reporting vulnerabilities. The new guidelines prohibit researchers from disrupting systems or accessing and modifying data. They also caution that individuals must not ‘break any applicable law or regulations’, a clause that some industry groups argue could discourage vulnerability disclosure, given the broad provisions of the CMA, which dates back to 1990.

The CyberUp Campaign, a coalition of industry professionals, academics, and cybersecurity experts, warns that the CMA’s definition of unauthorised access does not distinguish between malicious intent and ethical security research. While the Ministry of Defence has previously assured researchers they would not face prosecution, the Home Office offers no such assurances, leaving researchers uncertain about potential legal consequences.

A Home Office spokesperson declined to comment on the concerns.

The CyberUp Campaign acknowledged the growing adoption of vulnerability disclosure policies across the public and private sectors but highlighted the ongoing legal risks researchers face in the UK. The campaign noted that other countries, including Malta, Portugal, and Belgium, have updated their laws to provide legal protections for ethical security research, while the UK has yet to introduce similar reforms.

The Labour Party had previously proposed an amendment to the CMA that would introduce a public interest defence for cybersecurity researchers, but it was not passed. Last year, Labour’s security minister Dan Jarvis praised the contributions of cybersecurity professionals and said the government was considering CMA reforms, though no legislative changes have been introduced so far.
