EPO strengthens industry collaboration on European patent innovation

The European Patent Office (EPO) has reinforced cooperation with industry stakeholders through discussions with the German Association of Industry IP Experts, focusing on strengthening the European patent system and supporting innovation.

The meeting brought together representatives from major industrial actors to align priorities and explore future collaboration.

Discussions between the EPO and the stakeholders centred on enhancing technology transfer, empowering startups and fostering economic growth across Europe.

Participants emphasised the importance of inclusive engagement among patent system users instead of fragmented approaches, ensuring that innovation strategies reflect both industrial and societal needs.

The Unitary Patent system was highlighted as gaining traction, particularly among smaller entities such as SMEs, individual inventors and research organisations. Such a trend reflects broader efforts to improve accessibility and scalability within the European innovation ecosystem.

AI also featured prominently, with both sides recognising its growing role in improving efficiency and quality in patent processes.

A human-centric approach remains essential, ensuring that AI deployment supports responsible innovation while maintaining high standards in patent examination and services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy fines major bank over data protection failures

The Italian Data Protection Authority has imposed a €31.8 million fine on Intesa Sanpaolo following serious shortcomings in its handling of personal data.

The case stems from unauthorised access by an employee to thousands of customer accounts, raising concerns about internal oversight and data protection safeguards.

Investigations revealed that monitoring systems failed to detect repeated unjustified access to sensitive financial information over an extended period. The breach also involved high-risk individuals, highlighting weaknesses in risk-based controls instead of robust, targeted protection measures.

Authorities in Italy identified violations of core data protection principles, including integrity, confidentiality and accountability. Additional concerns arose from delays in notifying both regulators and affected individuals, limiting the ability to respond effectively to the incident.

The case of Intesa Sanpaolo underscores increasing regulatory scrutiny of data governance practices in the financial sector. Strengthening internal controls and ensuring timely breach reporting remain essential for maintaining trust and compliance in data-driven banking environments.

UK fines Apple subsidiary over sanctions breach

The UK has fined Apple Inc. subsidiary Apple Distribution International £390,000 for breaching sanctions linked to Russia. The penalty relates to payments routed through a UK bank to a Russian streaming platform.

The payments, totalling more than £635,000, were made to Okko from a UK-based account. The subsidiary, responsible for Apple product sales across Europe and the Middle East, instructed the transfers despite the platform’s ownership links to sanctioned entities.

The Office of Financial Sanctions Implementation found the funds were linked to Sberbank and a company later sanctioned after the 2022 Ukraine invasion. Payments were made shortly after those restrictions came into force.

Regulators said the firm had voluntarily disclosed the transactions and had not been aware of the sanctions breach at the time. Apple stated it follows all applicable laws and has strengthened its compliance procedures following the incident.

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshittification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Enshittification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Major service disruption affects DeepSeek chatbot in China

DeepSeek’s chatbot suffered a seven-hour-plus disruption in China, prompting multiple updates as the company worked to restore full functionality. Users began reporting issues on Sunday evening, with further performance problems recorded on Monday morning.

Initial alerts appeared on monitoring platforms and DeepSeek’s own status page, which acknowledged an incident shortly after it began. Although early fixes were deployed within hours, additional disruptions followed, requiring further corrective updates before the system stabilised.

The company has not disclosed the cause of the outage, and no official comment has been provided. The extended downtime stands out for a platform known for consistent performance, which has maintained a near 99 percent uptime record since the launch of its R1 model in 2025.

The disruption comes at a time of heightened anticipation for DeepSeek’s next major update, as speculation builds across China’s competitive AI sector, where firms continue to race to release new models.

Cryptocurrency political donations banned under new Canada bill

Canada’s Liberal government has introduced Bill C-25 to prohibit cryptocurrency and other non-cash instruments from being used as political donations. The measure covers all registered parties, candidates, leadership and nomination contests, and third-party advertisers, tightening campaign finance rules.

The proposal reverses a 2019 framework that had allowed limited crypto contributions under strict conditions, though uptake remained minimal and no major party reported receiving such donations in recent federal elections.

Authorities argue that pseudo-anonymous blockchain transactions make it difficult to verify the true source of funds, raising concerns about traceability and foreign interference risks.

Under the new rules, any prohibited donation must be returned, destroyed, or converted and forwarded to the Receiver General within 30 days. Enforcement includes fines of up to twice the illegal contribution’s value, reaching CA$25,000 for individuals and CA$100,000 for corporations.

Bill C-25 also revives provisions from the earlier Bill C-65, which collapsed in 2025 after Parliament was prorogued. The updated law aligns with UK restrictions and expands election oversight powers, including measures against deepfakes and foreign interference.

UK regulator targets misleading online reviews in new crackdown

The Competition and Markets Authority has launched new investigations into five companies as part of a wider crackdown on fake and misleading online reviews, targeting practices that shape consumer decisions rather than reflect genuine customer experiences.

The cases involve Autotrader, Feefo, Dignity, Just Eat and Pasta Evangelists, spanning sectors including car sales, food delivery and funeral services.

The CMA is examining whether negative reviews were suppressed, ratings inflated, or incentives offered in exchange for positive feedback without disclosure.

Concerns also extend to moderation practices and whether review systems provide a complete and accurate picture of customer experiences, rather than favouring reputational or commercial interests. No conclusions have yet been reached on whether consumer law has been breached.

Online reviews play a central role in consumer behaviour, influencing significant levels of spending across the UK economy.

Research indicates that a large majority of consumers rely on reviews when making purchasing decisions, raising concerns that misleading content can distort markets and undermine trust, particularly as AI makes it harder to detect fabricated reviews.

The investigations form part of a broader enforcement effort under the Digital Markets Competition and Consumers Act 2024, which introduced stricter rules on fake and misleading reviews.

Authorities aim to improve transparency and accountability across digital platforms, with potential penalties reaching up to 10% of global turnover for companies found to have breached consumer protection laws.

National security rules to prioritise UK contracts in AI, steel and shipbuilding

The UK government has announced new procurement guidance that will treat shipbuilding, steel, AI, and energy infrastructure as critical to national security, with departments directed to prioritise British businesses where necessary to protect it. The press release was published on 26 March by the Cabinet Office and its Minister, Chris Ward.

According to the government, the new approach is intended to respond to recent supply-chain fragility and strengthen domestic capacity in sectors it describes as vital to national security. The guidance is presented as the first clear framework for how departments can protect the UK’s economic security and build resilience in the four named sectors.

Additional measures in the package go beyond sector prioritisation. The government says departments will either use British steel or provide a justification if steel is sourced from overseas, linking the change to the UK Steel Strategy launched the previous week. Officials also say the reforms support the government’s Modern Industrial Strategy and follow the publication of the National Security Strategy.

Procurement reform is another part of the package. Under a new Public Interest Test, departments will be asked to assess whether outsourced service contracts worth more than £1 million could be delivered more effectively in-house. The government says the test will cover more than 95% of central government contracts by value.

Community impact is also being built into the contracting framework. Departments will be required to publish and report annually on a specific social value goal for contracts above £5 million, which the government says will cover more than 90% of central government contracts by value. Companies bidding for public contracts are also being encouraged to include commitments on local jobs, skills, and apprenticeships.

The press release also says a new suite of AI tools has been developed to streamline the commercial process. Contract terms will be simplified, and additional business information will be integrated into a central platform, with the stated aim of reducing repeated submissions by smaller businesses bidding for multiple contracts.

Chris Ward said: ‘This Government is backing British businesses and the working people who power them. These reforms are about using the full weight of Government spending to support British jobs, protect our national security and grow our economy.’ He added: ‘Whether you make steel in Scunthorpe, build ships on the Clyde or run a small tech firm in the Midlands, this Government is on your side.’

India AI governance faces court, privacy and cyber pressures

An opinion article published on 26 March by the International Association of Privacy Professionals says India’s data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece carries an editor’s note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims when penalties are paid to the government, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government’s control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, a not-for-profit foundation that operates airport passenger-processing infrastructure in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India’s space ecosystem.

It applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, 35% had a complete view of their data, and 36% could fully classify their data.

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings highlight concerns that children can easily access such platforms rather than being effectively prevented by robust safeguards.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, or blurred content failed to prevent minors from accessing content.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalties to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.
