EU’s AI Act influences New Zealand’s digital strategy

As governments worldwide grapple with AI regulation and digital identity strategies, many are looking to the EU for guidance. In New Zealand, the EU’s AI Act and EUDI wallet program serve as valuable models. Dr Nessa Lynch, an expert on emerging technology regulation, highlights the need for legal and policy safeguards to ensure AI development prioritises public interests over commercial ones. She argues that the EU’s AI Act, framed as product safety legislation, protects people from high-risk AI uses and promotes trustworthy AI. However, she notes the controversial exceptions for law enforcement and national security.

Lynch emphasises that regulation must balance innovation and trust. For New Zealand, adopting a robust regulatory framework is crucial for fostering public trust in AI. The current gaps in its privacy and data protection laws, along with unclear AI usage guidelines, could hinder innovation and public confidence. Lynch stresses the importance of a people-centred approach to regulation, ensuring AI is used responsibly and ethically.

Similarly, New Zealand’s digital identity strategy is evolving alongside its AI regulation. The recent launch of the New Zealand Trust Framework Authority aims to verify digital identity service providers. Professor Markus Luczak-Roesch from Victoria University of Wellington highlights the transformative potential of digital ID, which must be managed in line with national values. He points to Estonia and Norway as models for integrating digital ID with robust data infrastructure and ethical AI development, stressing the importance of avoiding technologies that may carry unethical components or incompatible values.

AI cybersecurity in devices deemed high-risk by European Commission

AI-based cybersecurity and emergency services components in internet-connected devices are expected to be classified as high-risk under the AI Act, according to a European Commission document seen by Euractiv. The document, which interprets the relationship between the 2014 Radio Equipment Directive (RED) and the AI Act, marks the first known instance of how AI-based safety components will be treated under the new regulations. The RED pertains to wireless devices, including those using Wi-Fi and Bluetooth, beyond traditional radios.

Under the AI Act, high-risk AI systems will be subject to extensive testing, risk management, security measures, and documentation. The Act includes a list of use cases where AI deployment is automatically considered high-risk, such as in critical infrastructure and law enforcement. It also sets criteria for categorising other high-risk products, requiring third-party conformity assessments in line with sector-specific regulations. AI cybersecurity and emergency services components meet these criteria under the RED, thus being classified as high-risk.

Even in cases where the RED allows for self-assessment compliance with harmonised standards, these AI-based components are still deemed high-risk. The AI Act references numerous sectoral regulations that could classify AI products as high-risk, extending beyond electronics to medical devices, aviation, heavy machinery, and personal watercraft. The preliminary interpretation suggests that self-assessment standards are insufficient to remove the high-risk classification from AI products in these industries.

The AI Act imposes significant requirements on high-risk AI systems, while those not in this category face only minor transparency obligations. The Commission’s document is a preliminary interpretation, and the full application of the AI Act, which spans over 500 pages, remains to be seen. Despite initial estimates that 5-15% of AI systems would be classified as high-risk, a 2022 survey of EU-based startups indicated that 33-50% of these startups consider their products high-risk. Further interpretive work is needed to understand how the AI Act will impact various sectors.

Why does it matter?

These developments highlight the European Commission’s stringent approach to regulating AI-based cybersecurity and emergency services components in internet-connected devices. By classifying these components as high-risk, the AI Act mandates rigorous testing, security measures, and documentation, ensuring robust safety standards. This move underscores the EU’s commitment to protecting critical infrastructure and sensitive data and signals significant regulatory implications for various industries, potentially influencing global standards and practices in AI technology.

EU probes industry on China’s chip production

The European Commission has initiated a consultation with the semiconductor industry to gather views on China’s expanded production of older-generation computer chips, known as legacy chips. The effort precedes two voluntary surveys targeting the chip industry and major chip-using firms, which are due in September. The Commission aims to assess the role of legacy chips in supply chains and explore potential joint measures with the US to address dependency and market distortion.

That move comes amid rising tensions between the EU and China as the European Union seeks to shield its industries from Chinese competition. Recently, the Commission imposed provisional tariffs of up to 37.6% on Chinese electric vehicles, signalling a potentially tougher stance towards Beijing. Chinese investment in legacy chip production, driven by state subsidies and US restrictions on advanced chips, has raised concerns in the West about long-term market implications and potential oversupply.

The Commission’s antitrust chief, Margrethe Vestager, hinted in April at a possible investigation into legacy chips after discussions with US officials. A detailed report released by the European Commission earlier this year highlighted the extensive support provided by the Chinese government to domestic firms across various sectors, including semiconductors. The new chip-focused surveys are broader in scope than previous US security-focused surveys, aiming to gather comprehensive data on chip sourcing, products, pricing, and competitive estimates.

EU asks Amazon for DSA compliance details

The European Commission has requested that Amazon provide detailed information regarding its measures to comply with its obligations under the Digital Services Act (DSA). Specifically, the Commission is interested in the transparency of Amazon’s recommender systems. Amazon has been given a deadline of 26 July to respond.

The DSA mandates that major tech companies, like Amazon, take more responsibility in addressing illegal and harmful content on their platforms. The regulatory push aims to create a safer and more predictable online environment for users. Amazon stated that it is currently reviewing the EU’s request and plans to work closely with the European Commission.

A spokesperson for Amazon expressed support for the Commission’s objectives, emphasising the company’s commitment to a safe and trustworthy shopping experience. Amazon highlighted its significant investments in protecting its platform from bad actors and illegal content and noted that these efforts align with DSA compliance.

EU faces major AI shortfall by 2030

According to a European Commission report, the EU is falling well short of its 2030 AI targets. The review of the EU’s Digital Decade programme revealed that only 11% of EU enterprises currently use designated AI technologies, far short of the 75% target set for 2030. At the current rate, the Commission estimates it would take almost a century to achieve this goal.

The report also highlighted other areas for improvement, such as the EU being over a decade behind in producing the desired number of tech unicorns and in spreading basic tech skills among the general public. Despite these setbacks, European Commission leaders remain optimistic, pointing out that the report offers a clear path forward. Margrethe Vestager, the EC’s competition commissioner, stressed the need for increased state-level investments to reach the digital transformation targets.

Thierry Breton, the EU’s digital chief, echoed these sentiments, emphasising the importance of investments, cross-border cooperation, and the completion of the Digital Single Market to boost the adoption of key technologies like AI. The findings come amid concerns that the EU’s stringent AI regulations could hinder its global competitiveness, especially compared to less regulated regions like the US and China.

Meta faces EU charges on user privacy tech rules

The EU antitrust regulators have charged Meta Platforms with violating landmark tech rules through its new ‘pay or consent’ advertising model for Facebook and Instagram. The model, introduced last November, offers users a choice between a free, ad-supported service with tracking or a paid, ad-free service. The European Commission argues this binary choice breaches the Digital Markets Act (DMA) by forcing users to consent to data tracking without providing a less personalised but equivalent alternative.

Meta asserts that its model complies with a ruling from the EU’s top court and is aligned with the DMA, expressing a willingness to engage with the Commission to resolve the issue. However, if found in breach, Meta could face fines of up to 10% of its global annual turnover. The Commission aims to conclude its investigation by March next year.

The charge follows a recent DMA-related charge against Apple for similar non-compliance, highlighting the EU’s efforts to regulate Big Tech and empower users to control their data.

EU sanctions six Russian-linked hackers

Six individuals have been added to the EU’s sanctions list for their involvement in cyberattacks targeting critical infrastructure, state functions, classified information, and emergency response systems in EU member states, according to the official press release. These sanctions mark the first measures against cybercriminals employing ransomware against essential services such as health and banking.

Among those sanctioned are Ruslan Peretyatko and Andrey Korinets of the ‘Callisto group’, known for cyber operations against the EU and third countries through phishing campaigns aimed at stealing sensitive data in defence and external relations.

Also targeted are Oleksandr Sklianko and Mykola Chernykh of the ‘Armageddon hacker group,’ allegedly supported by Russia’s Federal Security Service (FSB), responsible for impactful cyberattacks on EU governments and Ukraine using phishing and malware.

Additionally, Mikhail Tsarev and Maksim Galochkin, involved in deploying ‘Conti’ and ‘Trickbot’ malware under the ‘Wizard Spider’ group, face sanctions. These ransomware campaigns have caused significant economic damage across sectors including health and banking in the EU.

The EU’s horizontal cyber sanctions regime now covers 14 individuals and four entities, involving asset freezes and travel bans, and prohibiting EU persons and entities from providing funds to those listed.

With these new measures, the EU and its member states emphasise their commitment to combating persistent malicious cyber activities. Last June, the European Council agreed that new measures were needed to strengthen its Cyber Diplomacy Toolbox.

EU charges Microsoft over Teams bundling

EU antitrust regulators have accused Microsoft of illegally bundling its Teams chat and video app with its Office product suite, claiming the company’s recent efforts to separate the two were insufficient. The European Commission stated that Microsoft breached antitrust rules by tying Teams to its popular Office 365 and Microsoft 365 suites, which stifled competition.

The regulatory action follows a 2020 complaint by Slack, a rival workspace messaging app owned by Salesforce. Microsoft introduced Teams to Office 365 in 2017 at no extra cost, replacing Skype for Business, and its use surged during the pandemic due to its video conferencing capabilities.

The European Commission has preliminarily determined that Microsoft’s changes don’t adequately address the competition concerns and that more actions are needed. Microsoft has expressed willingness to work with the EU regulators to find acceptable solutions.

EU cybersecurity exercise organised to test energy sector’s cyber resilience

The 7th edition of Cyber Europe, organised by the European Union Agency for Cybersecurity (ENISA), tested the resilience of the EU energy sector, highlighting the growing cyber threat to critical infrastructure. In 2023, over 200 cyber incidents targeted the energy sector, with more than half aimed specifically at Europe, underscoring the sector’s vulnerability due to its crucial role in the European economy.

Juhan Lepassaar, Executive Director of ENISA, highlighted the exercise’s role in enhancing preparedness and response capacities to protect critical infrastructure, essential for the single market’s stability.

According to ENISA’s Network and Information Security (NIS) Investments report, 32% of energy sector operators lack Security Operations Centre (SOC) monitoring for critical Operational Technology (OT) processes, while 52% integrate OT and Information Technology (IT) under a single SOC.

This year’s Cyber Europe exercise focused on a scenario involving cyber threats to EU energy infrastructure amidst geopolitical tensions. Over two days, stakeholders from 30 national cybersecurity agencies and numerous EU bodies collaborated, developing crisis management skills and coordinating responses to simulated cyber incidents. The exercise, one of Europe’s largest, involved over a thousand experts across various domains, facilitated by ENISA, which celebrates its 20th anniversary in 2024.

EU faces controversy over proposed AI scanning law

The EU is facing significant controversy over a proposed law that would require AI scanning of users’ photos and videos on messaging apps to detect child sexual abuse material (CSAM). Critics, including major tech companies like WhatsApp and Signal, argue that this law threatens privacy and encryption, undermining fundamental rights. They also warn that the AI detection systems could produce numerous false positives, overwhelming law enforcement.

A recent meeting among the EU member states’ representatives failed to reach a consensus on the proposal, leading to further delays. The Belgian presidency had hoped to finalise a negotiating mandate, but disagreements among member states prevented progress. The ongoing division means that discussions on the proposal will likely continue under Hungary’s upcoming EU Council presidency.

Opponents of the proposal, including Signal President Meredith Whittaker and Proton founder Andy Yen, emphasise the dangers of mass surveillance and the need for more targeted approaches to child protection. Despite the current setback, there’s concern that efforts to push the law forward will persist, necessitating continued vigilance from privacy advocates.