CEPS Task Force aims to simplify EU digital laws and boost innovation

The Centre for European Policy Studies (CEPS) Task Force, titled ‘Next Steps for EU Law and Regulation for the Digital World’, aims to refine and simplify the EU’s digital rulebook.

This rulebook now covers key legislation, including the Digital Markets Act (DMA), Digital Services Act (DSA), GDPR, Data Act, AI Act, Data Governance Act (DGA), and Cyber Resilience Act (CRA).

While these laws position Europe as a global leader in digital regulation, they also create complexity, overlaps, and legal uncertainty.

The Task Force focuses on enhancing coherence, efficiency, and consistency across digital acts while maintaining strong protections for consumers and businesses.

The CEPS Task Force emphasises targeted reforms to reduce compliance burdens, especially for SMEs, and strengthen safeguards.

It also promotes procedural improvements, including robust impact assessments, independent ex-post evaluations, and the adoption of RegTech solutions to streamline compliance and make regulation more adaptive.

Between November 2025 and January 2026, the Task Force will hold four workshops addressing: alignment of the DMA with competition law, fine-tuning the DSA, improving data governance, enhancing GDPR trust, and ensuring AI Act coherence.

The findings will be published in a Final Report in March 2026, outlining a simpler, more agile EU digital regulatory framework that fosters innovation, reduces regulatory burdens, and upholds Europe’s values.

Brazil advances first national cybersecurity law

Brazil is preparing to pass its first national cybersecurity law, aiming to centralise oversight and strengthen protection for citizens and companies. The Cybersecurity Legal Framework would establish a new National Cybersecurity Authority to coordinate defence efforts across government and industry.

The legislation comes after a series of high-profile cyberattacks disrupted hospitals and exposed millions of personal records, highlighting gaps in Brazil’s digital defences. The authority would create nationwide standards, replacing fragmented rules currently managed by individual ministries and agencies.

Under the bill, public procurement would require compliance with official security standards, and suppliers would share responsibility for incidents. Companies meeting the rules could be listed as trusted providers, potentially boosting competitiveness in both public and private sectors.

The framework also includes incentives: financing through the National Public Security Fund and priority for locally developed technologies. While the bill still awaits approval in Congress, its adoption would make Brazil one of Latin America’s first countries with a comprehensive cybersecurity law.

Deloitte’s AI blunder: A costly lesson for the consultancy business

Deloitte has agreed to refund the Australian government the full $440,000 after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes related to a Federal Court case on the Robodebt scheme and fictitious academic references.

The incident underscores the risks of deploying AI in high-stakes government consultancy work without sufficient human oversight, and it raises questions about the credibility of policy decisions influenced by such flawed reports.

Deloitte has publicly accepted responsibility and is re-evaluating its internal quality assurance procedures, emphasising the need for rigorous human review to maintain the integrity of consultancy projects that use AI.

The episode has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis and to review oversight mechanisms to prevent similar failures. The report’s inaccuracies had already influenced discussions on welfare compliance, shaking public trust in the consultancy services used for critical policymaking.

The broader consultancy industry is feeling the ripple effects, as the incident highlights the reputational and financial risks of unchecked AI outputs. As AI adoption grows on efficiency grounds, the case is a stark reminder of the technology’s limitations, particularly in sensitive government work.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.

Gamers report widespread disconnections across multiple services

Several major gaming and online platforms have reportedly faced simultaneous disruptions across multiple devices and regions. Platforms like Steam and Riot Games experienced connection issues, blocking access to major titles such as Counter-Strike, Dota 2, Valorant, and League of Legends.

Some users reported issues with PlayStation Network, Epic Games, Hulu, AWS, and other services.

Experts suggest the outages may be linked to a possible DDoS attack from the Aisuru botnet. While official confirmations remain limited, reports indicate unusually high traffic, with one source claiming bandwidth levels near 30 terabits per second.

Similar activity from Aisuru has been noted in incidents dating back to 2024, targeting a range of internet-connected devices.

The botnet is thought to exploit vulnerabilities in routers, cameras, and other connected devices, potentially controlling hundreds of thousands of nodes. Researchers say the attacks are widespread across countries and industries, though their full scale and purpose remain uncertain.

Further investigations are ongoing, and platforms continue to monitor and respond to potential threats. Users are advised to remain aware of service updates and exercise caution when accessing online networks during periods of unusual activity.

India’s competition watchdog urges AI self-audits to prevent market distortions

The Competition Commission of India (CCI) has urged companies to self-audit their AI systems to prevent anti-competitive practices and ensure responsible autonomy.

The call came as part of the CCI’s market study on AI, which emphasised the risks of opacity and algorithmic collusion while highlighting AI’s potential to enhance innovation and productivity.

The study warned that dominant firms could exploit their control over data, infrastructure, and proprietary models to reinforce market power, creating barriers to entry. It also noted that opaque AI systems in user sectors may lead to tacit algorithmic coordination in pricing and strategy, undermining fair competition.

India’s regulatory approach, the CCI said, aims to balance technological progress with accountability through a co-regulatory framework that promotes both competition and innovation.

Additionally, the Commission plans to strengthen its technical capacity, establish a digital markets think tank and host a conference on AI and regulatory challenges.

The report recommends a six-step self-audit framework for enterprises, requiring AI systems to be evaluated against competition risks, with senior management oversight and clear accountability in high-risk deployments.

It also highlighted AI’s pro-competitive effects, particularly for MSMEs, which benefit from improved efficiency and greater access to digital markets.

New bill creates National Cybersecurity Authority in Brazil

Brazil is set to approve its first comprehensive Cybersecurity Legal Framework with Bill No. 4752/2025. The legislation creates a National Cybersecurity Authority (ANC) and makes compliance a condition for government procurement, with shared responsibility for supply chain security incidents.

The framework aims to unify the country’s fragmented cybersecurity policies. Government agencies will follow ANC standards, while companies delivering services to public entities must meet minimum cybersecurity requirements.

The ANC will also publish lists of compliant suppliers, providing a form of certification that could enhance trust in both public and private partnerships.

Supply chain oversight is a key element of the bill. Public bodies must assess supplier risks, and liability will be shared in the event of breaches.

The law encourages investment in national cybersecurity technologies and offers opportunities for companies to access financing and participate in the National Cybersecurity Program.

Approval would make Brazil one of the first Latin American countries with a robust federal cybersecurity law. The framework aims to strengthen protections, encourage innovation, and boost confidence for citizens, businesses, and international partners.

Companies that prepare now could gain a competitive advantage when the law comes into effect.

OpenAI and AMD strike 6GW GPU deal to power next-generation AI infrastructure

AMD and OpenAI have announced a strategic partnership to deploy up to six gigawatts of AMD GPUs, marking one of the largest AI compute collaborations to date.

The multi-year agreement will begin with the rollout of one gigawatt of AMD Instinct MI450 GPUs in the second half of 2026, with further deployments planned across future AMD generations.

The deal deepens a long-standing relationship between the two companies that began with AMD’s MI300X and MI350X series.

OpenAI will adopt AMD as a core strategic compute partner, integrating its technology into large-scale AI systems and jointly optimising product roadmaps to support next-generation AI workloads.

To strengthen alignment, AMD has issued OpenAI a warrant for up to 160 million shares, with tranches vesting as the partnership achieves deployment and share-price milestones. AMD expects the collaboration to deliver tens of billions of dollars in revenue and boost its non-GAAP earnings per share.

AMD CEO Dr Lisa Su called the deal ‘a true win-win’ for both companies, while OpenAI’s Sam Altman said the partnership will ‘accelerate progress and bring advanced AI benefits to everyone faster’.

The collaboration positions AMD as a leading hardware supplier in the race to build global-scale AI infrastructure.

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

A new AI strategy by the EU to cut reliance on the US and China

The EU is preparing to unveil a new strategy to reduce reliance on American and Chinese technology by accelerating the growth of homegrown AI.

The ‘Apply AI strategy’, set to be presented by EU tech chief Henna Virkkunen, positions AI as a strategic asset essential for the bloc’s competitiveness, security and resilience.

According to draft documents, the plan will prioritise adopting European-made AI tools across healthcare, defence and manufacturing.

Public administrations are expected to play a central role by integrating open-source EU AI systems, providing a market for local start-ups and reducing dependence on foreign platforms. The Commission has pledged €1bn from existing financing programmes to support the initiative.

Brussels has warned that foreign control of the ‘AI stack’ (the hardware and software that underpin advanced systems) could be ‘weaponised’ by state and non-state actors.

These concerns have intensified amid Europe’s continued dependence on American tech infrastructure. Meanwhile, China’s rapid progress in AI has further raised fears that the Union risks losing influence in shaping the technology’s future.

The EU is already home to several high-potential AI firms, including France’s Mistral and Germany’s Helsing. However, they rely heavily on overseas suppliers for software, hardware, and critical minerals.

The Commission wants to accelerate the deployment of European AI-enabled defence tools, such as command-and-control systems, which remain dependent on NATO and US providers. The strategy also outlines investment in sovereign frontier models for areas like space defence.

President Ursula von der Leyen said the bloc aims to ‘speed up AI adoption across the board’ to ensure it does not miss the transformative wave.

Brussels hopes to carve out a more substantial global role in the next phase of technological competition by reframing AI as an industrial sovereignty and security instrument.

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.
