European Commission targets end-to-end encryption and proposes expanding Europol’s powers into an EU-level FBI equivalent

The European Commission announced ProtectEU, a new internal security strategy that sets out the broad priorities it intends to pursue in the coming years in response to evolving security challenges. While the document outlines strategic objectives, it does not include specific legislative proposals.

The Commission highlighted the need to revisit the European Union’s approach to internal security, citing what it described as ‘a changed security environment and an evolving geopolitical landscape.’ Among the identified challenges are hybrid threats from state and non-state actors, organised crime, and increasing levels of online criminal activity.

One of the key elements of the strategy is the proposed strengthening of Europol’s operational role. The Commission suggests developing Europol into a truly operational police agency to reinforce support to member states, with the capacity to assist in cross-border, large-scale, and complex investigations that present serious risks to the Union’s internal security.

That would bring Europol closer in function to agencies such as the US Federal Bureau of Investigation. The strategy also notes the Commission’s intention to develop roadmaps on ‘lawful and effective access to data for law enforcement’ and encryption.

The strategy aims to ‘identify and assess technological solutions that would enable law enforcement authorities to access encrypted data lawfully, safeguarding cybersecurity and fundamental rights.’ These issues continue to be the subject of technical and legal discussion across jurisdictions.

Other aspects of the strategy address long-standing challenges within the EU’s security framework, including limited situational awareness and coordination at the executive level. The strategy proposes enhancing intelligence-sharing through the EU’s Single Intelligence Analysis Capacity, a mechanism for the voluntary sharing of intelligence by member states, which is currently supported by open-source analysis.

The strategy further emphasised that the effectiveness of any reforms in this area would depend on the commitment of member states, citing ongoing challenges related to differing national priorities and levels of political alignment. In addition, the Commission announced its intention to propose a new Cybersecurity Act, along with new measures to secure cloud and telecom services and to strengthen technological sovereignty.

For more information on these topics, visit diplomacy.edu.

Singapore issues new guidelines to strengthen resilience and security of cloud services and data centres

The Infocomm Media Development Authority (IMDA) has issued new Advisory Guidelines (AGs) intended to support the resilience and security of Cloud Services and Data Centres (DCs) in Singapore. The guidelines set out best practices for Cloud Service Providers (CSPs) and DC operators, aiming to reduce service disruptions and limit their potential impact on economic and social functions.

A wide range of digital services—including online banking, ride-hailing, e-commerce, and digital identity systems—depend on the continued availability of cloud infrastructure and data centre operations. Service interruptions may affect the delivery of these services.

The AGs encourage service providers to adopt measures that improve their ability to recover from outages and maintain operational continuity. The AGs recommend various practices to address risks associated with technical misconfigurations, physical incidents, and cybersecurity threats.

Key proposals include conducting risk and business impact assessments, establishing business continuity arrangements, and strengthening cybersecurity capabilities. For Cloud Services, the guidelines outline seven measures to reinforce security and resilience.

These cover security testing, access controls, data governance, and disaster recovery planning. Concerning Data Centres, the AGs provide a framework for business continuity management to minimise operational disruptions and maintain high service availability.

That involves the implementation of relevant policies, operational controls, and ongoing review processes. The development of the AGs forms part of wider national efforts led by the inter-agency task force on the Resilience and Security of Digital Infrastructure and Services.

These guidelines are intended to complement regulatory initiatives, including planned amendments to the Cybersecurity Act and the introduction of the Digital Infrastructure Act (DIA), which will establish requirements for critical digital infrastructure providers such as major CSPs and DC operators. To inform the guidelines, the IMDA conducted consultations with a broad range of stakeholders, including CSPs, DC operators, and end-user enterprises across sectors such as banking, healthcare, and digital platforms.

The AGs will be updated periodically to reflect technological developments, incident learnings, and further industry input. A coordinated approach is encouraged across the digital services ecosystem. Businesses that provide digital services are advised to assess operational risks and establish appropriate business continuity plans to support service reliability.

The AGs also refer to international standards, including IMDA’s Multi-Tier Cloud Security Standard, the Cloud Security Alliance Cloud Controls Matrix, ISO 27001, and ISO 22301. Providers are encouraged to designate responsible personnel to oversee resilience and security efforts.

These guidelines form part of Singapore’s broader strategy to strengthen its digital infrastructure. The government will continue to engage with sectoral regulators and stakeholders to promote resilience, cybersecurity awareness, and preparedness across industries and society.

As digital systems evolve, sustained attention to infrastructure resilience and security remains essential. The AGs are intended to support organisations in maintaining reliable services while aligning with recognised standards and best practices.

US Cyber Command integrates generative AI for enhanced cybersecurity operations

A senior official at US Cyber Command has stated that the agency has begun employing generative AI tools to significantly reduce the time required to analyse network traffic for potentially malicious activity. Speaking at an event hosted by the Information Technology Industry Council in Washington, D.C., Executive Director Morgan Adamski said Cyber Command is already observing operational benefits from its efforts to integrate AI across various mission areas, particularly in cybersecurity functions.

Cyber Command developed an AI roadmap last year outlining approximately 100 tasks to embed AI into logistics, security operations, and national defence functions. An AI task force within the Cyber National Mission Force conducts 90-day development cycles to test and integrate large language models and other AI technologies into command operations.

The task force is responsible for deploying, evaluating, and assessing the viability of these tools for broader implementation. The agency also examines how AI can be adopted at scale across its cybersecurity enterprise.

General Timothy Haugh, Commander of Cyber Command, noted last year that the task force was created ‘to move us from opportunistic AI application to systematic adoption.’ Through its Constellation initiative—a collaboration with the Defense Advanced Research Projects Agency (DARPA)—Cyber Command is working with private-sector AI firms to accelerate the deployment of new capabilities.

One such tool enables continuous monitoring of the Department of Defense Information Network (DoDIN), which serves over three million users globally each day. Adamski explained that the tool is strategically placed within key segments of the DoDIN where known adversary tactics may appear.

‘We can monitor traffic at those points and have been able to identify previously unseen malicious activity,’ she said. She also highlighted Panoptic Junction, a pilot initiative led by Army Cyber Command that uses AI to monitor network traffic for compliance, threat intelligence, and anomaly detection.

According to Adamski, the project produced results that have prompted considerations for wider adoption across the DoDIN.
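The basic idea behind this kind of traffic monitoring can be illustrated with a minimal sketch. This is not Cyber Command’s actual tooling, and the data and threshold are hypothetical; it simply shows the common baseline-deviation approach to anomaly detection, where samples that stray far from a learned baseline are flagged for review.

```python
# Generic anomaly-detection sketch (NOT Cyber Command's actual tooling):
# flag traffic samples that deviate sharply from a learned baseline.
from statistics import mean, stdev

def flag_anomalies(baseline, samples, threshold=3.0):
    """Return samples more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Hypothetical traffic rates (e.g. requests per second) at one network segment.
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
print(flag_anomalies(baseline, [101, 250, 99]))  # → [250]
```

Production systems replace the simple statistical baseline with learned models, but the principle of comparing live traffic against expected behaviour is the same.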

OpenAI backs Adaptive Security in the battle against AI threats

AI-driven cyber threats are on the rise, making it easier than ever for hackers to deceive employees through deepfake scams and phishing attacks.

OpenAI, a leader in generative AI, has recognised the growing risk and made its first cybersecurity investment in New York-based startup Adaptive Security. The company has secured $43 million in Series A funding, co-led by OpenAI’s startup fund and Andreessen Horowitz.

Adaptive Security helps companies prepare for AI-driven cyberattacks by simulating deepfake calls, texts, and emails. Employees may receive a phone call that sounds like their CTO, asking for sensitive information, but in reality, it is an AI-generated test.

The platform identifies weak points in a company’s security and trains staff to recognise potential threats. Social engineering scams, which trick employees into revealing sensitive data, have already led to massive financial losses, such as the $600 million Axie Infinity hack in 2022.

CEO Brian Long, a seasoned entrepreneur, says the funding will go towards hiring engineers and improving the platform to keep pace with evolving AI threats.

The investment comes amid a surge in cybersecurity funding, with companies like Cyberhaven, Snyk, and GetReal also securing major investments.

As cyber risks become more advanced, Long advises employees to take simple precautions, including deleting voicemails to prevent hackers from cloning their voices.

National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force to address AI crime within the next five years.

Despite AI-enabled crime being in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such an initiative would involve using AI tools to combat AI-enabled crimes effectively, as fraudsters and criminals exploit AI’s potential to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

Microsoft rethinks AI data centre strategy amid market shifts

Microsoft has reportedly scaled back or delayed several major data centre projects, just three months after announcing plans to invest $80 billion in AI infrastructure through the current fiscal year.

According to Bloomberg, the company has paused developments in multiple locations, including Australia, Indonesia, the United Kingdom, and US states such as Illinois, North Dakota, and Wisconsin.

Instead of denying the report, Microsoft confirmed adjustments to its plans, citing the need for long-term flexibility. A spokesperson said the company continuously reviews future infrastructure needs to ensure alignment with growing AI demand, adding that the changes reflect Microsoft’s adaptable strategy.

The halted projects include negotiations for high-performance AI chip facilities in the UK and a site near Chicago, along with construction delays in Jakarta and Wisconsin.

These moves come amid growing scrutiny over whether the AI sector is entering a bubble, especially as emerging models challenge the assumption that vast computing power is always necessary for innovation.

Instead of sticking to high-cost development, Microsoft may be responding to a new trend: efficient, lower-cost AI models from Chinese firms that rival those of Western tech giants.

With AI development costs dropping and access expanding, Microsoft’s strategic pause could reflect a shift towards a more sustainable and agile future in AI infrastructure.

Law firm investigates potential fraud in Libra meme coin launch

The Treanor Law Firm is investigating potential fraud, market manipulation, and racketeering connected to the controversial launch of the Libra meme coin (LIBRA).

The token, which was heavily promoted by Argentine President Javier Milei, quickly soared to a market cap of $1.17 billion. It crashed 97% after Milei distanced himself from the project. The firm is seeking victims to support a potential lawsuit against those behind the token’s creation and promotion.

The Libra token was marketed as a project designed to boost the Argentine economy and fund small businesses. However, its rapid collapse has raised questions about the validity of the claims made to investors.

The Treanor Law Firm’s investigation is focused on whether investors were misled during the sale and whether market manipulation occurred. Over 75,000 wallets have reportedly lost money, with total losses exceeding $280 million.

In addition to investigating fraud and market manipulation, the firm is considering whether racketeering violations are involved. If racketeering is proven, victims could be entitled to triple damages.

India among few developing nations with strong AI investment

India and China were the only developing nations to attract notable private investment in AI in 2023, according to the UN’s Technology and Innovation Report 2025. The US dominated the field with $67 billion in AI investment, accounting for 70 per cent of the global total.

China followed with $7.8 billion, while India ranked tenth worldwide with $1.4 billion. Far from being evenly distributed, access to AI infrastructure and research remains heavily concentrated in a handful of countries, mainly the US and China.

India’s rise in the AI space stems from policy-driven innovation and education rather than organic growth alone. It climbed to 36th place out of 170 on the UNCTAD Frontier Technologies Readiness Index in 2024, improving from 48th in 2022.

Rather than focusing on economic size alone, the index measures readiness through ICT availability, skills, R&D, industrial capacity, and financing. India performed well in R&D and industrial capacity but fell behind in ICT access and skill development.

India has supported its AI ecosystem through collaboration between the government, academia, and the private sector. The country hosts a large developer base, around 13 million, and contributes actively to generative AI projects on platforms like GitHub.

Programmes such as the India AI Mission aim to extend AI education and innovation to smaller cities rather than confining progress to major urban centres. Institutes like IIT Hyderabad and IIT Kharagpur were named among the country’s key centres of AI excellence.

Still, India faces challenges in expanding its AI capabilities across all sectors. To prevent AI from widening inequalities, the report urges investment in workforce reskilling and inclusion. While AI can boost productivity, it may also displace jobs unless paired with supportive policies.

The technology, if harnessed wisely, could create new industries and strengthen employment rather than replace it.

Australia’s largest pension funds face coordinated cyber attacks

Several of Australia’s largest pension funds have recently been targeted by suspected coordinated cyberattacks, exposing sensitive personal data and leading to confirmed financial losses in some cases.

AustralianSuper, the country’s biggest fund, confirmed that hackers used stolen passwords to access around 600 accounts, resulting in a reported A$500,000 loss from four members.

Rest Super, which manages A$93 billion for two million members, reported unauthorised access affecting about 8,000 accounts.

The fund quickly shut down its member portal and launched an investigation, stating that while personal information was accessed, no money was taken during the incident.

Other major superannuation providers, including Hostplus, Australian Retirement Trust (ART), and Insignia Financial, also reported suspicious activity.

ART detected login attempts across hundreds of accounts but confirmed no theft, while Insignia acknowledged attempted breaches with no reported losses.

Authorities believe the attacks were primarily conducted using ‘credential stuffing,’ a method where attackers use passwords leaked in unrelated breaches to access other platforms.

The incidents highlight the continued risks of weak password reuse, as well as the importance of additional protections such as two-factor authentication.
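The mechanics of credential stuffing are simple to illustrate. The following sketch uses entirely hypothetical accounts and passwords; it shows why reusing a password leaked in one breach can compromise an account on an unrelated service, and why unique passwords (and a second factor) defeat the attack.

```python
# Illustrative sketch (hypothetical data): why credential stuffing works.
# An attacker replays username/password pairs leaked from one service
# against another service where some users reused the same password.

leaked_credentials = [                      # pairs dumped in an unrelated breach
    ("alice@example.com", "Summer2023!"),
    ("bob@example.com", "hunter2"),
    ("carol@example.com", "correct-horse"),
]

# Accounts on the target platform; only Alice reused her breached password.
target_accounts = {
    "alice@example.com": "Summer2023!",
    "bob@example.com": "a-unique-passphrase",
}

def stuffing_attempts(leaked, accounts):
    """Return the accounts an attacker could open by replaying leaked pairs."""
    return [user for user, password in leaked
            if accounts.get(user) == password]

print(stuffing_attempts(leaked_credentials, target_accounts))
# → ['alice@example.com']  (the reused password is the only one that succeeds)
```

In practice, attackers automate millions of such attempts, which is why rate limiting, breach-password screening, and two-factor authentication are the standard countermeasures.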

In response to the breaches, the National Cyber Security Coordinator of Australia, Michelle McGuinness, has initiated a government-wide review in cooperation with regulators and industry representatives.

Prime Minister Anthony Albanese addressed the attacks, reaffirming his administration’s commitment to strengthening cybersecurity defences.

Superannuation funds are contacting affected members and urging all users to update their credentials and take additional precautions.

UK’s Royal Mail investigates major data breach

Royal Mail is investigating a significant cybersecurity incident after a hacker known as ‘GHNA’ claimed to have leaked 144GB of sensitive customer data. The files were allegedly obtained through Spectos, a third-party analytics provider, and posted on the BreachForums platform. While the leaked information includes names, addresses, parcel data, and internal recordings, Royal Mail stated that its delivery services remain unaffected.

Spectos confirmed a breach on 29 March, explaining that the attack stemmed from a 2021 malware infection that compromised an employee’s credentials. Cybersecurity firm Hudson Rock linked the same login data to another recent attack involving Samsung. The exposed dataset includes thousands of files containing mailing lists from Mailchimp, Zoom meetings, logistics details, and a WordPress database, raising concerns about the security of Royal Mail’s extended network.

The breach is the latest in a series of cyber incidents targeting the UK’s Royal Mail, following a 2023 ransomware attack that halted international shipping and a 2022 outage in its tracking systems. While the full extent of the latest leak remains under investigation, experts warn that prolonged access to internal systems may have occurred before the data was released. No public notification procedures have yet been confirmed.
