Indonesia’s cyber push faces capacity challenges in the provinces

Indonesia is decentralising its approach to cybersecurity, launching eight regional Cyber Crime Directorates within provincial police forces in September 2024.

These directorates, located in areas including Jakarta, East Java, Bali, and Papua, aim to boost local responses to increasingly complex cyber threats—from data breaches and financial fraud to hacktivism and disinformation.

The move marks a shift from Jakarta-led cybersecurity efforts toward a more distributed model, aligning with Indonesia’s broader decentralisation goals. It reflects the state’s recognition that digital threats are not only national in scope, but deeply rooted in local contexts.

However, experts warn that regionalising cyber governance comes with significant challenges. Provincial police commands often lack specialised personnel, digital forensics capabilities, and adaptive institutional structures.

Many still rely on rotations from central agencies or basic training programs—insufficient for dealing with fast-moving and technically advanced cyberattacks.

Moreover, the culture of rigid hierarchy and limited cross-agency collaboration may further hinder rapid response and innovation at the local level. Without reforms to increase flexibility, autonomy, and inter-agency cooperation, these new directorates risk becoming symbolic rather than operationally impactful.

The inclusion of provinces like Central Sulawesi and Papua also reveals a political dimension. These regions are historically security-sensitive, and the presence of cyber directorates could serve both policing and state surveillance functions, raising concerns over the balance between security and civil liberties.

To be effective, the initiative requires more than administrative expansion. It demands sustained investment in talent development, modern infrastructure, and trusted partnerships with local stakeholders—including the private sector and academia.

If these issues are not addressed, Indonesia’s push to regionalise cybersecurity may reinforce old hierarchies rather than build meaningful local capacity. Stronger, smarter institutions—not just new offices—will determine whether Indonesia can secure its digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Graphite spyware used against European reporters, experts warn

A new surveillance scandal has emerged in Europe as forensic evidence confirms that the Israeli spyware firm Paragon used its Graphite tool to target journalists through zero-click attacks on iOS devices. The attacks, which required no user interaction, exposed sensitive communications and location data.

Citizen Lab and reports from Schneier on Security identified the spyware on multiple journalists’ devices on April 29, 2025. The findings mark the first confirmed use of Paragon’s spyware against members of the press, raising alarms over digital privacy and press freedom.

Backed by US investors, Paragon has operated outside of Israel under claims of aiding national security. But its spyware is now at the centre of a widening controversy, particularly in Italy, where the government recently ended its contract with the company after two journalists were targeted.

Experts warn that such attacks undermine the confidentiality crucial to journalism and could erode democratic safeguards. Even Apple’s secure devices proved vulnerable, according to Bleeping Computer, highlighting the advanced nature of Graphite.

The incident has sparked calls for tighter international regulation of spyware firms. Without oversight, critics argue, tools meant for fighting crime risk being used to silence dissent and target civil society.

The Paragon case underscores the urgent need for transparency, accountability, and stronger protections in an age of powerful, invisible surveillance tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart machines, dark intentions: UN urges global action on AI threats

The United Nations has warned that terrorists could seize control of AI-powered vehicles to launch devastating attacks in public spaces. A new report outlines how extremists might exploit autonomous cars and drones to bypass traditional defences.

There are also fears that AI could be used for facial-recognition targeting and for mass ‘swarm’ assaults by aerial devices. Experts suggest that key parts of modern infrastructure could be turned against the public if hacked.

Britain’s updated counter-terrorism strategy now reflects these growing concerns, including the risk of AI-generated propaganda and detailed attack planning. The UN has called for immediate global cooperation to limit how such technologies can be misused.

Security officials maintain that AI also offers valuable tools in the fight against extremism, enabling quicker intelligence processing and real-time threat identification. Nonetheless, authorities have been urged to prepare for worst-case scenarios involving AI-directed violence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New cyberattack method poses major threat to smart grids, study finds

A new study published in ‘Engineering’ highlights a growing cybersecurity threat to smart grids as they become more complex due to increased integration of distributed energy sources.

The research, conducted by Zengji Liu, Mengge Liu, Qi Wang, and Yi Tang, focuses on a sophisticated form of cyberattack known as a false data injection attack (FDIA) that targets data-driven algorithms used in smart grid operations.

As modern power systems adopt technologies like battery storage and solar panels, they rely more heavily on algorithms to manage energy distribution and grid stability. However, these algorithms can be exploited.

The study introduces a novel black-box FDIA method that injects false data directly at the measurement modules of distributed power supplies, using generative adversarial networks (GANs) to produce stealthy attack vectors.

What makes this method particularly dangerous is that it doesn’t require detailed knowledge of the grid’s internal workings, making it more practical and harder to detect in real-world scenarios.

The researchers also proposed a way to estimate controller and filter parameters in distributed energy systems, demonstrating how attackers could obtain the information needed to launch such attacks.

To test the method, the team simulated attacks on the New England 39-bus system, specifically targeting a deep learning model used for transient stability prediction. Results showed a dramatic drop in accuracy—from 98.75% to 56%—after the attack.
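The mechanism behind that accuracy collapse can be illustrated with a toy sketch. The example below is not the paper’s method: the GAN-generated stealthy vectors and the 39-bus deep learning model are replaced, for illustration only, by a fixed bias perturbation and a simple logistic-regression classifier trained on synthetic two-feature measurements. It shows how a small injection at the measurement stage, invisible in each individual reading, can push samples across a learned decision boundary and gut the model’s accuracy.

```python
import math
import random

random.seed(0)

# Synthetic "grid measurements": 2 features (say, voltage and frequency
# deviation). Label 0 = stable operating point, 1 = unstable.
def sample(center, n):
    return [[random.gauss(c, 0.1) for c in center] for _ in range(n)]

X = sample([1.0, 0.0], 400) + sample([0.7, 0.5], 400)
y = [0.0] * 400 + [1.0] * 400

# Train a tiny logistic-regression stability predictor by gradient descent.
w, b = [0.0, 0.0], 0.0
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w[0] * xi[0] + w[1] * xi[1] + b)))
        e = p - yi
        gw[0] += e * xi[0]; gw[1] += e * xi[1]; gb += e
    w[0] -= 0.1 * gw[0] / len(y); w[1] -= 0.1 * gw[1] / len(y)
    b -= 0.1 * gb / len(y)

def accuracy(data):
    hits = 0
    for xi, yi in zip(data, y):
        p = 1.0 / (1.0 + math.exp(-(w[0] * xi[0] + w[1] * xi[1] + b)))
        hits += (p > 0.5) == (yi == 1.0)
    return hits / len(y)

clean_acc = accuracy(X)

# False data injection: a small fixed bias added at the "measurement module",
# directed against the classifier's decision normal. (A crude stand-in for
# the GAN-crafted stealthy attack vectors described in the paper.)
norm = math.hypot(w[0], w[1])
delta = [-0.4 * w[0] / norm, -0.4 * w[1] / norm]
attacked = [[xi[0] + delta[0], xi[1] + delta[1]] for xi in X]
attacked_acc = accuracy(attacked)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"attacked accuracy: {attacked_acc:.3f}")
```

The perturbation is tiny relative to normal measurement noise, yet unstable operating points are now reported as stable, which mirrors the kind of accuracy drop the study observed on the 39-bus system.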

The attack also proved effective across multiple neural network models and on larger grid systems, such as IEEE’s 118-bus and 145-bus networks.

These findings underscore the urgent need for better cybersecurity defenses in the evolving smart grid landscape. As systems grow more complex and reliant on AI-driven management, developing robust protection against FDIA threats will be critical.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan tightens rules on chip shipments to China

Taiwan has officially banned the export of chips and chiplets to China’s Huawei and SMIC, joining the US in tightening restrictions on advanced semiconductor transfers.

The decision follows reports that TSMC, the world’s largest contract chipmaker, was misled into supplying chiplets used in Huawei’s Ascend 910B AI accelerator. The US Commerce Department had reportedly considered a fine of over $1 billion against TSMC for that incident.

Taiwan’s new rules aim to prevent further breaches by requiring export permits for any transactions with Huawei or SMIC.

The distinction between chips and chiplets is key to the case. Traditional chips are built as single-die monoliths using the same process node, while chiplets are modular and can combine various specialised components, such as CPU or AI cores.

Huawei allegedly used shell companies to acquire chiplets from TSMC, bypassing existing US restrictions. If TSMC had known the true customer, it likely would have withheld the order. Taiwan’s new export controls are designed to ensure stricter oversight of future transactions and prevent repeat deceptions.

The broader geopolitical stakes are clear. Taiwan views the transfer of advanced chips to China as a national security threat, given Beijing’s ambitions to reunify with Taiwan and the potential militarisation of high-end semiconductors.

With Huawei claiming its processors are nearly on par with Western chips—though analysts argue they lag two to three generations behind—the export ban could further isolate China’s chipmakers.

Speculation persists that Taiwan’s move was partly influenced by negotiations with the US to avoid the proposed fine on TSMC, bringing both countries into closer alignment on chip sanctions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe’s new digital diplomacy: From principles to power

In a decisive geopolitical shift, the European Union has unveiled its 2025 International Digital Strategy, signalling a turn from a values-first diplomacy to a focus on security and competitiveness. As Jovan Kurbalija explains in his blog post titled ‘EU Digital Diplomacy: Geopolitical shift from focus on values to economic security’, the EU is no longer simply exporting its regulatory ideals — often referred to as the ‘Brussels effect’ — but is now positioning digital technology as central to its economic and geopolitical resilience.

The strategy places special emphasis on building secure digital infrastructure, such as submarine cables and AI factories, and deepening digital partnerships across continents. Unlike the 2023 Council Conclusions, which promoted a human-centric, rights-based approach to digital transformation, the 2025 Strategy prioritises tech sovereignty, resilient supply chains, and strategic defence-linked innovations.

Human rights, privacy, and inclusivity still appear, but mainly in supporting roles to broader goals of power and resilience. The EU’s new path reflects a realpolitik understanding that its survival in the global tech race depends on alliances, capability-building, and a nimble response to the rapid evolution of AI and cyber threats.

In practice, this means more digital engagement with key partners like India, Japan, and South Korea, and coordinated global investments through the ‘Tech Team Europe’ initiative. The strategy introduces new structures like a Digital Partnership Network while downplaying once-central instruments like the AI Act.

With China largely sidelined and relations with the US in ‘wait and see’ mode, the EU seems intent on building an independent but interconnected digital path, reaching out to the Global South with a pragmatic offer of secure digital infrastructure and public-private investments.

Why does it matter?

Yet, major questions linger: how will these ambitious plans be implemented, who will lead them, and can the EU maintain coherence between its internal democratic values and this outward-facing strategic assertiveness? As Kurbalija notes, the success of this new digital doctrine will hinge on whether the EU can fuse its soft power legacy with the hard power realities of a turbulent tech-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI cracks down on misuse of ChatGPT by foreign threat actors

OpenAI has shut down a network of ChatGPT accounts allegedly linked to nation-state actors from Russia, China, Iran, North Korea, and others after uncovering their use in cyber and influence operations.

The banned accounts were used to assist in developing malware, automate social media content, and conduct reconnaissance on sensitive technologies.

According to OpenAI’s latest threat report, a Russian-speaking group used the chatbot to iteratively improve malware code written in Go. Each account was used only once to refine the code before being abandoned, a tactic highlighting the group’s emphasis on operational security.

The malicious software was later disguised as a legitimate gaming tool and distributed online, infecting victims’ devices to exfiltrate sensitive data and establish long-term access.

Chinese-linked groups, including APT5 and APT15, were found using OpenAI’s models for a range of technical tasks—from researching satellite communications to developing scripts for Android app automation and penetration testing.

Other accounts were linked to influence campaigns that generated propaganda or polarising content in multiple languages, including efforts to pose as journalists and simulate public discourse around elections and geopolitical events.

The banned activities also included scams, social engineering, and politically motivated disinformation. OpenAI stressed that although some misuse was detected, none involved sophisticated or large-scale attacks enabled solely by its tools.

The company said it is continuing to improve detection and mitigation efforts to prevent abuse of its models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns BADBOX 2.0 malware is infecting millions

The FBI has issued a warning about the resurgence of BADBOX 2.0, a dangerous form of malware infecting millions of consumer electronics globally.

Often preloaded onto low-cost smart TVs, streaming boxes, and IoT devices, mostly manufactured in China, the malware grants cybercriminals backdoor access, enabling theft, surveillance, and fraud while remaining essentially undetectable.

BADBOX 2.0 forms part of a massive botnet and can also infect devices through malicious apps and drive-by downloads, especially from unofficial Android stores.

Once activated, the malware enables a range of attacks, including click fraud, fake account creation, DDoS attacks, and the theft of one-time passwords and personal data.

Removing the malware is extremely difficult, as it typically requires flashing new firmware, an option unavailable for most of the affected devices.

Users are urged to check their hardware against a published list of compromised models and to avoid sideloading apps or purchasing unverified connected tech.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe gets new cybersecurity support from Microsoft

Microsoft has launched a free cybersecurity initiative for European governments aimed at countering increasingly sophisticated cyber threats powered by AI. Company President Brad Smith said Europe would benefit from tools already developed and deployed in the US.

The programme is designed to identify and disrupt AI-driven threats, including deepfakes and disinformation campaigns, which have previously been used to target elections and undermine public trust.

Smith acknowledged that AI is a double-edged sword: malicious actors exploit it for attacks, while defenders increasingly use it to stay ahead. Microsoft continues to monitor how its AI products are used, blocking known cybercriminals and working to ensure AI serves as a stronger shield than a weapon.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!