Police warn of scammers posing as AFP officers in crypto fraud

Cybercriminals are exploiting Australia’s national cybercrime reporting platform, ReportCyber, to trick people into handing over cryptocurrency. The AFP-led Joint Policing Cybercrime Coordination Centre (JPC3) warns scammers are posing as police and using stolen data to file fake reports.

In one recent case, a victim was contacted by someone posing as an AFP officer and informed that their details had been found in a data breach linked to cryptocurrency. The impersonator provided an official reference number, which appeared genuine when checked on the ReportCyber portal.

A second caller, pretending to be from a crypto platform, then urged the target to transfer funds to a so-called ‘Cold Storage’ account. The victim realised the deception and ended the call before losing money.

Detective Superintendent Marie Andersson said the scam’s sophistication lay in its false sense of legitimacy and urgency. Criminals verify personal data and act quickly to pressure victims, she explained. However, growing awareness within the community has helped authorities detect such scams sooner.

Authorities are reminding the public that legitimate officers will never request access to wallets, bank accounts, or seed phrases. Australians should remain cautious, verify unexpected calls, and report any suspicious activity through official channels.

The AFP reaffirmed that ReportCyber remains a safe platform for genuine reports and continues to be a vital tool in tracking and preventing cybercrime nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems. In contrast, Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

IMY investigates major ransomware attack on Swedish IT supplier

Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.

Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.

The IMY said the investigation will examine Miljödata’s data protection measures and the response of several affected public bodies, such as Gothenburg, Älmhult, and Västmanland. The regulator’s goal is to identify security shortcomings for future cyber threats.

Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Central Bank warns of new financial scams in Ireland

The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.

Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.

Advanced techniques include AI-generated social media profiles and ads, or ‘deepfakes’, impersonating public figures to promote fake investment platforms.

Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.

The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Inside the rise and fall of a cybercrime kingpin

Ukrainian hacker Vyacheslav Penchukov, once known online as ‘Tank’, climbed from gaming forums in Donetsk to the top of the global cybercrime scene. As leader of the notorious Jabber Zeus and later Evil Corp affiliates, he helped steal tens of millions from banks, charities and businesses around the world while remaining on the FBI Most Wanted list for nearly a decade.

After years on the run, he was dramatically arrested in Switzerland in 2022 and is now serving time in a Colorado prison. In a rare interview, Penchukov revealed how cybercrime evolved from simple bank theft to organised ransomware targeting hospitals and major corporations. He admits paranoia became his constant companion, as betrayal within hacker circles led to his downfall.

Today, the former cyber kingpin spends his sentence studying languages and reflecting on the empire he built and lost. While he shows little remorse for his victims, his story offers a rare glimpse into the hidden networks that fuel global hacking and the blurred line between ambition and destruction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Washington Post confirms hit in Oracle-linked Cl0p hacking spree

The Washington Post said it was affected by a wider breach tied to Oracle’s E-Business Suite, joining a growing list of victims. The vulnerability was reportedly exploited by the Cl0p ransomware gang, which demands payment from victims in exchange for not leaking stolen files.

Oracle, a major enterprise software provider, disclosed in October that a zero-day flaw in its E-Business Suite had been exploited over the summer. Google also warned that Oracle systems were being targeted in what appeared to be a broader wave of data theft attempts. An initial emergency patch on 2 October failed, and a second critical fix on 11 October left customers exposed for days.

Cl0p’s campaign has already hit high-profile targets including Harvard University, Envoy Air, DXC Technology and Chicago Public Schools. The group, active since at least 2019, previously abused MOVEit, GoAnywhere and Cleo file-transfer tools.

Would you like to learn more aboutAI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Bank Indonesia reports over 370 million cyber threat attempts in 2024

Bank Indonesia (BI) has reported more than 370 million attempted cyber threats targeting the country, highlighting the growing exposure linked to Indonesia’s rapid digital transformation.

The central bank also noted a 25% increase in anomalous cyber traffic in 2024 compared to the previous year. Deputy Governor Filianingsih Hendarta stated that the rise in cyber activity underscores the need for all stakeholders to remain vigilant as Indonesia continues to develop its digital infrastructure.

She also added that public trust is essential to sustaining a resilient digital ecosystem, as trust takes a long time to build and can be lost in to moment.

To strengthen cybersecurity and prepare for continued digitalisation, BI has developed the Indonesian Payment System Blueprint (BSPI) 2030, a strategic framework intended to enhance institutional collaboration and reinforce the security of the national payment system.

BI data shows that internet penetration in Indonesia has reached 80.66%, equivalent to approximately 229 million people, surpassing the global average of 68.7% (around 6.66 billion people worldwide).

Filianingsih also emphasised that strengthening digital infrastructure requires cross-sectoral and international cooperation, given the global and rapidly evolving nature of cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New law aims to make the internet safer in Singapore

Singapore’s Parliament has passed the Online Safety (Relief and Accountability) Bill, a landmark law designed to provide faster protection and redress for victims of online harm. After over eight hours of debate, MPs approved the Bill, which will establish the Online Safety Commission (OSC) by June 2026, a one-stop agency empowered to direct online platforms, group administrators, and internet service providers to remove harmful content or restrict the accounts of perpetrators.

The move follows findings that social media platforms often take five days or more to act on harmful content reports, leaving victims exposed to harassment and abuse.

The new law introduces civil remedies and enforcement powers for a wide range of online harms, including harassment, doxing, stalking, intimate image abuse, and child exploitation. Victims can seek compensation for lost income or force perpetrators to surrender profits gained from harmful acts.

In severe cases, individuals or entities that ignore OSC orders may face fines of up to S$500,000, and daily penalties may be applied until compliance is achieved. The OSC can also order access blocks or app removals for persistent offenders.

Ministers Josephine Teo, Rahayu Mahzam, and Edwin Tong emphasised that the Bill aims to empower victims rather than punish expression, while ensuring privacy safeguards. Victims will be able to request the disclosure of a perpetrator’s identity to pursue civil claims, though misuse of such data, such as doxing in retaliation, will be an offence. The law also introduces a ‘no wrong door’ approach, ensuring that victims will not have to navigate multiple agencies to seek help.

Singapore joins a small group of nations, such as Australia, that have created specialised agencies for digital safety. The government hopes the OSC will help rebuild trust in online spaces and establish new norms for digital behaviour.

As Minister Teo noted, ‘Our collective well-being is compromised when those who are harmed are denied restitution. By fostering trust in online spaces, Singaporeans can participate safely and confidently in our digital society.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!