Cybercriminals are exploiting Australia’s national cybercrime reporting platform, ReportCyber, to trick people into handing over cryptocurrency. The AFP-led Joint Policing Cybercrime Coordination Centre (JPC3) warns scammers are posing as police and using stolen data to file fake reports.
In one recent case, a victim was contacted by someone posing as an AFP officer and informed that their details had been found in a data breach linked to cryptocurrency. The impersonator provided an official reference number, which appeared genuine when checked on the ReportCyber portal.
A second caller, pretending to be from a crypto platform, then urged the target to transfer funds to a so-called ‘Cold Storage’ account. The victim realised the deception and ended the call before losing money.
Detective Superintendent Marie Andersson said the scam’s sophistication lay in its false sense of legitimacy and urgency. Criminals verify personal data and act quickly to pressure victims, she explained. However, growing awareness within the community has helped authorities detect such scams sooner.
Authorities are reminding the public that legitimate officers will never request access to wallets, bank accounts, or seed phrases. Australians should remain cautious, verify unexpected calls, and report any suspicious activity through official channels.
The AFP reaffirmed that ReportCyber remains a safe platform for genuine reports and continues to be a vital tool in tracking and preventing cybercrime nationwide.
Global insurance leader Chubb has launched a new AI-driven embedded insurance optimisation engine within its Chubb Studio platform during the Singapore FinTech Festival. The announcement marks a significant step in enabling digital distribution partners to offer personalised insurance products more effectively.
The engine uses proprietary AI to analyse customer data, identify personas, recommend relevant insurance products (such as phone damage, travel insurance, hospital cash or life cover) at the point of sale, and deliver click-to-engage options for higher-value products.
Integration models range from Chubb-managed to partner-managed or hybrid, giving flexibility in how partners embed the solution.
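To make the described flow concrete, here is a purely illustrative Python sketch of persona-based recommendation at the point of sale. Chubb's engine is proprietary, so every name, rule and product mapping below is a hypothetical stand-in rather than anything from Chubb Studio.

```python
# Hypothetical sketch of an embedded-insurance recommendation flow:
# customer data -> persona -> products offered at the point of sale.
from dataclasses import dataclass

@dataclass
class Customer:
    age: int
    recent_purchases: list[str]
    travels_frequently: bool

def identify_persona(c: Customer) -> str:
    # Hand-written rules stand in for the proprietary persona models
    # a production engine would actually use.
    if c.travels_frequently:
        return "frequent_traveller"
    if "smartphone" in c.recent_purchases:
        return "device_buyer"
    return "general"

# Map each persona to products surfaced during checkout.
PRODUCTS = {
    "frequent_traveller": ["travel insurance", "hospital cash"],
    "device_buyer": ["phone damage cover"],
    "general": ["life cover"],
}

def recommend(c: Customer) -> list[str]:
    return PRODUCTS[identify_persona(c)]

print(recommend(Customer(age=34, recent_purchases=["smartphone"], travels_frequently=False)))
# -> ['phone damage cover']
```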
From a digital-economy and policy perspective, this development highlights how insurance firms are leveraging AI to personalise customer journeys and integrate insurance seamlessly into consumer platforms and apps.
The shift raises essential questions about data utilisation, transparency of recommendation engines and how insurers strike the balance between innovation and consumer protection.
In a move that underscores the evolving balance between capability and privacy in AI, Google today introduced Private AI Compute. This new cloud-based processing platform supports its most advanced models, such as those in the Gemini family, while maintaining what it describes as on-device-level data security.
In its announcement, Google explains that many emerging AI tasks now exceed the capabilities of on-device hardware alone. To solve this, it built Private AI Compute to offload heavy computation to its cloud, powered by custom Tensor Processing Units (TPUs) and wrapped in a fortified enclave environment called Titanium Intelligence Enclaves (TIE).
The system uses remote attestation, encryption and IP-blinding relays to ensure user data remains private and inaccessible, even to Google itself.
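As a conceptual sketch of that flow, the device would verify the enclave's attestation before any data leaves it. Google has not published a public client API for Private AI Compute, so every name below is hypothetical.

```python
# Conceptual sketch: verify the server enclave's attestation, then
# (and only then) send encrypted data. All names are hypothetical.

# Code measurements (hashes) of enclave builds the client trusts.
TRUSTED_MEASUREMENTS = {"a3b1c2d4"}

def verify_attestation(quote: dict) -> bool:
    # A real verifier would validate a signature chain back to the
    # hardware root of trust; this sketch only compares the reported
    # code measurement against a trusted allow-list.
    return quote.get("measurement") in TRUSTED_MEASUREMENTS

def submit(quote: dict, plaintext: bytes) -> bytes:
    if not verify_attestation(quote):
        raise RuntimeError("attestation failed; data stays on device")
    # In the described design the payload would now be encrypted to the
    # attested enclave's key and routed through an IP-blinding relay,
    # so the server never sees both the data and the client's address.
    return b"<encrypted payload>"  # stand-in for real encryption

print(submit({"measurement": "a3b1c2d4"}, b"query"))
```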
Google identifies initial use cases on its Pixel devices: features such as Magic Cue and Recorder will benefit from the extra compute, enabling more timely suggestions, multilingual summarisation and advanced context-aware assistance.
At the same time, the company says this platform ‘opens up a new set of possibilities for helpful AI experiences’ that go beyond what on-device AI alone can fully achieve.
This announcement is significant from both a digital policy and platform economy perspective. It illustrates how major technology firms are reconciling user privacy demands with the computational intensity of next-generation AI.
For organisations and governments focused on AI governance and digital diplomacy, the move raises questions about data sovereignty, transparency of remote enclaves and the true nature of 'secure' cloud processing.
A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.
The authors argue that ChatGPT's summaries of copyrighted works, including George R.R. Martin's 'A Game of Thrones', unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.
The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.
It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.
The case follows a $1.5bn settlement agreed by Anthropic earlier this year over its use of pirated books to train its models, and comes amid growing scrutiny of AI firms.
In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.
The European Commission is preparing a Digital Package on simplification for 19 November. A leaked draft outlines instruments covering reforms to the GDPR, the ePrivacy rules, the Data Act and the AI Act.
Plans include a single breach portal and a higher reporting threshold: authorities would be notified within 96 hours, up from the current 72, with standardised forms and narrower triggers. Controllers could also reject, or charge for, data subject access requests used to pursue disputes.
Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require popups, while GDPR lawful bases expand. News publishers could receive limited exemptions recognising reliance on advertising revenues.
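The draft does not specify a mechanism, but the existing Global Privacy Control header (Sec-GPC) shows what honouring a browser-level signal can look like in practice. This minimal sketch uses it as a stand-in; the actual signal the package would mandate remains to be defined.

```python
# Minimal sketch: a server that respects the browser's Sec-GPC
# opt-out signal instead of showing a consent popup.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Global Privacy Control: the browser sends "Sec-GPC: 1" when
        # the user has opted out of tracking.
        opted_out = self.headers.get("Sec-GPC") == "1"
        body = b"tracking disabled" if opted_out else b"tracking allowed"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8080), Handler).serve_forever()  # uncomment to run
```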
The draft recognises legitimate interest as a lawful basis for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn the proposals dilute safeguards and may soften the AI Act.
Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.
Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.
Built into Samsung's 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple third-party AI agents, including Microsoft Copilot and Perplexity.
With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.
It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.
By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.
The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.
Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.
Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.
The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.
Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.
By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.
In a recent statement, the UN warned that the growing field of neuro-technology, which encompasses devices and software that can measure, access, or manipulate the nervous system, poses new risks to human rights.
It noted that such technologies could challenge fundamental concepts like 'mental integrity', autonomy and personal identity by enabling unprecedented access to brain data.
It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.
The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.
Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.
PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.
Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.
Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.
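As one rough illustration of watching for model-API abuse, a defender might scan egress logs for LLM API endpoints contacted by unexpected processes. The hostnames below are real public endpoints; the log format and allow-list are hypothetical and site-specific.

```python
# Sketch of a behaviour-based control: flag processes calling LLM API
# hosts that have no business doing so.
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}
ALLOWED_PROCESSES = {"chatbot-service"}  # processes expected to call these APIs

def flag_suspicious(log_lines: list[str]) -> list[str]:
    alerts = []
    for line in log_lines:
        # Hypothetical "process -> host" log format; adapt to your proxy.
        process, _, host = line.partition(" -> ")
        if host in LLM_API_HOSTS and process not in ALLOWED_PROCESSES:
            alerts.append(f"unexpected LLM API call: {line}")
    return alerts

print(flag_suspicious(["update.exe -> generativelanguage.googleapis.com"]))
```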
The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.
Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.
Advanced techniques include AI-generated social media profiles, adverts and 'deepfake' videos impersonating public figures to promote fake investment platforms.
Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.
The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.