US court system suffers sweeping cyber intrusion

A sweeping cyberattack has compromised the federal court filing system across multiple US states, exposing sensitive case data and informant identities. The breach affects core systems used by legal professionals and the public.

Sources say the Administrative Office of the US Courts first realised the scale of the hack in early July, with authorities still assessing the damage. Nation-state-linked actors or organised crime are suspected.

Critical systems, including CM/ECF (Case Management/Electronic Case Files) and PACER (Public Access to Court Electronic Records), were affected, raising fears that sealed indictments, search warrants and cooperation records have been exposed. A dozen dockets were reportedly tampered with in at least one district.

Calls to modernise the ageing court infrastructure have intensified, with officials warning of rising cyber threats and the urgent need for system replacements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands reach with models now accessible on AWS platforms

Amazon Web Services (AWS) now offers access to OpenAI’s gpt‑oss‑120b and gpt‑oss‑20b models through both Amazon Bedrock and SageMaker JumpStart. Bedrock’s unified API lets developers experiment and switch models without rewriting code, while SageMaker offers fine‑tuning, deployment pipelines, and robust enterprise controls.

AWS CEO Matt Garman celebrated the partnership as a ‘powerhouse combination’, saying the models outperform comparable options: AWS claims they are three times more price-efficient than Gemini and five times more than DeepSeek‑R1 when deployed via Bedrock.

Rich functionality comes with these models: wide context capacity, chain-of-thought transparency, adjustable reasoning levels, and compatibility with agentic workflows. Bedrock offers secure deployment with Guardrails support, while SageMaker enables experimentation across AWS regions.

Financial markets took notice. Amazon shares rose after the announcement, as analysts viewed the pairing with OpenAI’s open models as a meaningful step toward boosting AWS’s AI offerings amid fierce cloud rivalry.

Google adds clever email safety feature

Thanks to a new feature that shows verified brand logos, Gmail users will now find it easier to spot phishing emails. The update uses BIMI, a standard that allows trusted companies to display official logos next to their messages.

To qualify, brands must secure their domain with DMARC and have their logos verified by authorities such as Entrust or DigiCert. Once approved, they receive a Verified Mark Certificate, linking their logo to their domain.
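Under the hood, a BIMI record is a short DNS TXT entry at `default._bimi.<domain>` containing tag=value pairs: `v` for the version, `l` for the URL of the brand’s SVG logo, and `a` for the Verified Mark Certificate. The sketch below, in pure Python, parses such a record into its tags; the record string and URLs are hypothetical examples, not any real brand’s configuration.

```python
# Minimal sketch: splitting a BIMI DNS TXT record into tag/value pairs.
# Real records live at default._bimi.<domain>; the values here are made up.

def parse_bimi_record(record: str) -> dict:
    """Parse a BIMI TXT record like 'v=BIMI1; l=...; a=...' into a dict."""
    pairs = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue  # tolerate a trailing semicolon
        tag, _, value = part.partition("=")
        pairs[tag.strip()] = value.strip()
    return pairs

record = "v=BIMI1; l=https://example.com/logo.svg; a=https://example.com/vmc.pem"
tags = parse_bimi_record(record)
print(tags["v"])  # BIMI1
print(tags["l"])  # URL of the brand's SVG logo
```

Mail providers fetch this record for the sending domain, and only display the logo once DMARC passes and the certificate in `a` checks out.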

The feature helps users quickly distinguish between genuine emails and fraudulent ones. Early adopters include Bank of America in the US, whose logo now appears directly in inboxes.

Google’s move is expected to drive broader adoption, with services like MailChimp and Verizon Media already supporting the system. The change could significantly reduce phishing risks for Gmail’s vast user base.

WhatsApp shuts down 6.8 million scam accounts

As part of its anti-scam efforts, WhatsApp has removed 6.8 million accounts linked to fraudulent activity, according to its parent company, Meta.

The crackdown follows the discovery that organised criminal groups are operating scam centres across Southeast Asia, hacking WhatsApp accounts or adding users to group chats to lure victims into fake investment schemes and other types of fraud.

In one case, WhatsApp, Meta, and OpenAI collaborated to disrupt a Cambodian cybercrime group that used ChatGPT to generate fake instructions for a rent-a-scooter pyramid scheme.

Victims were enticed with offers of cash for social media engagement before being moved to private chats and pressured to make upfront payments via cryptocurrency platforms.

Meta warned that these scams often stem from well-organised networks in Southeast Asia, some exploiting forced labour. Authorities continue to urge the public to remain vigilant, enable features such as WhatsApp’s two-step verification, and be wary of suspicious or unsolicited messages.

These scams have also drawn political attention in the USA: US Senator Maggie Hassan has urged SpaceX CEO Elon Musk to act against transnational criminal groups in Southeast Asia that use Starlink satellite internet to run massive online fraud schemes targeting Americans.

Despite SpaceX’s policies allowing service termination for fraud, Starlink remains active in regions where these scams, often linked to forced labour and human trafficking, operate.

Venice Film Festival hit by data breach

The Venice Film Festival has confirmed that a cyberattack compromised the personal data of accredited attendees, including journalists and industry members. The breach affected names, contact details, and tax information.

The attackers accessed the festival’s servers on 7 July, copying and storing documents. Festival organisers responded by isolating the affected systems and informing the authorities.

Those affected received a formal notification and are encouraged to contact the event’s data protection officer for support or updates.

Despite the breach, the 82nd edition of the festival will proceed as scheduled from 27 August to 9 September in Italy.

Android spyware posing as antivirus

LunaSpy is a new Android spyware campaign disguised as an antivirus or banking protection app. It spreads via messenger links and fake channels, tricking users into installing what appears to be a helpful security tool.

Once installed, the app mimics a real scanner, shows fake threat detections and operates unnoticed. In reality, it monitors everything on the device and sends sensitive data to attackers.

Active since at least February 2025, LunaSpy spreads through hijacked contacts’ accounts and Telegram channels. It poses as legitimate software to build trust before beginning surveillance.

Android users must avoid apps from unofficial links, scrutinise messenger invites, and only install from trusted stores. Reliable antivirus software and cautious permission granting provide essential defence.

Malaysia tackles online scams with AI and new cyber guidelines

Cybercrime involving financial scams continues to rise in Malaysia, with 35,368 cases reported in 2024, a 2.53 per cent increase from the previous year, resulting in losses of RM1.58 billion.

The situation remains severe in 2025, with over 12,000 online scam cases recorded in the first quarter alone, involving fake e-commerce offers, bogus loans, and non-existent investment platforms. Losses during this period reached RM573.7 million.

Instead of waiting for the situation to worsen, the Digital Ministry is rolling out proactive safeguards. These include new AI-related guidelines under development by the Department of Personal Data Protection, scheduled for release by March 2026.

The documents will cover data protection impact assessments, automated decision-making, and privacy-by-design principles.

The ministry has also introduced an official framework for responsible AI use in the public sector, called GPAISA, to ensure ethical compliance and support across government agencies.

Additionally, training initiatives such as AI Untuk Rakyat and MD Workforce aim to equip civil servants and enforcement teams with skills to handle AI and cyber threats.

In partnership with CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, the ministry is also creating an AI-powered application to verify digital images and videos.

Instead of relying solely on manual analysis, the tool will help investigators detect online fraud, identity forgery, and synthetic media more effectively.

Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

One infamous case saw an AI-powered chatbot suggest drug use to a fictional recovering addict, a warning signal, experts say, of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.

ChatGPT checkout could sideline major platforms

OpenAI is preparing to add a payment system into ChatGPT, allowing users to complete purchases without ever leaving the chatbot. Retail leaders are calling it a turning point in e-commerce, as it may significantly simplify how people shop online.

The company is expected to take a cut of transactions and work with platforms such as Shopify to streamline operations. With over 77 million users, ChatGPT has the reach to become a dominant shopping tool, potentially bypassing platforms like Amazon.

Executives worry visibility could depend on revenue-sharing, forcing brands to pay for prominence in the chatbot. Some fear this pay-to-play model could leave smaller retailers behind and limit consumer choice.

At the same time, personalised AI-driven recommendations may enhance user experiences while raising questions about data use and bias. Entrepreneurs on X are already predicting widespread AI-led shopping within a year.

Retailers are now adjusting strategies to remain visible in this new market. While some early adopters show success using AI to complete purchases, others highlight technical challenges in integration and website compatibility.

Observers say search engines could lose relevance as shoppers turn to AI instead. Regulators remain cautious, particularly in markets like Australia, where many consumers are open to AI-led transactions.

The industry faces a shift where chatbots may evolve into full-scale digital marketplaces. Brands are urged to act quickly, or risk losing out as AI commerce becomes the norm.

New malware steals 200,000 passwords and credit card details through fake software

Hackers are now using fake versions of familiar software and documents to spread a new info-stealing malware known as PXA Stealer.

First discovered by Cisco Talos, the malware campaign is believed to be operated by Vietnamese-speaking cybercriminals and has already compromised more than 4,000 unique IP addresses across 62 countries.

Instead of targeting businesses alone, the attackers are now focusing on ordinary users in countries including the US, South Korea, and the Netherlands.

PXA Stealer is written in Python and designed to collect passwords, credit card data, cookies, autofill information, and even crypto wallet details from infected systems.

It spreads through sideloading, arriving in executables disguised as Microsoft Word documents or in ZIP archives that also contain legitimate-looking programs such as Haihaisoft PDF Reader.

The malware uses malicious DLL files to gain persistence through the Windows Registry and downloads additional harmful files via Dropbox. After infection, it uses Telegram to exfiltrate stolen data, which is then sold on the dark web.
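One tell-tale of the sideloading pattern described above is an archive that bundles a legitimate-looking executable next to loose DLL files in the same folder, so the executable loads the attacker’s DLL instead of the system one. The pure-Python sketch below flags that layout in a ZIP archive; the filenames (including `msimg32.dll`) are hypothetical examples of the pattern, not indicators taken from the actual campaign.

```python
import io
import zipfile

def _dirname(name: str) -> str:
    """Folder part of a ZIP member name ('' for top-level entries)."""
    return name.rsplit("/", 1)[0] if "/" in name else ""

def flags_possible_sideloading(names: list[str]) -> bool:
    """Flag archives that place an .exe and loose .dll files in the
    same folder -- a layout often used for DLL sideloading."""
    exe_dirs = {_dirname(n) for n in names if n.lower().endswith(".exe")}
    dll_dirs = {_dirname(n) for n in names if n.lower().endswith(".dll")}
    return bool(exe_dirs & dll_dirs)

# Build a tiny in-memory ZIP that mimics the suspicious layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("reader/PDFReader.exe", b"")
    z.writestr("reader/msimg32.dll", b"")

with zipfile.ZipFile(buf) as z:
    print(flags_possible_sideloading(z.namelist()))  # True
```

A heuristic like this only narrows the search: plenty of benign software ships DLLs beside its executable, so a hit warrants inspection, not automatic blocking.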

Once activated, the malware even attempts to open a fake PDF in Microsoft Edge, though the file fails to launch and shows an error message — by that point, it has already done the damage.

To avoid infection, users should avoid clicking unknown email links and should not open attachments from unfamiliar senders. Instead of saving passwords and card details in browsers, a trusted password manager is a safer choice.

Although antivirus software remains helpful, hackers in the campaign have used sophisticated methods to bypass detection, making careful online behaviour more important than ever.
