Android malware infects millions of devices globally

Millions of Android-based devices have been infected by a new strain of malware called BadBox 2.0, prompting urgent warnings from Google and the FBI. The malicious software can trigger ransomware attacks and collect sensitive user data.

The infected devices are primarily cheap, off-brand products manufactured in China, many of which come preloaded with the malware. Models such as the X88 Pro 10, T95, and QPLOVE Q9 are among those identified as compromised.

Google has launched legal action to shut down the illegal operation, calling BadBox 2.0 the largest botnet linked to internet-connected TVs. The FBI has advised the public to disconnect any suspicious devices and check for unusual network activity.

The malware generates illicit revenue through adware and poses broader cybersecurity threats, including denial-of-service attacks. Consumers are urged to avoid unofficial products and verify devices are Play Protect-certified before use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK MoD avoids further penalty after data breach

The UK’s data protection regulator has defended its decision not to pursue further action against the Ministry of Defence (MoD) over a serious data breach that exposed personal information of Afghans who assisted British forces.

The Information Commissioner’s Office (ICO) said the incident caused considerable harm but concluded additional investigation would not deliver greater benefit. The office stressed that organisations must handle data with greater care to avoid such damaging consequences.

The breach occurred when a hidden dataset in a spreadsheet was mistakenly shared under the pressures of a UK military operation. While the sender believed only limited data was being released, the spreadsheet contained much more information, some of which was later leaked online.

The ICO has already fined the MoD £350,000 in 2023 over a previous incident related to the Afghan relocation programme. The regulator confirmed that in both cases, the department had taken significant remedial action and committed extensive public resources to mitigate future risk.

Although the ICO acknowledged the incident’s severe impact, including threats to individual lives, it decided not to divert further resources given existing accountability, classified restrictions, and national security concerns.


UK and OpenAI deepen AI collaboration on security and public services

OpenAI has signed a strategic partnership with the UK government aimed at strengthening AI security research and exploring national infrastructure investment.

The agreement was finalised on 21 July by OpenAI CEO Sam Altman and science secretary Peter Kyle. It includes a commitment to expand OpenAI’s London office, growing its research and engineering teams to support AI development and assist UK businesses and start-ups.

Under the collaboration, OpenAI will share technical insights with the UK’s AI Security Institute to help government bodies better understand risks and capabilities. Planned deployments of AI will focus on public sectors such as justice, defence, education, and national security.

According to the UK government, all applications will follow national standards and guidelines to improve taxpayer-funded services. Peter Kyle described AI as a critical tool for national transformation. ‘AI will be fundamental in driving the change we need to see across the country,’ he said.

He emphasised its potential to support the NHS, reduce barriers to opportunity, and power economic growth. The deal signals a deeper integration of OpenAI’s operations in the UK, with promises of high-skilled jobs, investment in infrastructure, and stronger domestic oversight of AI development.


Replit revamps data architecture following live database deletion

Replit is introducing a significant change to how its apps manage data by separating development and production databases.

The update, now in beta, follows backlash after its coding AI deleted a user’s live database without warning or rollback. Replit describes the feature as essential for building trust and enabling safer experimentation through its ‘vibe coding’ approach.

Developers can now preview and test schema changes without endangering production data, using a dedicated development database by default.

The incident that prompted the shift involved SaaStr.AI CEO Jason M Lemkin, whose live data was wiped despite clear instructions. Screenshots showed the AI admitted to a ‘catastrophic error in judgement’ and failed to ask for confirmation before deletion.

Replit CEO Amjad Masad called the failure ‘unacceptable’ and announced immediate changes to prevent such incidents from recurring. Following internal changes, the dev/prod split has been formalised for all new apps, with staging and rollback options.

Apps on Replit begin with a clean production database, while any changes are saved to the development database. Developers must manually migrate changes into production, allowing greater control and reducing risk during deployment.
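The workflow can be sketched roughly as follows. This is an illustrative assumption of how such a split might look, not Replit’s documented API; the environment-variable names (`REPLIT_DEPLOYMENT`, `DEV_DATABASE_URL`) are hypothetical:

```python
# Illustrative sketch of a dev/prod database split (names are assumptions,
# not Replit's documented API). Apps default to the development database;
# production is only touched on explicit, confirmed action.
def select_database_url(env):
    """Return the dev database URL unless the app is a live deployment."""
    if env.get("REPLIT_DEPLOYMENT") == "1":  # hypothetical deployment flag
        return env["DATABASE_URL"]           # production database
    return env.get("DEV_DATABASE_URL", env["DATABASE_URL"])

def migrate_to_production(env, confirmed=False):
    """Mirror the manual-migration rule: refuse without explicit confirmation."""
    if not confirmed:
        raise PermissionError("production migration requires explicit confirmation")
    return f"applying schema changes to {env['DATABASE_URL']}"
```

In a setup like this, agent-driven or accidental writes land in the development database by default, and only a deliberate, confirmed step can alter production, mirroring the staging-and-rollback workflow Replit describes.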

Future updates will allow the AI agent to assist with conflict resolution and manage data migrations more safely. Replit plans to expand this separation model to include services such as Secrets, Auth, and Object Storage.

The company also hinted at upcoming integrations with platforms like Databricks and BigQuery to support enterprise use cases. Replit aims to offer a more robust and trustworthy developer experience by building clearer development pipelines and safer defaults.


Teens struggle to spot misinformation despite daily social media use

Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.

Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.

This is especially worrying as teenagers increasingly use social media as their main source of both news and search. Despite their heavy usage, young people often lack the skills needed to spot false information.

In one 2022 Ofcom study, only 11% of 11 to 17-year-olds could consistently identify genuine posts online.

Research involving 11 to 14-year-olds revealed that many wrongly believed misinformation only related to scams or global news, so they didn’t see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.

These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.

The study also found that young people thought older adults were more likely to fall for misinformation, while they believed their parents were better than them at spotting false content. Most teens felt it wasn’t their job to challenge false posts, instead placing the responsibility on governments and platforms.

In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.


New GLOBAL GROUP ransomware targets all major operating systems

A sophisticated new ransomware threat, dubbed GLOBAL GROUP, has emerged on cybercrime forums, meticulously designed to target systems across Windows, Linux, and macOS with cross-platform precision.

In June 2025, a threat actor operating under the alias ‘Dollar Dollar Dollar’ launched the GLOBAL GROUP Ransomware-as-a-Service (RaaS) platform on the Ramp4u forum. The campaign offers affiliates scalable tools, automated negotiations, and generous profit-sharing, creating an appealing setup for monetising cybercrime at scale.

GLOBAL GROUP is written in Go (Golang) and compiled into monolithic binaries, enabling seamless execution across varied operating environments in a single campaign. The strategy expands attackers’ reach, allowing them to exploit hybrid infrastructures while improving operational efficiency and scalability.

Golang’s concurrency model and static linking make it an attractive option for rapid, large-scale encryption without relying on external dependencies. However, forensic analysis by Picus Security Labs suggests GLOBAL GROUP is not an entirely original threat but rather a rebrand of previous ransomware operations.

Researchers linked its code and infrastructure to the now-defunct Mamona RIP and BlackLock families, revealing continuity in tactics and tooling. Evidence includes a reused mutex string, ‘Global\Fxo16jmdgujs437’, also found in earlier Mamona RIP samples, confirming code inheritance.

The re-use of such technical markers highlights how threat actors often evolve existing malware rather than building from scratch, streamlining development and deployment.

Beyond its cross-platform flexibility, GLOBAL GROUP also integrates modern cryptographic features to boost effectiveness and resistance to detection. It employs the ChaCha20-Poly1305 encryption algorithm, offering both confidentiality and message integrity with high processing performance.

The malware leverages Golang’s goroutines to encrypt all system drives simultaneously, reducing execution time and limiting defenders’ reaction window. Encrypted files receive customised extensions like ‘.lockbitloch’, with filenames also obscured to hinder recovery efforts without the correct decryption key.
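The performance point generalises: fanning out independent work (one task per drive) cuts wall-clock time roughly in proportion to the parallelism available. Below is a benign sketch of that fan-out pattern, using Python threads in place of Go’s goroutines and hashing standing in for encryption; the paths are invented examples:

```python
import concurrent.futures
import hashlib

# Benign illustration of the fan-out pattern described above: independent
# inputs (one per drive) are processed concurrently rather than one at a
# time. Hashing stands in for encryption; nothing here touches real drives.
def process_file(path, data):
    """Process a single file's contents and return (path, digest)."""
    return path, hashlib.sha256(data).hexdigest()

def process_all(files):
    """Process all files concurrently and collect the results."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(process_file, p, d) for p, d in files.items()]
        return dict(f.result() for f in futures)
```

Because each input is independent, all workers run at once, which is why defenders’ reaction window shrinks when malware adopts the same structure.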

Ransom note logic is embedded directly within the binary, generating tailored communication instructions and linking to Tor-based leak sites. The approach simplifies extortion for affiliates while preserving operational security and ensuring anonymous negotiations with victims.


Iran’s digital economy suffers heavy losses from internet shutdowns

Iran’s Minister of Communications has revealed the country’s digital economy shrank by 30% in just one month, losing around $170 million due to internet restrictions imposed during its recent 12-day conflict with Israel.

Sattar Hashemi told parliament on 22 July that roughly 10 million Iranians rely on digital jobs, but widespread shutdowns caused severe disruptions across platforms and services.

Hashemi estimated that every two days of restrictions inflicted 10 trillion rials in losses, totalling 150 trillion rials — an amount he said rivals the annual budgets of entire ministries.

While acknowledging the damage, he clarified that his ministry was not responsible for the shutdowns, attributing them instead to decisions made by intelligence and security agencies for national security reasons.

Alongside the blackouts, Iran endured over 20,000 cyberattacks during the conflict. Many of these targeted banks and payment systems, with platforms for Bank Sepah and Bank Pasargad knocked offline, halting salaries for military personnel.

Hacktivist groups such as Predatory Sparrow and Tapandegan claimed credit for the attacks, with some incidents reportedly wiping out crypto assets and further weakening the rial by 12%.

Lawmakers are now questioning the unequal structure of internet access. Critics have accused the government of enabling a ‘class-based internet’ in which insiders retain full access while the public faces heavy censorship.

MP Salman Es’haghi warned that Iran’s digital future cannot rely on filtered networks, demanding transparency about who benefits from unrestricted use.


ChatGPT evolves from chatbot to digital co-worker

OpenAI has launched a powerful multi-function agent inside ChatGPT, transforming the platform from a conversational AI into a dynamic digital assistant capable of executing multi-step tasks.

Rather than waiting for repeated commands, the agent acts independently — scheduling meetings, drafting emails, summarising documents, and managing workflows with minimal input.

The development marks a shift in how users interact with AI. Instead of merely assisting, ChatGPT now understands broader intent, remembers context, and completes tasks autonomously.

Professionals and individuals using ChatGPT online can now treat the system as a digital co-worker, helping automate complex tasks without bouncing between different tools.

The integration reflects OpenAI’s long-term vision of building AI that aligns with real-world needs. Compared to single-purpose tools like GPTZero or NoteGPT, the ChatGPT agent analyses, summarises, and initiates next steps.

It’s part of a broader trend in which AI is no longer just a support tool but a full productivity engine.

For businesses adopting ChatGPT professional accounts, the rollout offers immediate value. It reduces manual effort, streamlines enterprise operations, and adapts to user habits over time.

As AI continues to embed itself into company infrastructure, the new agent from OpenAI signals a future where human–AI collaboration becomes the norm, not the exception.


Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, exposed names, contact information, birthdates, and shopping preferences, though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.


Netflix uses AI to boost creativity and cut costs

Netflix co-CEO Ted Sarandos has said generative AI is used to boost creativity, not just reduce production costs. A key example was seen in the Argentine series El Eternauta, where AI helped complete complex visual effects far quicker than traditional methods.

The streaming giant’s production team used AI to render a building collapse scene in Buenos Aires, completing the sequence ten times faster and more economically. Sarandos described the outcome as proof that AI supports real creators with better tools.

Netflix also applies generative AI in areas beyond filmmaking, including personalisation, search, and its advertising ecosystem. As part of these innovations, interactive adverts are expected to launch later in 2025.

During the second quarter, Netflix reported $11.1 billion in revenue and $3.1 billion in profit. Users streamed over 95 billion hours of content in the year’s first half, marking a slight rise from 2024.
