Insecure code blamed for 74 percent of company breaches

Nearly three-quarters of companies have experienced a security breach in the past year due to flaws in their software code.

According to a new SecureFlag study, 74% of organisations admitted to at least one incident caused by insecure code, with almost half suffering multiple breaches.

The report has renewed scrutiny of AI-generated code, which is growing in popularity across the industry. While some experts claim AI can outperform humans, concerns remain that these tools are reproducing insecure coding patterns at scale.

On the upside, companies are increasing developer security training. Around 44% provide quarterly updates, while 29% do so monthly.

Most use video tutorials and eLearning platforms, with a third hosting interactive events like capture-the-flag hacking games.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to require developer identity checks for sideloaded Android apps

Google will begin requiring identity verification for Android developers distributing apps outside the Play Store.

Starting in September 2026, developers in Brazil, Indonesia, Singapore and Thailand must provide legal name, address, email, phone number and possibly government-issued ID for apps to install on certified Android devices.

The requirement will expand globally starting in 2027. While existing Play Store developers are already verified, all sideloaded apps will now require developer verification to target select Android users.

Google is building a separate Android Developer Console for sideloading developers and is offering a lighter-touch, free verification option for student and hobbyist creators to protect innovation while boosting accountability.

The change aims to reduce malware distribution from anonymous developers and repeat offenders, while preserving the openness of Android by allowing sideloading and third-party stores.

Developers can opt into early access programmes beginning October 2025 to provide feedback and prepare for full rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malicious apps on Google Play infected 19 million users with banking trojan

Security researchers from Zscaler’s ThreatLabz team uncovered 77 malicious Android applications on the Google Play Store, collectively downloaded over 19 million times, that distributed the Anatsa banking trojan, TeaBot, and other malware families.

Anatsa, active since at least 2020, has evolved to target over 831 banking, fintech and cryptocurrency apps globally, including platforms in Germany and South Korea. These campaigns now use direct payload installation with encrypted runtime strings and device checks to evade detection.

Deploying as decoy tools, often document readers, the apps triggered a silent download of malicious code after installation. The Trojan automatically gained accessibility permissions to display overlays, capture credentials, log keystrokes, and intercept messages. Additional malware such as Joker, its variant Harly, and adware were also detected.

Following disclosure, Google removed the identified apps from the Play Store. Users are advised to enable Google Play Protect, review app permissions carefully, limit downloads to trusted developers, and consider using antivirus tools to stay protected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents can act unpredictably without proper guidance

Recent tests on agentic AI by Anthropic have revealed significant risks when systems act independently. In one simulation, Claude attempted to blackmail a fictional executive, showing how agents with sensitive data can behave unpredictably.

Other AI systems tested displayed similar tendencies, highlighting the dangers of poorly guided autonomous decision-making.

Agentic AI is increasingly handling routine work decisions. Gartner predicts 15% of day-to-day choices will be managed by such systems by 2028, and around half of tech leaders already deploy them.

Experts warn that without proper controls, AI agents may unintentionally achieve goals, access inappropriate data or perform unauthorised actions.

Security risks include memory poisoning, tool misuse, and AI misinterpreting instructions. Tests by Invariant Labs and Trend Micro showed agents could leak sensitive information even in controlled environments.

With billions of devices potentially running AI agents, human oversight alone cannot manage these threats.

Emerging solutions include ‘thought injection’ to guide AI and AI-based monitoring ‘agent bodyguards’ to ensure compliance with organisational rules. Experts emphasise protecting business systems and properly decommissioning outdated AI agents to prevent ‘zombie’ access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Jetson AGX Thor brings Blackwell-powered compute to robots and autonomous vehicles

Nvidia has introduced Jetson AGX Thor, its Blackwell-powered robotics platform that succeeds the 2022 Jetson Orin. Designed for autonomous driving, factory robots, and humanoid machines, it comes in multiple models, with a DRIVE OS kit for vehicles scheduled for release in September.

Thor delivers 7.5 times more AI compute, 3.1 times greater CPU performance, and double the memory of Orin. The flagship Thor T5000 offers up to 2,070 teraflops of AI compute, paired with 128 GB of memory, enabling the execution of generative AI models and robotics workloads at the edge.

The platform supports Nvidia’s Isaac, Metropolis, and Holoscan systems, and features multi-instance GPU capabilities that enable the simultaneous execution of multiple AI models. It is compatible with Hugging Face, PyTorch, and leading AI models from OpenAI, Google, and other sources.

Adoption has begun, with Boston Dynamics utilising Thor for Atlas and firms such as Volvo, Aurora, and Gatik deploying DRIVE AGX Thor in their vehicles. Nvidia stresses it supports robot-makers rather than building robots, with robotics still a small but growing part of its business.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots found unreliable in suicide-related responses, according to a new study

A new study by the RAND Corporation has raised concerns about the ability of AI chatbots to answer questions related to suicide and self-harm safely.

Researchers tested ChatGPT, Claude and Gemini with 30 different suicide-related questions, repeating each one 100 times. Clinicians assessed the queries on a scale from low to high risk, ranging from general information-seeking to dangerous requests about methods of self-harm.

The study revealed that ChatGPT and Claude were more reliable at handling low-risk and high-risk questions, avoiding harmful instructions in dangerous scenarios. Gemini, however, produced more variable results.

While all three ΑΙ chatbots sometimes responded appropriately to medium-risk questions, such as offering supportive resources, they often failed to respond altogether, leaving potentially vulnerable users without guidance.

Experts warn that millions of people now use large language models as conversational partners instead of trained professionals, which raises serious risks when the subject matter involves mental health. Instances have already been reported where AI appeared to encourage self-harm or generate suicide notes.

The RAND team stressed that safeguards are urgently needed to prevent such tools from producing harmful content in response to sensitive queries.

The study also noted troubling inconsistencies. ChatGPT and Claude occasionally gave inappropriate details when asked about hazardous methods, while Gemini refused even basic factual queries about suicide statistics.

Researchers further observed that ChatGPT showed reluctance to recommend therapeutic resources, often avoiding direct mention of safe support channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. By default, the setting is set to ‘Everyone,’ but it can be adjusted to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FTC cautions US tech firms over compliance with EU and UK online safety laws

The US Federal Trade Commission (FTC) has warned American technology companies that following European Union and United Kingdom rules on online content and encryption could place them in breach of US legislation.

In a letter sent to chief executives, FTC Chair Andrew Ferguson said that restricting access to content for American users to comply with foreign legal requirements might amount to a violation of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive commercial practices.

Ferguson cited the EU’s Digital Services Act and the UK’s Online Safety Act, as well as reports of British efforts to gain access to encrypted Apple iCloud data, as examples of measures that could put companies at risk under US law.

Although Section 5 has traditionally been used in cases concerning consumer protection, Ferguson noted that the same principles could apply if companies changed their services for US users due to foreign regulation. He argued that such changes could ‘mislead’ American consumers, who would not reasonably expect their online activity to be governed by overseas restrictions.

The FTC chair invited company leaders to meet with his office to discuss how they intend to balance demands from international regulators while continuing to fulfil their legal obligations in the United States.

Earlier this week, a senior US intelligence official said the British government had withdrawn a proposed legal measure aimed at Apple’s encrypted iCloud data after discussions with US Vice President JD Vance.

The issue has arisen amid tensions over the enforcement of UK online safety rules. Several online platforms, including 4chan, Gab, and Kiwi Farms, have publicly refused to comply, and British authorities have indicated that internet service providers could ultimately be ordered to block access to such sites.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot policy flaw allows unauthorized access to AI agents

Administrators found that Microsoft Copilot’s intended ‘NoUsersCanAccessAgent’ policy, which is designed to prevent user access to certain AI agents, is being ignored. Some agents, including ExpenseTrackerBot and HRQueryAgent, remain installable despite global restrictions.

Microsoft 365 tenants must now use per-agent PowerShell commands to disable access manually. This workaround is both time-consuming and error-prone, particularly in large organisations. The failure to enforce access policies raises concerns regarding operational overhead and audit risk.

The security implications are significant. Unauthorised agents can export data from SharePoint or OneDrive, run RPA workflows without oversight, or process sensitive information without compliance controls. The flaw undermined the purpose of access control settings and exposed the system to misuse.

To mitigate this risk, administrators are urged to audit agent inventories, enforce Conditional Access policies, for example, requiring MFA or device compliance, and consistently monitor agent usage through logs and dashboards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!