Hackers exploited flaws in WhatsApp and Apple devices, company says

WhatsApp has disclosed a hacking attempt that combined flaws in its app with a vulnerability in Apple’s operating system. The company has since fixed the issues.

The exploit chain, tracked as CVE-2025-55177 in WhatsApp and CVE-2025-43300 in Apple's operating systems, allowed attackers to compromise devices by causing them to process content from an attacker-controlled URL, reportedly without any interaction from the victim. Fewer than 200 users worldwide are believed to have been affected.

Amnesty International reported that some victims appeared to be members of civic organisations. Its Security Lab is collecting forensic data and warned that iPhone and Android users were impacted.

WhatsApp credited its security team for identifying the vulnerabilities, describing the operation as highly advanced but narrowly targeted. The company also suggested that other apps could have been hit in the same campaign.

The disclosure highlights ongoing risks to secure messaging platforms, even those with end-to-end encryption. Experts stress that keeping apps and operating systems up to date remains essential to reducing exposure to sophisticated exploits.


Calls grow for better security on smart intimate devices

Cyber experts are warning that Bluetooth-enabled adult toys create openings for stalking, blackmail and assault, due to weak security in companion apps and device firmware. UK-commissioned research outlined risks such as interception, account takeover and unsafe heat profiles.

Officials urged better protection across consumer IoT, advising updates, strong authentication and clear support lifecycles. Guidance applies to connected toys alongside other smart devices in the home.

Security researchers and regulators have long flagged poor encryption and lax authentication in intimate tech. At the same time, recent disclosures showed major brands patching flaws that exposed users' email addresses and allowed remote account takeover.

Industry figures argue for stricter standards and transparency on data handling, noting that stigma can depress reporting and aid repeat exploitation. Specialist groups recommend buying only from vendors that document encryption and update policies.


Human behaviour remains weak link in cyber defence

Cyber security specialists warn that human behaviour remains the most significant vulnerability in digital defence, despite billions invested in AI and advanced systems.

Experts note that in the Gulf, many cybersecurity breaches in 2025 still originate from human error, often triggered by social engineering attacks. Phishing emails, false directives from executives, or urgent invoice requests exploit psychological triggers such as authority, fear and habit.

Analysts argue that building resilience requires shifting workplace culture. Security must be seen not just as the responsibility of IT teams but embedded in everyday decision-making. Staff should feel empowered to question, report and learn without fear of reprimand.

AI-driven threats, from identity-based breaches to ransomware campaigns, are growing more complex across the region. Organisations are urged to focus on digital trust, investing in awareness programmes and user-centred protocols so employees become defenders rather than liabilities.


Google dismisses false breach rumours as Gmail security concerns grow

Google has dismissed reports that Gmail suffered a massive breach, saying rumours that it had warned 2.5 billion users were false.

In a Monday blog post, Google rejected claims that it had issued global notifications about a serious Gmail security issue. It stressed that its protections remain effective against phishing and malware.

Confusion stems from a June incident involving a Salesforce database used by Google, during which attackers briefly accessed publicly available business information, including names and contact details. Google said all affected parties were notified by early August.

The company acknowledged that phishing attempts are increasing, but clarified that Gmail’s defences block more than 99.9% of such attempts. A July blog post on phishing risks may have been misinterpreted as evidence of a breach.

Google urged users to remain vigilant, recommending password alternatives such as passkeys, along with regular account reviews. While the false alarm spurred unnecessary panic, security experts noted that updating credentials remains good practice.


Alleged Apple ID exposure affects 184 million accounts

A report has highlighted a potential exposure of Apple ID logins after a 47.42 GB database was discovered on an unsecured web server, reportedly affecting up to 184 million accounts.

The database was identified by security researcher Jeremiah Fowler, who indicated it may include unencrypted credentials for Apple services and other platforms. The records reportedly contain usernames, email addresses, and passwords, which could allow access to iCloud, App Store accounts, and data synced across devices. Reports also suggest that flaws in Apple's email software could theoretically increase the risk if combined with exposed credentials.

Apple has acknowledged researchers' contributions in identifying server issues and has issued security updates. Security experts recommend that users review their account security, including updating passwords, enabling two-factor authentication, maintaining strong credentials, and monitoring account activity.

Observers note that centralised credential management carries inherent risks, underscoring the importance of careful data handling practices. The case illustrates the challenges of safeguarding large-scale digital accounts and may prompt continued discussion about regulatory standards and personal data protection.


DOGE transfers social security data to the cloud, sources say

A whistle-blower has alleged that the Department of Government Efficiency (DOGE) transferred a copy of the US Social Security database to an Amazon Web Services cloud environment.

The transfer reportedly placed personal information on more than 300 million individuals in a system outside traditional federal oversight.

Known as NUMIDENT, the database contains information submitted for Social Security applications, including names, dates of birth, addresses, citizenship, and parental details.

According to the report, DOGE personnel managed the cloud environment and were given administrative access to perform testing and operational tasks.

Federal officials have highlighted that standard security protocols and authorisations, such as those outlined under the Federal Information Security Management Act (FISMA) and the Privacy Act of 1974, are designed to protect sensitive data.

The transfer has prompted internal reviews and raised questions about compliance with established federal security practices.

While DOGE has not fully clarified the purpose of the cloud deployment, observers note that such initiatives may relate to broader federal efforts to improve data accessibility or inter-agency information sharing.

The case is part of ongoing discussions on balancing operational flexibility with information security in government systems.


AI scams target seniors’ savings

Cybersecurity experts have warned that AI is being used to target senior citizens in sophisticated financial scams. In the 'Phantom Hacker' scam, criminals impersonate tech support, bank, and government workers to steal seniors' life savings.

The first stage involves a fake tech support worker gaining access to the victim's computer under the pretence of checking accounts for fraud. An impersonator from the bank's 'fraud department' then claims the victim's funds are at risk from foreign hackers and must be moved to a 'safe' account.

A fake government worker then directs the victim to transfer money to an alias account controlled by the scammers. Check Point CIO Pete Nicoletti says AI helps scammers identify targets by analysing social media and online activity.

Experts stress that reporting the theft immediately is crucial. Delays significantly reduce the chance of recovering stolen funds, leaving many victims permanently defrauded.


Azure Active Directory flaw exposes sensitive credentials

A critical security flaw involving Azure Active Directory has exposed application credentials stored in publicly accessible appsettings.json files, giving attackers far-reaching access to Microsoft 365 tenants.

By exploiting these credentials, threat actors can masquerade as trusted applications and gain unauthorised entry to sensitive organisational data.

Attackers leverage the OAuth 2.0 client credentials flow: armed with a leaked client ID and secret, they can request valid access tokens directly from Microsoft's identity platform.

Once authenticated, they can access Microsoft Graph APIs to enumerate users, groups, and directory roles, especially when applications have been granted excessive permissions such as Directory.Read.All or Mail.Read. Such access permits data harvesting across SharePoint, OneDrive, and Exchange Online.
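To make the mechanism concrete, the sketch below shows the standard client credentials exchange in Python. It is an illustration of how a leaked client ID and secret become an application-level Graph token, not a reproduction of any specific attack; the token and Graph endpoints are Microsoft's public, documented URLs, while the tenant, client ID, and secret values are hypothetical placeholders.

```python
# Illustrative sketch only: shows why a leaked client_id/client_secret pair is
# equivalent to the application's identity. All credential values below are
# hypothetical placeholders, not values from any real incident.
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # hypothetical tenant
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # e.g. leaked from appsettings.json
CLIENT_SECRET = "placeholder-secret"                  # e.g. leaked from appsettings.json

# OAuth 2.0 client credentials flow: exchange the app's credentials for a token.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
    },
)
access_token = token_resp.json()["access_token"]

# The token carries whatever permissions the app was granted; with
# Directory.Read.All, for example, it can enumerate every user in the tenant.
users = requests.get(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {access_token}"},
).json()
print([u.get("userPrincipalName") for u in users.get("value", [])])
```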

Attackers can also deploy malicious applications under compromised tenants, escalating privileges from limited read access to complete administrative control.

Additional exposed secrets like storage account keys or database connection strings enable lateral movement, modification of critical data, and the creation of persistent backdoors within cloud infrastructure.

Organisations face profound compliance implications under GDPR, HIPAA, or SOX. The vulnerability emphasises the importance of auditing configuration files, storing credentials securely in solutions like Azure Key Vault, and monitoring authentication patterns to prevent long-term, sophisticated attacks.
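As a defensive counterpart, one widely used pattern is to keep the secret out of appsettings.json altogether and fetch it at runtime from Azure Key Vault via a managed identity. A minimal Python sketch, assuming the azure-identity and azure-keyvault-secrets packages and a hypothetical vault and secret name:

```python
# Minimal sketch: fetch an application secret from Azure Key Vault at runtime
# instead of storing it in appsettings.json. The vault URL and secret name
# are hypothetical examples.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # managed identity in Azure, developer credentials locally
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=credential,
)

# The secret never appears in source control or in deployed config files.
client_secret = client.get_secret("graph-client-secret").value
```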


AI-generated media must now carry labels in China

China has introduced sweeping new rules requiring all AI-generated content shared online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The rules, first announced in March by the Cyberspace Administration of China, require all AI-generated text, images, video and audio to carry both explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
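For illustration only, an implicit label of this kind can be embedded by writing a metadata field into the file itself. The Python sketch below uses Pillow to add a hypothetical 'AIGC' text chunk to a PNG; the actual field names and formats mandated by the Chinese rules are defined by the regulator, so this shows only the general mechanism.

```python
# Illustrative sketch: embed an implicit "AI-generated" marker as PNG metadata.
# The field names "AIGC" and "Generator" are hypothetical; the Chinese rules
# specify their own required label formats.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("AIGC", "true")                    # implicit, machine-readable label
meta.add_text("Generator", "example-model-v1")   # hypothetical provenance field

img.save("generated_labelled.png", pnginfo=meta)
```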

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.


ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
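To make the distinction concrete, the Python sketch below models the routing described in the post. It is entirely hypothetical and not OpenAI's implementation; the assessment categories and action names are invented for illustration.

```python
# Hypothetical sketch of the routing described in the post; not OpenAI's
# implementation. Categories, flags and action names are invented.
from dataclasses import dataclass

@dataclass
class Assessment:
    category: str   # "self_harm", "threat_to_others", or "none"
    imminent: bool  # set by a trained human reviewer, not by the model alone

def route(assessment: Assessment) -> str:
    if assessment.category == "self_harm":
        # Self-harm is met with support resources, not law enforcement.
        return "show_crisis_resources"
    if assessment.category == "threat_to_others":
        if assessment.imminent:
            # Only reviewer-confirmed imminent risk may reach authorities.
            return "notify_authorities_and_suspend_account"
        # Otherwise the conversation goes to trained human moderators.
        return "escalate_to_human_review"
    return "continue_conversation"
```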

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI says it is working to strengthen consistency across interactions and is developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.
