Alleged Apple ID exposure affects 184 million accounts

A report has highlighted a potential exposure of Apple ID logins after a 47.42 GB database was discovered on an unsecured web server, reportedly affecting up to 184 million accounts.

The database was identified by security researcher Jeremiah Fowler, who indicated it may include unencrypted credentials across Apple services and other platforms.

Security experts recommend users review account security, including updating passwords and enabling two-factor authentication.

The alleged database contains usernames, email addresses, and passwords, which could allow access to iCloud, App Store accounts, and data synced across devices.

Observers note that centralised credential management carries inherent risks, underscoring the importance of careful data handling practices.

Reports suggest that flaws in Apple’s email software could compound the risk if combined with exposed credentials.

Apple has acknowledged researchers’ contributions in identifying server issues and has issued security updates, while ongoing vigilance and standard security measures are recommended for users.

The case illustrates the challenges of safeguarding large-scale digital accounts and may prompt continued discussion about regulatory standards and personal data protection.

Users are advised to maintain strong credentials and monitor account activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK institutions embrace enterprise AI through global tech alliance

Microsoft, Accenture, and Avanade are deepening their 25-year partnership to bring AI into some of the UK’s most vital sectors, including healthcare and finance. NHS England is piloting AI-powered tools to streamline patient services and cut down on time-consuming administrative tasks, while Nationwide Building Society is deploying machine learning to improve customer services, speed up mortgage approvals, and enhance fraud detection.

The three companies have different responsibilities in tackling the challenges of enterprise AI. Microsoft provides the Azure cloud platform and pre-built AI models, Accenture contributes sector-specific expertise and governance frameworks, and Avanade integrates the technology into existing systems and workflows. That structure helps organisations move beyond experimental AI pilots and scale solutions reliably in highly regulated industries.

Unlike consumer applications, enterprise AI must meet strict compliance requirements, especially concerning sensitive patient data or financial transactions. The partnership emphasises embedding AI directly into day-to-day operations rather than treating it as an add-on, reducing disruption for staff and ensuring systems work seamlessly once live.

With regulators tightening oversight, the alliance highlights responsible AI as a key focus. By prioritising transparency, security, and ethical use, Microsoft, Accenture, and Avanade are positioning their collaboration as a blueprint for how AI can be adopted across critical institutions without compromising trust or reliability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin holdings in El Salvador spread across new addresses

El Salvador, the first country to adopt bitcoin as legal tender, has restructured its national bitcoin holdings to strengthen security against potential future threats. The National Bitcoin Office (ONBTC) announced that the country’s 6,280 BTC, worth around $687 million, has been split across 14 new addresses, each holding no more than 500 BTC. Officials say this change reduces exposure to risks, including those that could arise from advances in quantum computing.
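The arithmetic behind the split can be sketched as follows. This is an illustrative calculation only, not the actual on-chain procedure, and the near-even allocation is an assumption, since ONBTC has not described how amounts were distributed:

```python
import math

def split_holdings(total_btc: int, cap_btc: int = 500) -> list[int]:
    """Split a holding into near-equal chunks, none exceeding cap_btc."""
    n = math.ceil(total_btc / cap_btc)   # minimum number of addresses needed
    base, extra = divmod(total_btc, n)
    # distribute the remainder one coin at a time across the first chunks
    return [base + 1] * extra + [base] * (n - extra)

chunks = split_holdings(6280)
# 13 addresses would suffice at the 500 BTC cap; ONBTC reportedly used 14
```

At the stated cap, thirteen addresses are the mathematical minimum for 6,280 BTC, so the fourteen actually used leave each address comfortably below the limit.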

The concern stems from the possibility that quantum computers, once powerful enough, could break cryptographic protections and reveal private keys. While no such machine exists today, bitcoin developers have long debated the timeline of this threat. ONBTC also highlighted that avoiding address reuse improves security and privacy while allowing the government to maintain transparency.

The broader bitcoin community remains divided on the urgency of quantum risks. Some experts argue the issue is exaggerated, while others warn that the industry may have far less time than previously thought. A developer known as Hunter Beast recently cautioned that breakthroughs in IBM’s quantum experiments suggest the worst-case scenario could arrive within three years.

The bitcoin strategy of El Salvador continues to draw criticism from international institutions. The IMF, which approved a $3.5 billion loan to the country, insists that no new bitcoin purchases have been made this year and that the government is merely reshuffling its reserves. The ONBTC disputes this claim, maintaining that fresh purchases are still taking place despite pressure to scale back its cryptocurrency policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DOGE transfers social security data to the cloud, sources say

A whistle-blower has reported that the Department of Government Efficiency (DOGE) allegedly transferred a copy of the US Social Security database to an Amazon Web Services cloud environment.

The action reportedly placed personal information for more than 300 million individuals in a system outside traditional federal oversight.

Known as NUMIDENT, the database contains information submitted for Social Security applications, including names, dates of birth, addresses, citizenship, and parental details.

DOGE personnel managed the cloud environment and gained administrative access to perform testing and operational tasks.

Federal officials have highlighted that standard security protocols and authorisations, such as those outlined under the Federal Information Security Management Act (FISMA) and the Privacy Act of 1974, are designed to protect sensitive data.

Internal reviews have been prompted by the transfer, raising questions about compliance with established federal security practices.

While DOGE has not fully clarified the purpose of the cloud deployment, observers note that such initiatives may relate to broader federal efforts to improve data accessibility or inter-agency information sharing.

The case is part of ongoing discussions on balancing operational flexibility with information security in government systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI boom drives massive surge in data centre power demand

According to Goldman Sachs, the surge in AI is set to transform global energy markets, with data centres expected to consume 165% more electricity by 2030 compared to 2023. The bank reports that US spending on data centre construction has tripled in just three years, while occupancy rates at existing facilities remain close to record highs.

The demand is driven by hyperscale operators like Amazon Web Services, Microsoft Azure, and Google Cloud, which are rapidly expanding their infrastructure to meet the power-hungry needs of AI systems.

Global data centres use about 55 gigawatts of electricity, more than half of which supports cloud computing. Traditional workloads like email and storage still account for a third, while AI represents just 14%.

However, Goldman Sachs projects that by 2027, overall consumption could rise to 84 gigawatts, with AI’s share growing to over a quarter. That shift is straining grids and pushing operators toward new solutions as AI servers can consume ten times more electricity than traditional racks.
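A back-of-envelope check of the figures above shows how sharply AI’s slice of the total grows; the 27% figure is an illustrative reading of ‘over a quarter’, not a number from the report:

```python
# Goldman Sachs figures quoted above: ~55 GW today with a 14% AI share,
# rising to ~84 GW by 2027 with AI above a quarter of the total.
current_ai_gw = 55 * 0.14      # about 7.7 GW of AI demand today
projected_ai_gw = 84 * 0.27    # about 22.7 GW in 2027 (27% is assumed)
growth_factor = projected_ai_gw / current_ai_gw  # roughly a threefold rise
```

Even under these rough assumptions, AI’s absolute power draw roughly triples while total data centre demand grows by only about half.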

Meeting this demand will require massive investment. Goldman Sachs estimates that global grid upgrades could cost as much as US$720 billion by 2030, with US utilities alone needing an additional US$50 billion in new generation capacity for data centres.

While renewables like wind and solar are increasingly cost-competitive, their intermittent output means operators lean on hybrid models with backup gas and battery storage. At the same time, technology companies are reviving interest in nuclear power, with contracts for over 10 gigawatts of new capacity signed in the US last year.

The expansion is most evident in Europe and North America, with Nordic countries, Spain, and France attracting investment due to their renewable energy resources. At the same time, hubs like Germany, Britain, and Ireland rely on incentives and established ecosystems. Yet, uncertainty remains.

Advances like DeepSeek, a Chinese AI model reportedly as capable as US systems but more efficient, could temper power demand growth. For now, however, the trajectory is clear: AI is reshaping the data centre industry and the global energy landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI scams target seniors’ savings

Cybersecurity experts have warned that AI is being used to target senior citizens in sophisticated financial scams. In the ‘Phantom Hacker’ scam, criminals impersonate tech support, bank, and government workers to steal seniors’ life savings.

The first stage involves a fake tech support worker accessing the victim’s computer under the pretence of checking accounts for fraud. A fraud department impersonator then warns that the victim’s funds are at risk from foreign hackers and must be moved to a ‘safe’ account.

A fake government worker then directs the victim to transfer money to an alias account controlled by the scammers. Check Point CIO Pete Nicoletti says AI helps scammers identify targets by analysing social media and online activity.

Experts stress that reporting the theft immediately is crucial. Delays significantly reduce the chance of recovering stolen funds, leaving many victims permanently defrauded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Azure Active Directory flaw exposes sensitive credentials

A critical security flaw in Azure Active Directory has exposed application credentials stored in appsettings.json files, allowing attackers unprecedented access to Microsoft 365 tenants.

By exploiting these credentials, threat actors can masquerade as trusted applications and gain unauthorised entry to sensitive organisational data.

The attack leverages the OAuth 2.0 Client Credentials Flow, which lets attackers use the exposed credentials to generate valid access tokens.

Once authenticated, they can access Microsoft Graph APIs to enumerate users, groups, and directory roles, especially when applications have been granted excessive permissions such as Directory.Read.All or Mail.Read. Such access permits data harvesting across SharePoint, OneDrive, and Exchange Online.
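A minimal sketch of the token request at the heart of this flow; all values are placeholders, and the request is constructed but deliberately not sent:

```python
# The OAuth 2.0 client credentials grant against the Microsoft identity
# platform: an attacker holding a leaked client_id/client_secret pair can
# POST this form to the tenant's token endpoint and receive an access token.
tenant = "contoso.onmicrosoft.com"  # placeholder tenant identifier
token_url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
payload = {
    "grant_type": "client_credentials",
    "client_id": "LEAKED-CLIENT-ID",          # e.g. from appsettings.json
    "client_secret": "LEAKED-CLIENT-SECRET",  # e.g. from appsettings.json
    "scope": "https://graph.microsoft.com/.default",
}
# The JSON response carries an access_token, which is then presented as
# "Authorization: Bearer <token>" to Microsoft Graph endpoints such as
# https://graph.microsoft.com/v1.0/users
```

The `.default` scope requests whatever application permissions the tenant has already consented to, which is precisely why over-granted permissions like Directory.Read.All make a leaked secret so damaging.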

Attackers can also deploy malicious applications under compromised tenants, escalating privileges from limited read access to complete administrative control.

Additional exposed secrets like storage account keys or database connection strings enable lateral movement, modification of critical data, and the creation of persistent backdoors within cloud infrastructure.

Organisations face profound compliance implications under GDPR, HIPAA, or SOX. The vulnerability emphasises the importance of auditing configuration files, storing credentials securely in solutions like Azure Key Vault, and monitoring authentication patterns to prevent long-term, sophisticated attacks.
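As one concrete auditing step, a short script can flag secret-like keys left in an appsettings.json-style file. The key patterns below are illustrative, not exhaustive:

```python
import json
import re

# Key names that commonly hold secrets in .NET configuration files
SECRET_KEY = re.compile(r"secret|password|connectionstring|accountkey|apikey",
                        re.IGNORECASE)

def audit_config(text: str) -> list[str]:
    """Return dotted paths of suspicious, non-empty string values."""
    findings: list[str] = []

    def walk(node, path=""):
        if isinstance(node, dict):
            for key, value in node.items():
                child = f"{path}.{key}" if path else key
                if SECRET_KEY.search(key) and isinstance(value, str) and value:
                    findings.append(child)
                walk(value, child)
        elif isinstance(node, list):
            for i, item in enumerate(node):
                walk(item, f"{path}[{i}]")

    walk(json.loads(text))
    return findings

sample = '{"AzureAd": {"ClientId": "app-id", "ClientSecret": "s3cr3t"}}'
```

Running `audit_config(sample)` flags `AzureAd.ClientSecret` while leaving the non-sensitive client ID alone; flagged values are candidates for migration into Azure Key Vault.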

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
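How an implicit, file-embedded label might work in practice can be sketched with a PNG tEXt metadata chunk. Note that the Chinese standard defines its own label and metadata formats, so this is only an illustration of the general mechanism:

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def minimal_png() -> bytes:
    """A valid 1x1 greyscale PNG built from scratch."""
    ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
    idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + pixel
    return b"\x89PNG\r\n\x1a\n" + ihdr + idat + png_chunk(b"IEND", b"")

def add_label(png: bytes, keyword: bytes, text: bytes) -> bytes:
    """Insert a tEXt chunk (a machine-readable label) right after IHDR."""
    ihdr_end = 8 + 4 + 4 + 13 + 4  # signature + IHDR length/type/data/CRC
    label = png_chunk(b"tEXt", keyword + b"\x00" + text)
    return png[:ihdr_end] + label + png[ihdr_end:]

labelled = add_label(minimal_png(), b"Source", b"AI-generated")
```

The label survives copying and renaming but not re-encoding, which is why robust schemes pair such metadata with visible labels and pixel-level watermarks.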

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
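The two-track handling described above can be summarised in a short sketch; the category names and actions here are hypothetical simplifications of the policy OpenAI describes, not its actual implementation:

```python
def route_flagged_conversation(category: str, imminent_risk: bool = False) -> str:
    """Hypothetical sketch of the escalation policy described in the post."""
    if category == "self_harm":
        # Self-harm is never referred to police; users are directed to help.
        return "show_professional_resources"
    if category == "harm_to_others":
        if imminent_risk:
            # Moderator-confirmed imminent threats may reach law enforcement.
            return "notify_authorities_and_suspend_account"
        return "escalate_to_human_moderators"
    return "continue_conversation"
```

The key asymmetry is that only threats to others ever cross the human-review and law-enforcement threshold.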

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Disruption unit planned by Google to boost proactive cyber defence

Google is reportedly preparing to adopt a more active role in countering cyber threats directed at itself and, potentially, other United States organisations and elements of national infrastructure.

The Vice President of Google Threat Intelligence Group, Sandra Joyce, stated that the company intends to establish a ‘disruption unit’ in the coming months.

Joyce explained that the initiative will involve ‘intelligence-led proactive identification of opportunities where we can actually take down some type of campaign or operation,’ stressing the need to shift from a reactive to a proactive stance.

This announcement was made during an event organised by the Centre for Cybersecurity Policy and Law, which in May published a report raising questions about whether the US government should allow private-sector entities to engage in offensive cyber operations, whether deterrence is better achieved through non-cyber responses, or whether the focus ought to be on strengthening defensive measures.

The US government’s policy direction emphasises offensive capabilities. In July, Congress passed the ‘One Big Beautiful Bill Act’, allocating $1 billion to offensive cyber operations. However, this came amidst ongoing debates regarding the balance between offensive and defensive measures, including those overseen by the Cybersecurity and Infrastructure Security Agency (CISA).

Although the legislation does not authorise private companies such as Google to participate directly in offensive operations, it highlights the administration’s prioritisation of such activities.

On 15 August, lawmakers introduced the Scam Farms Marque and Reprisal Authorisation Act of 2025. If enacted, the bill would permit the President to issue letters of marque and reprisal in response to acts of cyber aggression involving criminal enterprises. The full text of the bill is available on Congress.gov.

The measure draws upon a concept historically associated with naval conflict, whereby private actors were empowered to act on behalf of the state against its adversaries.

These legislative initiatives reflect broader efforts to recalibrate the United States’ approach to deterring cyberattacks. Ransomware campaigns, intellectual property theft, and financially motivated crimes continue to affect US organisations, whilst critical infrastructure remains a target for foreign actors.

In this context, government institutions and private-sector companies such as Google are signalling their readiness to pursue more proactive strategies in cyber defence. The extent and implications of these developments remain uncertain, but they represent a marked departure from previous approaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!