Azure Active Directory flaw exposes sensitive credentials

A critical security flaw in Azure Active Directory has exposed application credentials, including client IDs and secrets, stored in appsettings.json configuration files, allowing attackers unprecedented access to Microsoft 365 tenants.

By exploiting these credentials, threat actors can masquerade as trusted applications and gain unauthorised entry to sensitive organisational data.

Attackers exploit the exposure through the OAuth 2.0 client credentials flow: armed with a valid client ID and secret, they can request legitimate access tokens directly from Microsoft's identity platform.
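A minimal sketch of that flow, using the MSAL Python library; the tenant, client, and secret values below are hypothetical stand-ins for the kind of entries a leaked appsettings.json might contain.

```python
# Sketch of the OAuth 2.0 client credentials flow with the MSAL Python
# library. All identifiers are hypothetical placeholders mirroring
# typical appsettings.json keys (TenantId, ClientId, ClientSecret).
import msal

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # hypothetical
CLIENT_ID = "11111111-1111-1111-1111-111111111111"   # hypothetical
CLIENT_SECRET = "leaked-secret-value"                # hypothetical

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# App-only token for Microsoft Graph: whoever holds the secret receives
# a token carrying every permission granted to the application.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)
access_token = result.get("access_token")
```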

Once authenticated, they can access Microsoft Graph APIs to enumerate users, groups, and directory roles, especially when applications have been granted excessive permissions such as Directory.Read.All or Mail.Read. Such access permits data harvesting across SharePoint, OneDrive, and Exchange Online.
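Continuing the sketch, a stolen token can simply be replayed against Graph's standard enumeration endpoints. The endpoints below are real Graph v1.0 routes; the calls succeed only if the compromised application was granted the broad permissions described above.

```python
# Sketch of directory enumeration with a stolen app-only token
# (obtained as in the previous snippet).
import requests

access_token = "eyJ..."  # token acquired via the client credentials flow

headers = {"Authorization": f"Bearer {access_token}"}
for resource in ("users", "groups", "directoryRoles"):
    resp = requests.get(
        f"https://graph.microsoft.com/v1.0/{resource}",
        headers=headers,
        timeout=30,
    )
    if resp.ok:
        print(resource, "->", len(resp.json().get("value", [])), "entries")
```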

Attackers can also deploy malicious applications under compromised tenants, escalating privileges from limited read access to complete administrative control.

Additional exposed secrets like storage account keys or database connection strings enable lateral movement, modification of critical data, and the creation of persistent backdoors within cloud infrastructure.

Organisations face profound compliance implications under GDPR, HIPAA, or SOX. The vulnerability underscores the importance of auditing configuration files, storing credentials securely in solutions like Azure Key Vault, and monitoring authentication patterns to detect long-running, sophisticated attacks.
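As an illustration of the Key Vault recommendation, the sketch below fetches a secret at runtime with Azure's Python SDKs rather than committing it to appsettings.json; the vault URL and secret name are hypothetical placeholders.

```python
# Remediation sketch: resolve the secret from Azure Key Vault at runtime.
# DefaultAzureCredential picks up a managed identity when running in
# Azure, so no secret ever needs to live in a config file.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # hypothetical vault
    credential=DefaultAzureCredential(),
)
client_secret = client.get_secret("graph-client-secret").value  # hypothetical name
```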

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
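The dual requirement, a visible label plus a machine-readable marking, can be pictured with a small sketch. The snippet below uses the Pillow library to embed an illustrative flag in a PNG's metadata; the field names and values are hypothetical and do not follow the actual schema mandated by the Chinese standard.

```python
# Illustrative only: embed a machine-readable AI-content flag in PNG
# metadata with Pillow. The "AIGC" key and its value are hypothetical,
# not the officially mandated schema.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("generated.png")
meta = PngInfo()
meta.add_text("AIGC", "true")                 # hypothetical implicit label
meta.add_text("Generator", "example-model-v1")
img.save("generated_labelled.png", pnginfo=meta)
```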

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
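OpenAI has not published its internal escalation pipeline, but its public moderation endpoint exposes the same category distinctions. The sketch below shows how a developer might build a similar triage on that endpoint; the routing rules are a hypothetical illustration of the policy described above, not OpenAI's actual system.

```python
# Hypothetical triage sketch using OpenAI's public moderation endpoint.
# The routing mirrors the described policy but is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(message: str) -> str:
    result = client.moderations.create(
        model="omni-moderation-latest", input=message
    ).results[0]
    if result.categories.self_harm:
        return "show_crisis_resources"      # direct the user to professional help
    if result.categories.violence:
        return "escalate_to_human_review"   # trained moderators assess the risk
    return "continue_conversation"
```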

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Disruption unit planned by Google to boost proactive cyber defence

Google is reportedly preparing to adopt a more active role in countering cyber threats directed at itself and, potentially, other United States organisations and elements of national infrastructure.

Sandra Joyce, Vice President of the Google Threat Intelligence Group, stated that the company intends to establish a ‘disruption unit’ in the coming months.

Joyce explained that the initiative will involve ‘intelligence-led proactive identification of opportunities where we can actually take down some type of campaign or operation,’ stressing the need to shift from a reactive to a proactive stance.

The announcement was made at an event organised by the Centre for Cybersecurity Policy and Law, which in May published a report asking whether the US government should allow private-sector entities to engage in offensive cyber operations, whether deterrence is better achieved through non-cyber responses, and whether the focus ought to be on strengthening defensive measures.

The US government’s policy direction emphasises offensive capabilities. In July, Congress passed the ‘One Big Beautiful Bill Act’, allocating $1 billion to offensive cyber operations. However, the move came amidst ongoing debate about the balance between offensive and defensive measures, including those overseen by the Cybersecurity and Infrastructure Security Agency (CISA).

Although the legislation does not authorise private companies such as Google to participate directly in offensive operations, it highlights the administration’s prioritisation of such activities.

On 15 August, lawmakers introduced the Scam Farms Marque and Reprisal Authorisation Act of 2025. If enacted, the bill would permit the President to issue letters of marque and reprisal in response to acts of cyber aggression involving criminal enterprises. The full text of the bill is available on Congress.gov.

The measure draws upon a concept historically associated with naval conflict, whereby private actors were empowered to act on behalf of the state against its adversaries.

These legislative initiatives reflect broader efforts to recalibrate the United States’ approach to deterring cyberattacks. Ransomware campaigns, intellectual property theft, and financially motivated crimes continue to affect US organisations, whilst critical infrastructure remains a target for foreign actors.

In this context, government institutions and private-sector companies such as Google are signalling their readiness to pursue more proactive strategies in cyber defence. The extent and implications of these developments remain uncertain, but they represent a marked departure from previous approaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Historic Bitcoin event set for November in San Salvador

El Salvador will host the world’s first state-sponsored Bitcoin conference, Bitcoin Histórico, on 12–13 November 2025 in San Salvador’s historic centre. The two-day event, organised by the National Bitcoin Office, will focus on money, culture, and crypto innovation, with early bird tickets available in Bitcoin or fiat.

Centro Histórico will be transformed into a hub for discussions, workshops, and cultural exchange. Keynote addresses at the National Palace will be broadcast to Plaza Gerardo Barrios, with additional sessions held at the National Library and National Theatre.

Speakers include billionaire Ricardo Salinas, author Jeff Booth, Bitcoin advocates Max Keiser and Stacy Herbert, Lightning Network developer Jack Mallers, and industry figures Pierre Rochard, Jimmy Song, Darin Feinstein, and Lina Seiche.

El Salvador’s government currently holds 6,220 BTC. It also recently amended the constitution to extend presidential terms, allowing President Nayib Bukele to seek another term.

The conference will address regulation, infrastructure, power use, financial inclusion, price volatility, and public understanding, offering developing nations guidance on adopting cryptocurrency.

The announcement coincides with a BTC recovery, trading above $109,175 following last week’s dip. Institutional demand remains strong, with Japanese company Metaplanet adding 1,009 BTC, while US spot ETFs recorded $440 million weekly inflows.

Anticipation of a Fed rate cut may further support Bitcoin and other risk assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Political backlash mounts as Meta revises AI safety policies

Meta has announced that it will train its AI chatbot to prioritise the safety of teenage users and will no longer engage with them on sensitive topics such as self-harm, suicide, or eating disorders.

These are described as interim measures, with more robust safety policies expected in the future. The company also plans to restrict teenagers’ access to certain AI characters that could lead to inappropriate conversations, limiting them to characters focused on education and creativity.

The move follows a Reuters report revealing that Meta’s AI had engaged in sexually explicit conversations with underage users, according to TechCrunch. Meta has since revised the internal document cited in the report, stating that it was inconsistent with the company’s broader policies.

The revelations have prompted significant political and legal backlash. Senator Josh Hawley has launched an official investigation into Meta’s AI practices.

At the same time, a coalition of 44 state attorneys general has written to several AI companies, including Meta, emphasising the need to protect children online.

The letter condemned the apparent disregard for young people’s emotional well-being and warned that the AI’s behaviour may breach criminal laws.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.
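The identity-management measures described above combine two standard controls. Below is a minimal, generic sketch of how role-based checks and an MFA gate fit together; the role and permission names are hypothetical and not drawn from Pakistan's framework.

```python
# Generic illustration of role-based access control with an MFA gate,
# in the spirit of the measures described above. Roles, permissions,
# and names are hypothetical.
ROLE_PERMISSIONS = {
    "auditor":  {"read_logs"},
    "operator": {"read_logs", "restart_service"},
    "admin":    {"read_logs", "restart_service", "modify_policy"},
}

def is_authorised(role: str, action: str, mfa_verified: bool) -> bool:
    # Every privileged action requires a completed MFA challenge in
    # addition to a role that actually carries the permission.
    return mfa_verified and action in ROLE_PERMISSIONS.get(role, set())

assert is_authorised("admin", "modify_policy", mfa_verified=True)
assert not is_authorised("auditor", "modify_policy", mfa_verified=True)
assert not is_authorised("admin", "modify_policy", mfa_verified=False)
```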

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen of the bots shortly before Reuters published its report.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!