US regulators offer clarity on spot crypto products

The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify the rules for spot cryptocurrency trading. Regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.

The guidance follows recommendations from the President’s Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.

Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.

Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.

The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.

In July, the House of Representatives passed the CLARITY Act, a crypto market structure bill now before the Senate. Together with the regulators’ statement, it marks a key step in aligning US digital asset markets with established financial rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alleged Apple ID exposure affects 184 million accounts

A report has highlighted a potential exposure of Apple ID logins after a 47.42 GB database was discovered on an unsecured web server, reportedly affecting up to 184 million accounts.

The database was identified by security researcher Jeremiah Fowler, who indicated it may contain unencrypted credentials for Apple services and other platforms.

Security experts recommend users review account security, including updating passwords and enabling two-factor authentication.

The alleged database contains usernames, email addresses, and passwords, which could allow access to iCloud, App Store accounts, and data synced across devices.

Observers note that centralised credential management carries inherent risks, underscoring the importance of careful data handling practices.

Reports suggest that Apple’s email software flaws could theoretically increase risk if combined with exposed credentials.

Apple has acknowledged researchers’ contributions in identifying server issues and has issued security updates, while users are advised to remain vigilant and follow standard security practices.

The case illustrates the challenges of safeguarding large-scale digital accounts and may prompt continued discussion about regulatory standards and personal data protection.

Users are advised to maintain strong credentials and monitor account activity.
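
For readers who want to act on that advice, the sketch below shows one common way to check whether a password has appeared in a known breach, using the public Have I Been Pwned range API. Only the first five characters of the password’s SHA-1 hash leave the machine (k-anonymity), so the password itself is never transmitted. This is a minimal illustrative example, not an Apple tool.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches,
    per the public Have I Been Pwned range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity: only the 5-character hash prefix is transmitted.
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line has the form "HASH_SUFFIX:COUNT".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# Illustration only; never paste a live password into scripts or logs.
print(pwned_count("letmein123"))
```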

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DOGE transfers Social Security data to the cloud, sources say

A whistle-blower has reported that the Department of Government Efficiency (DOGE) allegedly transferred a copy of the US Social Security database to an Amazon Web Services cloud environment.

The transfer reportedly placed personal information for more than 300 million individuals in a system outside traditional federal oversight.

Known as NUMIDENT, the database contains information submitted for Social Security applications, including names, dates of birth, addresses, citizenship, and parental details.

According to the whistle-blower, DOGE personnel managed the cloud environment and were granted administrative access to perform testing and operational tasks.

Federal officials have highlighted that standard security protocols and authorisations, such as those outlined under the Federal Information Security Management Act (FISMA) and the Privacy Act of 1974, are designed to protect sensitive data.

The transfer has prompted internal reviews and raised questions about compliance with established federal security practices.

While DOGE has not fully clarified the purpose of the cloud deployment, observers note that such initiatives may relate to broader federal efforts to improve data accessibility or inter-agency information sharing.

The case is part of ongoing discussions on balancing operational flexibility with information security in government systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
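
As a rough illustration of the two-layer approach the rules describe, the sketch below adds a visible label and embeds provenance metadata in a PNG file using the Pillow library. The metadata keys are hypothetical placeholders; the Chinese standard prescribes its own specific marking formats.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src: str, dst: str) -> None:
    img = Image.open(src).convert("RGB")

    # Explicit marking: a visible text label drawn onto the image.
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-generated", fill=(255, 255, 255))

    # Implicit marking: provenance metadata embedded in the file.
    # These keys are illustrative, not the mandated Chinese format.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")

    img.save(dst, "PNG", pnginfo=meta)

label_ai_image("output.png", "output_labelled.png")
```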

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
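
The routing OpenAI describes can be pictured as a simple triage policy. The sketch below is a hypothetical reconstruction for illustration only; the labels, escalation steps, and actions are assumptions, not OpenAI’s actual moderation pipeline.

```python
from enum import Enum, auto

class Action(Enum):
    SHOW_CRISIS_RESOURCES = auto()   # self-harm: direct to professional help
    ESCALATE_TO_MODERATORS = auto()  # threats to others: human review
    NOTIFY_AUTHORITIES = auto()      # imminent risk confirmed by reviewers
    CONTINUE = auto()                # no safety signal detected

def triage(risk_label: str, imminent: bool) -> Action:
    """Hypothetical routing: self-harm is never referred to police,
    while credible threats to others can be."""
    if risk_label == "self_harm":
        return Action.SHOW_CRISIS_RESOURCES
    if risk_label == "harm_to_others":
        return Action.NOTIFY_AUTHORITIES if imminent else Action.ESCALATE_TO_MODERATORS
    return Action.CONTINUE

assert triage("self_harm", imminent=True) is Action.SHOW_CRISIS_RESOURCES
assert triage("harm_to_others", imminent=False) is Action.ESCALATE_TO_MODERATORS
```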

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and is developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Disruption unit planned by Google to boost proactive cyber defence

Google is reportedly preparing to adopt a more active role in countering cyber threats directed at itself and, potentially, other United States organisations and elements of national infrastructure.

Sandra Joyce, Vice President of the Google Threat Intelligence Group, said the company intends to establish a ‘disruption unit’ in the coming months.

Joyce explained that the initiative will involve ‘intelligence-led proactive identification of opportunities where we can actually take down some type of campaign or operation,’ stressing the need to shift from a reactive to a proactive stance.

The announcement was made at an event organised by the Centre for Cybersecurity Policy and Law, which in May published a report questioning whether the US government should allow private-sector entities to engage in offensive cyber operations, whether deterrence is better achieved through non-cyber responses, or whether the focus ought to be on strengthening defensive measures.

The US government’s policy direction emphasises offensive capabilities. In July, Congress passed the ‘One Big Beautiful Bill Act’, allocating $1 billion to offensive cyber operations. However, this came amidst ongoing debates regarding the balance between offensive and defensive measures, including those overseen by the Cybersecurity and Infrastructure Security Agency (CISA).

Although the legislation does not authorise private companies such as Google to participate directly in offensive operations, it highlights the administration’s prioritisation of such activities.

On 15 August, lawmakers introduced the Scam Farms Marque and Reprisal Authorisation Act of 2025. If enacted, the bill would permit the President to issue letters of marque and reprisal in response to acts of cyber aggression involving criminal enterprises. The full text of the bill is available on Congress.gov.

The measure draws upon a concept historically associated with naval conflict, whereby private actors were empowered to act on behalf of the state against its adversaries.

These legislative initiatives reflect broader efforts to recalibrate the United States’ approach to deterring cyberattacks. Ransomware campaigns, intellectual property theft, and financially motivated crimes continue to affect US organisations, whilst critical infrastructure remains a target for foreign actors.

In this context, government institutions and private-sector companies such as Google are signalling their readiness to pursue more proactive strategies in cyber defence. The extent and implications of these developments remain uncertain, but they represent a marked departure from previous approaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US appeals court reverses key findings in Sonos-Google patent case

The US Court of Appeals for the Federal Circuit (CAFC) has reversed in part and affirmed in part a district court decision in the ongoing legal battle between Sonos and Google over smart speaker technologies. The appeals court reversed the district court’s finding that Sonos’s ‘Zone Scene’ patents were unenforceable due to prosecution laches, a legal doctrine that can bar the enforcement of patents if the owner unreasonably delays in pursuing claims.

The district court had held that Sonos waited too long (13 years) to file specific claims following its 2006 provisional application, allegedly prejudicing Google, which had begun developing similar products by 2015.

However, the CAFC found that Google had failed to establish actual prejudice. It noted a lack of evidence that Google had meaningfully invested in the accused technology based on the assumption that Sonos had not already invented it. As a result, the court held that the lower court had abused its discretion in declaring the patents unenforceable.

The CAFC also reversed the district court’s invalidation of the Zone Scene patents for lack of written description, citing sufficient detail in Sonos’s 2019 patents. Google’s argument that the patents described only alternative embodiments was rejected, particularly as Google had presented no expert testimony to rebut Sonos’s claims.

Case background

In 2020, Sonos filed a lawsuit against Google in the US, accusing it of infringing key patents related to wireless multi-room speaker technology. Sonos claimed that, after the two companies had collaborated years earlier, Google used Sonos’s proprietary technology without permission in products like Google Home and Chromecast.

In 2022, the US International Trade Commission sided with Sonos, leading to a limited import ban on some Google products. In response, Google had to remove or change certain features, such as group volume control.

However, Google later challenged the validity of Sonos’s patents, and some were ruled invalid by a federal court. The legal battle has continued in various jurisdictions, reflecting broader conflicts over intellectual property rights and innovation in the tech world. Both companies have appealed different aspects of the rulings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic updates Claude’s policy with new data training choices

Anthropic, the US AI startup, has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by 28 September 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ENISA takes charge of new EU Cybersecurity Reserve operations with €36 million in funding

The European Commission has signed a contribution agreement with the European Union Agency for Cybersecurity (ENISA), assigning the agency responsibility for operating and administering the EU Cybersecurity Reserve.

The arrangement includes a €36 million allocation over three years, complementing ENISA’s existing budget.

The EU Cybersecurity Reserve, established under the EU Cyber Solidarity Act, will provide incident response services through trusted managed security providers.

The services are designed to support EU Member States, institutions, and critical sectors in responding to large-scale cybersecurity incidents, with access also available to third countries associated with the Digital Europe Programme.

ENISA will oversee the procurement of these services and assess requests from national authorities and EU bodies, while also working with the Commission and EU-CyCLONe to coordinate crisis response.

If not activated for incident response, the pre-committed services may be redirected towards prevention and preparedness measures.

The reserve is expected to become fully operational by the end of 2025, aligning with the planned conclusion of ENISA’s existing Cybersecurity Support Action in 2026.

ENISA is also preparing a candidate certification scheme for Managed Security Services, with a focus on incident response, in line with the Cyber Solidarity Act.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!