FTC cautions US tech firms over compliance with EU and UK online safety laws

The US Federal Trade Commission (FTC) has warned American technology companies that following European Union and United Kingdom rules on online content and encryption could place them in breach of US legislation.

In a letter sent to chief executives, FTC Chair Andrew Ferguson said that restricting access to content for American users to comply with foreign legal requirements might amount to a violation of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive commercial practices.

Ferguson cited the EU’s Digital Services Act and the UK’s Online Safety Act, as well as reports of British efforts to gain access to encrypted Apple iCloud data, as examples of measures that could put companies at risk under US law.

Although Section 5 has traditionally been used in cases concerning consumer protection, Ferguson noted that the same principles could apply if companies changed their services for US users due to foreign regulation. He argued that such changes could ‘mislead’ American consumers, who would not reasonably expect their online activity to be governed by overseas restrictions.

The FTC chair invited company leaders to meet with his office to discuss how they intend to balance demands from international regulators while continuing to fulfil their legal obligations in the United States.

Earlier this week, a senior US intelligence official said the British government had withdrawn a proposed legal measure aimed at Apple’s encrypted iCloud data after discussions with US Vice President JD Vance.

The issue has arisen amid tensions over the enforcement of UK online safety rules. Several online platforms, including 4chan, Gab, and Kiwi Farms, have publicly refused to comply, and British authorities have indicated that internet service providers could ultimately be ordered to block access to such sites.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Copilot policy flaw allows unauthorized access to AI agents

Administrators found that Microsoft Copilot’s intended ‘NoUsersCanAccessAgent’ policy, which is designed to prevent user access to certain AI agents, is being ignored. Some agents, including ExpenseTrackerBot and HRQueryAgent, remain installable despite global restrictions.

Microsoft 365 tenants must now use per-agent PowerShell commands to disable access manually. This workaround is both time-consuming and error-prone, particularly in large organisations. The failure to enforce access policies raises concerns regarding operational overhead and audit risk.

The security implications are significant. Unauthorised agents can export data from SharePoint or OneDrive, run RPA workflows without oversight, or process sensitive information without compliance controls. The flaw undermined the purpose of access control settings and exposed the system to misuse.

To mitigate this risk, administrators are urged to audit agent inventories, enforce Conditional Access policies, for example, requiring MFA or device compliance, and consistently monitor agent usage through logs and dashboards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gmail accounts targeted in phishing wave after Google data leak

Hackers linked to the ShinyHunters group have compromised Google’s Salesforce systems, leading to a data leak that puts Gmail and Google Cloud users at risk of phishing attacks.

Google confirmed that customer and company names were exposed, though no passwords were stolen. Attackers are now exploiting the breach with phishing schemes, including fake account resets and malware injection attempts through outdated access points.

With Gmail and Google Cloud serving around 2.5 billion users worldwide, both companies and individuals could be targeted. Early reports on Reddit describe callers posing as Google staff warning of supposed account breaches.

Google urges users to strengthen protections by running its Security Checkup, enabling Advanced Protection, and switching to passkeys instead of passwords. The company emphasised that its staff never initiates unsolicited password resets by phone or email.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

INTERPOL reports over 1,200 arrests in Africa-wide cybercrime operation

INTERPOL has announced that a continent-wide law enforcement initiative targeting cybercrime and fraud networks led to more than 1,200 arrests between June and August 2025. The operation, known as Serengeti 2.0, was carried out across multiple African states and focused on ransomware, online fraud, and business email compromise schemes. Authorities reported the recovery of approximately USD 97.4 million, allegedly stolen from more than 88,000 victims worldwide.

In Angola, police closed 25 unauthorised cryptocurrency mining sites, reportedly operated by 60 Chinese nationals. In Zambia, authorities dismantled a large-scale fraudulent investment scheme involving cryptocurrency platforms, which is estimated to have defrauded around 65,000 individuals of roughly USD 300 million. Fifteen suspects were detained, and assets, including domains, mobile numbers, and bank accounts, were seized.

In a separate raid in Lusaka, police disrupted a suspected human trafficking network and confiscated hundreds of forged passports from seven different countries.

INTERPOL has previously noted that Africa’s rapid uptake of digital technologies, particularly in finance and e-commerce, has increased the scope for cybercriminal activity. At the same time, comparatively weak cybersecurity frameworks have left financial institutions and government systems exposed to data breaches, economic losses, and disruption to trade.

Separately, in June, a Nigerian court sentenced nine Chinese nationals to prison for running an online fraud syndicate that recruited young Nigerians. Following the verdict, China’s ambassador to Nigeria proposed the creation of a joint working group to investigate cybercrime involving Chinese nationals in the region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud’s new AI tools expand enterprise threat protection

Following last week’s announcements on AI-driven cybersecurity, Google Cloud has unveiled further tools at its Security Summit 2025 aimed at protecting enterprise AI deployments and boosting efficiency for security teams.

The updates build on prior innovations instead of replacing them, reinforcing Google’s strategy of integrating AI directly into security operations.

Vice President and General Manager Jon Ramsey highlighted the growing importance of agentic approaches as AI agents operate across increasingly complex enterprise environments.

Building on the previous rollout, Google now introduces Model Armor protections, designed to shield AI agents from prompt injections, jailbreaking, and data leakage, enhancing safeguards without interrupting existing workflows.

Additional enhancements include the Alert Investigation agent, which automates event enrichment and analysis while offering actionable recommendations.

By combining Mandiant threat intelligence feeds with Google’s Gemini AI, organisations can now detect and respond to incidents across distributed agent networks more rapidly and efficiently than before.

SecOps Labs and updated SOAR dashboards provide early access to AI-powered threat detection experiments and comprehensive visualisations of security operations.

These tools allow teams to continue scaling agentic AI security, turning previous insights into proactive, enterprise-ready protections for real-world deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Energy and government sectors in Poland face mounting hacktivist threats

Poland has become the leading global target for politically and socially motivated cyberattacks, recording over 450 incidents in the second quarter of 2025, according to Spain’s Industrial Cybersecurity Center.

The report ranked Poland ahead of Ukraine, the UK, France, Germany, and other European states in hacktivist activity. Government institutions and the energy sector were among the most targeted, with organisations supporting Ukraine described as especially vulnerable.

ZIUR’s earlier first-quarter analysis had warned of a sharp rise in attacks against state bodies across Europe. Pro-Russian groups were identified as among the most active, increasingly turning to denial-of-service campaigns to disrupt critical operations.

Europe accounted for the largest share of global hacktivism in the second quarter, with more than 2,500 successful denial-of-service attacks recorded between April and June, underlining the region’s heightened exposure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia weighs cyber militia to counter rising digital threats

Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.

The 2023–2030 Cyber Security Strategy designed by the Government of Australia aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.

One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by the cyber defence unit in Estonia, this network would mobilise unconventional talent, retirees, hobbyist hackers, and students, to bolster monitoring, threat hunting, and incident response.

Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.

Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-designed proteins could transform longevity and drug development

OpenAI has launched GPT-4b micro, an AI model developed with longevity startup Retro Biosciences to accelerate protein engineering. Unlike chatbots, it focuses on biological sequences and 3D structures.

The model redesigned two Yamanaka factors- proteins that convert adult cells into stem cells, showing 50-fold higher efficiency in lab tests and improved DNA repair. Older cells acted more youthful, potentially shortening trial-and-error in regenerative medicine.

AI-designed proteins could speed up drug development and allow longevity startups to rejuvenate cells safely and consistently. The work also opens new possibilities in synthetic biology beyond natural evolution.

OpenAI emphasised that the research is still early and lab-based, with clinical applications requiring caution. Transparency is key, as the technology’s power to design potent proteins quickly raises biosecurity considerations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!