Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees to bypass IT support and hijack multi-factor authentication processes.

Instead of relying on old techniques, the hackers now deploy stealthy tools like RattyRAT and DragonForce ransomware, particularly targeting VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Okta Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, they have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.

More incidents are expected, as the group continues refining its strategies instead of slowing down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions—most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company announces it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

VPN dangers highlighted as UK’s Online Safety Act comes into force

Britons are being urged to proceed with caution before turning to virtual private networks (VPNs) in response to the new age verification requirements set by the Online Safety Act.

The law, now in effect, aims to protect young users by restricting access to adult and sensitive content unless users verify their age.

Instead of offering anonymous access, some platforms now demand personal details such as full names, email addresses, and even bank information to confirm a user’s age.

Although the legislation targets adult websites, many people have reported being blocked from accessing less controversial content, including alcohol-related forums and parts of Wikipedia.

As a result, more users are considering VPNs to bypass these checks. However, cybersecurity experts warn that many VPNs can pose serious risks by exposing users to scams, data theft, and malware. Without proper research, users might install software that compromises their privacy rather than protecting it.

With Ofcom reporting that eight per cent of children aged 8 to 14 in the UK have accessed adult content online, the new rules are viewed as a necessary safeguard. Still, concerns remain about the balance between online safety and digital privacy for adult users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Now, firms are switching to cloud-first strategies that target vulnerabilities at endpoints and prevent misconfigurations instead of relying on legacy solutions.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing, instead of relying on foreign suppliers.

Alongside, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-resistant crypto wallets now possible without address changes

Sui Research revealed a way for blockchain wallets to upgrade for quantum safety without a hard fork or address changes. The approach, based on EdDSA cryptography, allows compatible networks like Solana, Sui and Near to transition securely with minimal disruption.

Cryptographer Kostas Chalkias described the breakthrough as the first backward-compatible path to quantum safety for wallets. The upgrade uses zero-knowledge proofs to verify private key control without exposing data, keeping original public keys and supporting dormant accounts.

While praised as one of the most important cryptographic advancements in recent years, the upgrade method does not apply to Bitcoin or Ethereum. These networks use different signature methods, which may need bigger changes to stay secure as quantum tech evolves.

Although quantum computers are not yet capable of breaking blockchain encryption, researchers and developers are racing to prepare. The risk of millions of wallets becoming vulnerable has triggered serious debate in the crypto community.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NATO highlights cyber vulnerabilities in European ports

A recent policy brief from NATO’s Cooperative Cyber Defence Centre of Excellence (CCDCOE) indicates that Europe’s civilian ports, which handle approximately 80% of international trade and support NATO logistics, are increasingly targeted by cyberattacks linked to state-affiliated actors. The report identifies a rise in disruptions affecting port access control systems and vessel traffic management across various countries, with suspected involvement from groups associated with Russia, Iran, and China.

The document notes that NATO’s current maritime strategy lacks formal mechanisms to engage with commercial port operators, who manage critical infrastructure exposed to cyber threats. It calls for updated strategic frameworks to improve coordination between civil and military sectors, and to enhance cybersecurity and resilience across digital, operational, and energy systems in ports.

The brief outlines common attack methods, such as denial-of-service, phishing, ransomware, and malware, which have affected numerous maritime organisations in 2024.

Key recommendations include:

  • Updating NATO’s 2011 maritime strategy to integrate cybersecurity and establish engagement channels with commercial port operators.
  • Establishing sector-specific intelligence-sharing frameworks to support timely incident response.
  • Developing coordinated public–private action plans and resilience measures at both national and alliance levels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Aeroflot cyberattack cripples Russian flights in major breach

A major cyberattack on Russia’s flagship airline Aeroflot has caused severe disruptions to flights, with hundreds of passengers stranded at airports. Responsibility was claimed by two hacker groups: Ukraine’s Silent Crow and the Belarusian hacktivist collective Belarus Cyber-Partisans.

The attack is among the most damaging cyber incidents Russia has faced since the full-scale invasion of Ukraine in February 2022. Past attacks disrupted government portals and large state-run firms such as Russian Railways, but most resumed operations quickly. This time, the effects were longer-lasting.

Social media showed crowds of delayed passengers packed into Moscow’s Sheremetyevo Airport, Aeroflot’s main hub. The outage affected not only Aeroflot but also its subsidiaries, Rossiya and Pobeda.

Most of the grounded flights were domestic. However, international services to Belarus, Armenia, and Uzbekistan were also cancelled or postponed due to the IT failure.

Early on Monday, Aeroflot issued a statement warning of unspecified problems with its IT infrastructure. The company alerted passengers that delays and disruptions were likely as a result.

Later, Russia’s Prosecutor’s Office confirmed that the outage was the result of a cyberattack. It announced the opening of a criminal case and launched an investigation into the breach.

Kremlin spokesperson Dmitry Peskov described the incident as ‘quite alarming’, admitting that cyber threats remain a serious risk for all major service providers operating at scale.

In a Telegram post, Silent Crow claimed it had maintained access to Aeroflot’s internal systems for over a year. The group stated it had copied sensitive customer data, internal communications, audio recordings, and surveillance footage collected on Aeroflot employees.

The hackers claimed that all of these resources had now either been destroyed or made inaccessible. ‘Restoring them will possibly require tens of millions of dollars. The damage is strategic,’ the group wrote.

Screenshots allegedly showing Aeroflot’s compromised IT dashboards were shared via the same Telegram channel. Silent Crow hinted it may begin publishing the stolen data in the coming days.

It added: ‘The personal data of all Russians who have ever flown with Aeroflot have now also gone on a trip — albeit without luggage and to the same destination.’

The Belarus Cyber-Partisans, who have opposed Belarusian President Alexander Lukashenko’s authoritarian regime for years, said the attack was carefully planned and intended to cause maximum disruption.

‘This is a very large-scale attack and one of the most painful in terms of consequences,’ said group coordinator Yuliana Shametavets. She told The Associated Press that the group spent months preparing the strike and accessed Aeroflot’s systems by exploiting several vulnerabilities.

The Cyber-Partisans have previously claimed responsibility for other high-profile hacks. In April 2024, they said they had breached the internal network of Belarus’s state security agency, the KGB.

Belarus remains a close ally of Russia. Lukashenko, in power for over three decades, has permitted Russia to use Belarusian territory as a staging ground for the invasion of Ukraine and to deploy tactical nuclear weapons on Belarusian soil.

Russia’s aviation sector has already faced repeated interruptions this summer, often caused by Ukrainian drone attacks on military or dual-use airports. Flights have been grounded multiple times as a precaution, disrupting passenger travel.

The latest cyberattack adds a new layer of difficulty, exposing the vulnerability of even the most protected elements of Russia’s transportation infrastructure. While the full extent of the data breach is yet to be independently verified, the implications could be long-lasting.

For now, it remains unclear how long it will take Aeroflot to fully restore services or what specific data may have been leaked. Both hacker groups appear determined to continue using cyber tools as a weapon of resistance — targeting Russia’s most symbolic assets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google backs EU AI Code but warns against slowing innovation

Google has confirmed it will sign the European Union’s General Purpose AI Code of Practice, joining other companies, including major US model developers.

The tech giant hopes the Code will support access to safe and advanced AI tools across Europe, where rapid adoption could add up to €1.4 trillion annually to the continent’s economy by 2034.

Kent Walker, Google and Alphabet’s President of Global Affairs, said the final Code better aligns with Europe’s economic ambitions than earlier drafts, noting that Google had submitted feedback during its development.

However, he warned that parts of the Code and the broader AI Act might hinder innovation by introducing rules that stray from EU copyright law, slow product approvals or risk revealing trade secrets.

Walker explained that such requirements could restrict Europe’s ability to compete globally in AI. He highlighted the need to balance regulation with the flexibility required to keep pace with technological advances.

Google stated it will work closely with the EU’s new AI Office to help shape a proportionate, future-facing approach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act begins as tech firms push back

Europe’s AI crackdown officially begins soon, as the EU enforces the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU also distinguishes itself from the US, where a re-elected Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should not rely on future political deals instead of taking immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!