Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees to trick IT help desks into resetting credentials and hijacking multi-factor authentication processes.

The hackers now also deploy stealthier tools, including the RattyRAT remote access trojan and DragonForce ransomware, with a particular focus on VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Octo Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, its members have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.

More incidents are expected, as the group shows no sign of slowing down and continues to refine its tactics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China demands Nvidia explain security flaws in H20 chips

China’s top internet regulator has summoned Nvidia to explain alleged security concerns linked to its H20 computing chips.

The Cyberspace Administration of China stated that the chips, which are sold domestically, may contain backdoor vulnerabilities that could pose risks to users and systems.

Nvidia has been asked to submit technical documentation and provide a formal response addressing the alleged flaws.

The chips are part of Nvidia’s tailored product line for the Chinese market following US export restrictions on advanced AI processors.

The investigation signals tighter scrutiny from Chinese authorities on foreign technology amid ongoing geopolitical tensions and a global race for semiconductor dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian companies unite cybersecurity defences to combat AI threats

Australian companies are increasingly adopting unified, cloud-based cybersecurity systems as AI reshapes both threats and defences.

A new report from global research firm ISG reveals that many enterprises are shifting away from fragmented, uncoordinated tools and instead opting for centralised platforms that can better detect and counter sophisticated AI-driven attacks.

The rapid rise of generative AI has introduced new risks, including deepfakes, voice cloning and misinformation campaigns targeting elections and public health.

In response, organisations are reinforcing identity protections and integrating AI into their security operations to improve both speed and efficiency. These tools also help offset a growing shortage of cybersecurity professionals.

After a rushed move to the cloud during the pandemic, many businesses retained outdated perimeter-focused security systems. Firms are now replacing those legacy solutions with cloud-first strategies that address endpoint vulnerabilities and prevent misconfigurations.

By reducing overlap in systems like identity management and threat detection, businesses are streamlining defences for better resilience.

ISG also notes a shift in how companies choose cybersecurity providers. Firms like IBM, PwC, Deloitte and Accenture are seen as leaders in the Australian market, while companies such as TCS and AC3 have been flagged as rising stars.

The report further highlights growing demands for compliance and data retention, signalling a broader national effort to enhance cyber readiness across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

White House launches AI Action Plan with Executive Orders on exports and regulation

The White House has unveiled a sweeping AI strategy through its new publication Winning the Race: America’s AI Action Plan.

Released alongside three Executive Orders, the plan outlines the federal government’s next phase in shaping AI policy, focusing on innovation, infrastructure, and global leadership.

The AI Action Plan centres on three key pillars: accelerating AI development, establishing national AI infrastructure, and promoting American AI standards globally. Four consistent themes run through each pillar: regulation and deregulation, investment, research and standardisation, and cybersecurity.

Notably, deregulation is central to the plan’s strategy, particularly in reducing barriers to AI growth and speeding up infrastructure approval for data centres and grid expansion.

Investment plays a dominant role. Federal funds will support AI job training, data access, lab automation, and domestic component manufacturing to reduce reliance on foreign suppliers.

Alongside these measures, the plan calls for new national standards, improved dataset quality, and stronger evaluation mechanisms for AI interpretability, control, and safety. A dedicated AI Workforce Research Hub is also proposed.

In parallel, three Executive Orders were issued. One bans ‘woke’ or ideologically biased AI tools in federal use, another fast-tracks data centre development using federal land and brownfield sites, and a third launches an AI exports programme to support full-stack US AI systems globally.

While these moves open new opportunities, they also raise questions around regulation, bias, and the future shape of AI development in the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NATO highlights cyber vulnerabilities in European ports

A recent policy brief from NATO’s Cooperative Cyber Defence Centre of Excellence (CCDCOE) indicates that Europe’s civilian ports, which handle approximately 80% of international trade and support NATO logistics, are increasingly targeted by cyberattacks linked to state-affiliated actors. The report identifies a rise in disruptions affecting port access control systems and vessel traffic management across various countries, with suspected involvement from groups associated with Russia, Iran, and China.

The document notes that NATO’s current maritime strategy lacks formal mechanisms to engage with commercial port operators, who manage critical infrastructure exposed to cyber threats. It calls for updated strategic frameworks to improve coordination between civil and military sectors, and to enhance cybersecurity and resilience across digital, operational, and energy systems in ports.

The brief outlines common attack methods, such as denial-of-service, phishing, ransomware, and malware, which have affected numerous maritime organisations in 2024.

Key recommendations include:

  • Updating NATO’s 2011 maritime strategy to integrate cybersecurity and establish engagement channels with commercial port operators.
  • Establishing sector-specific intelligence-sharing frameworks to support timely incident response.
  • Developing coordinated public–private action plans and resilience measures at both national and alliance levels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by ‘forcing’ users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. Such a case would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act in force and the AI Act taking effect, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms can release AI products within widely used communication tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Aeroflot cyberattack cripples Russian flights in major breach

A major cyberattack on Russia’s flagship airline Aeroflot has caused severe disruptions to flights, with hundreds of passengers stranded at airports. Responsibility was claimed by two hacker groups: Ukraine’s Silent Crow and the Belarusian hacktivist collective Belarus Cyber-Partisans.

The attack is among the most damaging cyber incidents Russia has faced since the full-scale invasion of Ukraine in February 2022. Past attacks disrupted government portals and large state-run firms such as Russian Railways, but most resumed operations quickly. This time, the effects were longer-lasting.

Social media showed crowds of delayed passengers packed into Moscow’s Sheremetyevo Airport, Aeroflot’s main hub. The outage affected not only Aeroflot but also its subsidiaries, Rossiya and Pobeda.

Most of the grounded flights were domestic. However, international services to Belarus, Armenia, and Uzbekistan were also cancelled or postponed due to the IT failure.

Early on Monday, Aeroflot issued a statement warning of unspecified problems with its IT infrastructure. The company alerted passengers that delays and disruptions were likely as a result.

Later, Russia’s Prosecutor’s Office confirmed that the outage was the result of a cyberattack. It announced the opening of a criminal case and launched an investigation into the breach.

Kremlin spokesperson Dmitry Peskov described the incident as ‘quite alarming’, admitting that cyber threats remain a serious risk for all major service providers operating at scale.

In a Telegram post, Silent Crow claimed it had maintained access to Aeroflot’s internal systems for over a year. The group stated it had copied sensitive customer data, internal communications, audio recordings, and surveillance footage collected on Aeroflot employees.

The hackers claimed that all of these resources had now either been destroyed or made inaccessible. ‘Restoring them will possibly require tens of millions of dollars. The damage is strategic,’ the group wrote.

Screenshots allegedly showing Aeroflot’s compromised IT dashboards were shared via the same Telegram channel. Silent Crow hinted it may begin publishing the stolen data in the coming days.

It added: ‘The personal data of all Russians who have ever flown with Aeroflot have now also gone on a trip — albeit without luggage and to the same destination.’

The Belarus Cyber-Partisans, who have opposed Belarusian President Alexander Lukashenko’s authoritarian regime for years, said the attack was carefully planned and intended to cause maximum disruption.

‘This is a very large-scale attack and one of the most painful in terms of consequences,’ said group coordinator Yuliana Shametavets. She told The Associated Press that the group spent months preparing the strike and accessed Aeroflot’s systems by exploiting several vulnerabilities.

The Cyber-Partisans have previously claimed responsibility for other high-profile hacks. In April 2024, they said they had breached the internal network of Belarus’s state security agency, the KGB.

Belarus remains a close ally of Russia. Lukashenko, in power for over three decades, has permitted Russia to use Belarusian territory as a staging ground for the invasion of Ukraine and to deploy tactical nuclear weapons on Belarusian soil.

Russia’s aviation sector has already faced repeated interruptions this summer, often caused by Ukrainian drone attacks on military or dual-use airports. Flights have been grounded multiple times as a precaution, disrupting passenger travel.

The latest cyberattack adds a new layer of difficulty, exposing the vulnerability of even the most protected elements of Russia’s transportation infrastructure. While the full extent of the data breach is yet to be independently verified, the implications could be long-lasting.

For now, it remains unclear how long it will take Aeroflot to fully restore services or what specific data may have been leaked. Both hacker groups appear determined to continue using cyber tools as a weapon of resistance — targeting Russia’s most symbolic assets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tea dating app suspends messaging after major data breach

The women’s dating safety app Tea has suspended its messaging feature following a cyberattack that exposed thousands of private messages, posts and images.

The app, which helps women run background checks on men, confirmed that direct messages were accessed during the initial breach disclosed in late July.

Tea has 1.6 million users, primarily in the US. Affected users will be contacted directly and offered free identity protection services, including credit monitoring and fraud alerts.

The company said it is working to strengthen its security and will provide updates as the investigation continues. Some of the leaked conversations reportedly contain sensitive discussions about infidelity and abortion.

Experts have warned that the leak of both images and messages raises the risk of emotional harm, blackmail or identity theft. Cybersecurity specialists recommend that users accept the free protection services as soon as possible.

The breach affected those who joined the app before February 2024, including users who submitted ID photos that Tea had promised would be deleted after verification.

Tea is known for allowing women to check if a potential partner is married or has a criminal record, as well as share personal experiences to flag abusive or trustworthy behaviour.

The app’s recent popularity surge has also sparked criticism, with some claiming it unfairly targets men. As users await more information, experts urge caution and vigilance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hanwha and Samsung lead Korea’s cyber insurance push

South Korea is stepping up efforts to strengthen its cyber insurance sector as corporate cyberattacks surge across industries. A string of major breaches has revealed widespread vulnerability and renewed demand for more comprehensive digital risk protection.

Hanwha General Insurance launched Korea’s first Cyber Risk Management Centre last November and partnered with global cybersecurity firm Theori and law firm Shin & Kim to expand its offerings.

Despite the growing need, the market remains underdeveloped. Cyber insurance makes up only 1 percent of Korea’s accident insurance sector, with a 2024 report estimating local cyber premiums at $50 million, just 0.3 percent of the global total.

Regulators and industry voices call for higher mandatory coverage, clearer underwriting standards, and financial incentives to promote adoption.

As Korean demand rises, comprehensive policies offering tailored options and emergency coverage are gaining traction, with Hanwha reporting a 200 percent revenue jump in under a year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants back Trump’s AI deregulation plan amid public concern over societal impacts

Donald Trump recently hosted an AI summit in Washington, titled ‘Winning the AI Race’, promoting a deregulated environment for AI innovation. Key figures from the tech industry, including Nvidia’s CEO Jensen Huang and Palantir’s CTO Shyam Sankar, attended the event.

Co-hosted by the Hill and Valley Forum and the Silicon Valley-based All-In Podcast, the summit served as a platform for Trump to introduce his ‘AI Action Plan’, accompanied by three executive orders focused on deregulation. Trump’s objective is to dismantle regulatory restrictions he perceives as obstacles to innovation, aiming to re-establish the US as a global leader in AI exports.

The executive orders announced target the elimination of ‘ideological dogmas such as diversity, equity, and inclusion (DEI)’ in AI models developed by federally funded companies. Additionally, one order promotes exporting US-developed AI technologies internationally, while another seeks to lessen environmental restrictions and speed up approvals for energy-intensive data centres.

These measures are seen as reversing the Biden administration’s policies, which stressed the importance of safety and security in AI development. Technology giants Apple, Meta, Amazon, and Alphabet have shown significant support for Trump’s initiatives, contributing to his inauguration fund and engaging with him at his Mar-a-Lago estate. Leaders like OpenAI’s Sam Altman and Nvidia’s Jensen Huang have also pledged substantial investments in US AI infrastructure.

Despite this backing, over 100 groups, including labour, environmental, civil rights, and academic organisations, have voiced their opposition through a ‘People’s AI action plan’. These groups warn of the potential risks of unregulated AI, which they fear could undermine civil liberties, equality, and environmental safeguards.

They argue that public welfare should not be compromised for corporate gains, highlighting the dangers of allowing tech giants to dominate policy-making. That discourse illustrates the divide between industry aspirations and societal consequences.

The tech industry’s influence on AI legislation through lobbying is noteworthy, with a report from Issue One indicating that eight of the largest tech companies spent a collective $36 million on lobbying in 2025 alone. Meta led with $13.8 million, employing 86 lobbyists, while Nvidia and OpenAI saw significant increases in their expenditure compared to previous years. The substantial financial outlay reflects the industry’s vested interest in shaping regulatory frameworks to favour business interests, igniting a debate over the ethical responsibilities of unchecked AI progress.

As tech companies and pro-business entities laud Trump’s deregulation efforts, concerns persist over the societal impacts of such policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!