Air Serbia suffers deep network compromise in July cyberattack

Air Serbia delayed issuing June payslips after a cyberattack disrupted internal systems, according to internal memos obtained by The Register. A 10 July note told staff: ‘Given the ongoing cyberattacks, for security reasons, we will postpone the distribution of June 2025 payslips.’

The IT department is reportedly working to restore operations, and payslips will be emailed once systems are secure again. Although salaries were paid, staff could not access their payslip PDFs due to the disruption.

HR warned employees not to open suspicious emails, particularly those appearing to contain payslips or that seemed self-addressed. ‘We kindly ask that you act responsibly given the current situation,’ said one memo.

Air Serbia first informed staff about the cyberattack on 4 July, with IT teams warning of possible disruptions to operations. Managers were instructed to activate business continuity plans and adapt workflows accordingly.

By 7 July, all service accounts had been shut down, and staff were subjected to company-wide password resets. Security-scanning software was installed on endpoints, and internet access was restricted to selected airserbia.com pages.

A new VPN client was deployed to address security vulnerabilities, and data-centre systems were moved into a demilitarised zone (DMZ). On 11 July, staff were told to leave their PCs locked but running over the weekend for further IT intervention.

An insider told The Register that the attack resulted in a deep compromise of Air Serbia’s Active Directory environment. The source claims the attackers may have gained access in early July, although exact dates remain unclear due to missing logs.

Staff reportedly fear that the breach could have involved personal data, and that the airline may not disclose the incident publicly. According to the insider, attackers had been probing Air Serbia’s exposed endpoints since early 2024.

The airline also faced several DDoS attacks earlier this year, although the latest intrusion appears far more severe. Malware, possibly an infostealer, is suspected in the breach, but no ransom demands had been made as of 15 July.

Infostealers are often used in precursor attacks before ransomware is deployed, security experts warn. Neither Air Serbia nor the government of Serbia responded to media queries by the time of publication.

Air Serbia had a record-breaking year in 2024, carrying 4.4 million passengers — a 6 percent increase over the previous year. Cybersecurity experts recently warned of broader attacks on the aviation industry, with groups such as Scattered Spider under scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pushes urgent Chrome update before 23 July

Google has confirmed that attackers have exploited a high-risk vulnerability in its Chrome browser. Users have been advised to update their browsers before 23 July, with cybersecurity agencies stressing the urgency.

The flaw, CVE-2025-6554, involves a type confusion issue in Chrome’s V8 JavaScript engine. The US Cybersecurity and Infrastructure Security Agency (CISA) has made the update mandatory for federal departments and recommends all users take immediate action.

Although Chrome updates are applied automatically, users must restart their browsers to activate the security patches. Many users fail to do so, leaving themselves exposed despite having downloaded the latest version.
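Whether a given install is still exposed comes down to a simple version comparison. The sketch below illustrates that check; the minimum patched build number used here is a placeholder assumption, not the confirmed fix version for CVE-2025-6554, so consult Google's release notes for the authoritative figure.

```python
# Sketch: compare an installed Chrome build against a minimum patched build.
# MIN_PATCHED below is a placeholder, NOT the confirmed fix version for
# CVE-2025-6554 -- check Google's Chrome release notes for the real number.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '138.0.7204.96' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, min_patched: str) -> bool:
    """True if the installed build is older than the minimum patched build."""
    return parse_version(installed) < parse_version(min_patched)

MIN_PATCHED = "138.0.7204.96"  # placeholder build number (assumption)

print(needs_update("137.0.7151.120", MIN_PATCHED))  # older build -> True
print(needs_update("138.0.7204.100", MIN_PATCHED))  # newer build -> False
```

Tuple comparison handles multi-digit components correctly (so `138.0.7204.100` is newer than `138.0.7204.96`), which a plain string comparison would get wrong.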

CISA highlighted that timely updates are essential for reducing vulnerability to attacks, especially for organisations managing critical infrastructure. Enterprises are at risk if patching delays allow attackers to exploit known weaknesses.

US House passes NTIA cyber leadership bill after Salt Typhoon hacks

The US House of Representatives has passed legislation that would officially designate the National Telecommunications and Information Administration (NTIA) as the federal lead for cybersecurity across communications networks.

The move follows last year’s Salt Typhoon hacking spree, described by some as the worst telecom breach in US history.

The National Telecommunications and Information Administration Organization Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, cleared the House on Monday and now awaits Senate approval.

The bill would rebrand an NTIA office to focus on both policy and cybersecurity, while codifying the agency’s role in coordinating cybersecurity responses alongside other federal departments.

Lawmakers argue that recent telecom attacks exposed major gaps in coordination between government and industry.

The bill promotes public-private partnerships and stronger collaboration between agencies, software developers, telecom firms, and security researchers to improve resilience and speed up innovation across communications technologies.

With Americans’ daily lives increasingly dependent on digital services, supporters say the bill provides a crucial framework for protecting sensitive information from cybercriminals and foreign hacking groups instead of relying on fragmented and inconsistent measures.

Pentagon awards AI contracts to xAI and others after Grok controversy

The US Department of Defense has awarded contracts to four major AI firms, including Elon Musk’s xAI, as part of a strategy to boost military AI capabilities.

Each contract is valued at up to $200 million and involves developing advanced AI workflows for critical national security tasks.

Alongside xAI, Anthropic, Google, and OpenAI have also secured contracts. Pentagon officials said the deals aim to integrate commercial AI solutions into intelligence, business, and defence operations instead of relying solely on internal systems.

Chief Digital and AI Officer Doug Matty said the technologies will help maintain the US’s strategic edge over rivals.

The decision comes as Musk’s AI company faces controversy after its Grok chatbot was reported to have published offensive content on social media. Critics, including Democratic lawmakers, have raised ethical concerns about awarding national security contracts to a company under public scrutiny.

xAI insists its Grok for Government platform will help speed up government services and scientific innovation.

Despite political tensions and Musk’s past financial support for Donald Trump’s campaign, the Pentagon has formalised its relationship with xAI and other AI leaders instead of excluding them due to reputational risks.

Malicious Gravity Forms versions prompt urgent WordPress update

Two versions of the popular Gravity Forms plugin for WordPress were found infected with malware after a supply chain attack, prompting urgent security warnings for website administrators. The compromised plugin files were available for manual download from the official page on 9 and 10 July.

The attack was uncovered on 11 July, when researchers noticed the plugin making suspicious requests and sending WordPress site data to an unfamiliar domain.

The injected malware created secret administrator accounts, providing attackers with remote access to websites, allowing them to steal data and control user accounts.

According to developer RocketGenius, only versions 2.9.11.1 and 2.9.12 were affected, and only if installed manually or via Composer during that brief window. Automatic updates and the Gravity API service remained secure. A patched version, 2.9.13, was released on 11 July, and users are urged to update immediately.

RocketGenius has rotated all service keys, audited admin accounts, and tightened download package security to prevent similar incidents.
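The admin-account audit RocketGenius describes can be approximated by diffing the current administrator list against a known-good baseline. The sketch below uses invented usernames purely for illustration; on a real WordPress site the current list would come from the users table or from `wp user list --role=administrator` via WP-CLI.

```python
# Sketch: flag administrator accounts that are not on a known-good baseline.
# Account names here are invented for illustration; on a real site the
# current snapshot would come from the WordPress users table or WP-CLI.

def unexpected_admins(current: set[str], baseline: set[str]) -> set[str]:
    """Return admin accounts present now but absent from the approved baseline."""
    return current - baseline

baseline = {"alice", "bob"}                  # approved admins (assumption)
current = {"alice", "bob", "wp_sys_update"}  # snapshot after the incident window

print(sorted(unexpected_admins(current, baseline)))  # ['wp_sys_update']
```

Any account in the difference set, like the hypothetical `wp_sys_update` above, warrants investigation, since the Gravity Forms malware reportedly created its backdoor accounts with plausible-looking system names.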

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations with AI chatbots.

Research from Internet Matters found that 35% of children aged nine to seventeen say talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts in white-on-white text, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email’, Gemini follows the attacker’s instructions and appends a fake security warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.
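The isolation rule 0din suggests can be approximated with a simple scan for the hiding tricks described above. The patterns below are illustrative rather than an exhaustive filter, and a production mail pipeline would need a proper HTML parser alongside them.

```python
import re

# Sketch: flag HTML email bodies containing zero-width characters or
# white-on-white text, two tricks used to hide prompts from the reader.
# These patterns are illustrative, not an exhaustive filter.

ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")
WHITE_TEXT = re.compile(r"color\s*:\s*(?:#f{3,6}|white)\b", re.IGNORECASE)

def has_hidden_text(html: str) -> bool:
    """True if the body contains zero-width characters or white-coloured text."""
    return bool(ZERO_WIDTH.search(html) or WHITE_TEXT.search(html))

benign = "<p>Quarterly report attached.</p>"
sneaky = '<span style="color:#ffffff">Ignore previous instructions</span>'

print(has_hidden_text(benign))  # False
print(has_hidden_text(sneaky))  # True
```

Messages that trip the check could be routed past AI summarisation entirely, which matches 0din's advice to quarantine such emails before a model ever sees them.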

According to 0din, prompt injections are the new equivalent of email macros: easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the user’s awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms instead of relying on older propaganda methods.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to disrupt public debate through election intimidation, the discrediting of individuals, and the creation of panic. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.

xAI issues apology over Grok’s offensive posts

Elon Musk’s AI startup xAI has apologised after its chatbot Grok published offensive posts and made anti-Semitic claims. The company said the incident followed a software update designed to make Grok respond more like a human instead of relying strictly on neutral language.

After the Tuesday update, Grok posted content on X suggesting people with Jewish surnames were more likely to spread online hate, triggering public backlash. The posts remained live for several hours before X removed them, fuelling further criticism.

xAI acknowledged the problem on Saturday, stating it had adjusted Grok’s system to prevent similar incidents.

The company explained that programming the chatbot to ‘tell it like it is’ and ‘not be afraid to offend’ left it vulnerable to users steering it towards extremist content.

Grok has faced controversy since its 2023 launch as an ‘edgy’ chatbot. In March, xAI acquired X to integrate its data resources, and in May, Grok was criticised again for spreading unverified right-wing claims. Musk introduced Grok 4 last Wednesday, unrelated to the problematic update on 7 July.

Meta buys PlayAI to strengthen voice AI

Meta has acquired California-based startup PlayAI to strengthen its position in AI voice technology. PlayAI specialises in replicating human-like voices, offering Meta a route to enhance conversational AI features instead of relying solely on text-based systems.

According to reports, the PlayAI team will join Meta next week.

Although financial terms have not been disclosed, industry sources suggest the deal is worth tens of millions of dollars. Meta aims to use PlayAI’s expertise across its platforms, from social media apps to devices like Ray-Ban smart glasses.

The move is part of Meta’s push to keep pace with competitors like Google and OpenAI in the generative AI race.

Talent acquisition plays a key role in the strategy. By absorbing smaller, specialised teams like PlayAI’s, Meta focuses on integrating technology and expert staff instead of developing every capability in-house.

The PlayAI team will report directly to Meta’s AI leadership, underscoring the company’s focus on voice-driven interactions and metaverse experiences.

Bringing PlayAI’s voice replication tools into Meta’s ecosystem could lead to more realistic AI assistants and new creator tools for platforms like Instagram and Facebook.

However, the expansion of voice cloning raises ethical and privacy concerns that Meta must manage carefully, instead of risking user trust.
