AI fake news surge tests EU Digital Services Act

Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms, displacing older propaganda methods.

According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.

Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated proposals each day while also deploying thousands of bot accounts and fake videos.

These efforts aim to disrupt public debate through intimidation around elections, discrediting individuals, and sowing panic rather than fostering open discussion. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.

To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.

Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.

The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI issues apology over Grok’s offensive posts

Elon Musk’s AI startup xAI has apologised after its chatbot Grok published offensive posts and made anti-Semitic claims. The company said the incident followed a software update designed to make Grok respond more like a human rather than in strictly neutral language.

After the Tuesday update, Grok posted content on X suggesting people with Jewish surnames were more likely to spread online hate, triggering public backlash. The posts remained live for several hours before X removed them, fuelling further criticism.

xAI acknowledged the problem on Saturday, stating it had adjusted Grok’s system to prevent similar incidents.

The company explained that programming the chatbot to ‘tell it like it is’ and ‘not be afraid to offend’ made it vulnerable to users steering it towards extremist content and away from ethical boundaries.

Grok has faced controversy since its 2023 launch as an ‘edgy’ chatbot. In March, xAI acquired X to integrate its data resources, and in May, Grok was criticised again for spreading unverified right-wing claims. Musk introduced Grok 4 last Wednesday, unrelated to the problematic update on 7 July.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Azerbaijan government workers hit by cyberattacks

In the first six months of the year, 95 employees from seven government bodies in Azerbaijan fell victim to cyberattacks after neglecting basic cybersecurity measures and failing to follow established protocols. The incidents highlight growing risks from poor cyber hygiene across public institutions.

According to the State Service of Special Communication and Information Security (XRİTDX), more than 6,200 users across the country were affected by various cyberattacks during the same period, not limited to government staff.

XRİTDX is now intensifying audits and monitoring activities to strengthen information security and safeguard state organisations against both existing and evolving cyber threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enhancing email security through multi-factor authentication

Many users overlook one critical security setting that can stop hackers in their tracks: multi-factor authentication (MFA). Passwords alone are no longer enough: easy-to-remember passwords are insecure, while strong passwords are hard to memorise, which pushes many users towards reusing them across services.

Brute-force attacks and credential leaks are common, especially since many users repeat passwords across different platforms. MFA solves this by requiring a second verification form, usually from your phone or an authenticator app, to confirm your identity.

The extra step can block attackers, even if they have your password, because they still need access to your second device. Two-factor authentication (2FA) is the most common form of MFA. It combines something you know (your password) with something you have.

Many email providers, including Gmail, Outlook, and Proton Mail, now offer built-in 2FA options under account security settings. On Gmail, visit your Google Account, select Security, and enable 2-Step Verification. Use Google Authenticator instead of SMS for better safety.

Outlook.com users can turn on 2FA through their Microsoft account’s Security settings, using an authenticator app for code generation. Proton Mail allows you to scan a QR code with Google Authenticator after enabling 2FA under Account and Password settings.

Authenticator apps are preferred over SMS, as text messages are vulnerable to SIM-swapping and phishing-based interception. Adding MFA is a fast, simple way to strengthen your email security and avoid becoming a victim of password-related breaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA 2015 expiry threatens private sector threat sharing

Congress has under 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 and avoid a regulatory setback. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated due to antitrust and data privacy concerns. CISA removed ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers steal $500K via malicious Cursor AI extension

A cyberattack targeting the Cursor AI development environment has resulted in the theft of $500,000 in cryptocurrency from a Russian developer. Despite strong security practices and a fresh operating system, the victim downloaded a malicious extension named ‘Solidity Language’ in June 2025.

Masquerading as a syntax highlighting tool, the fake extension exploited search rankings to appear more legitimate than actual alternatives. Once installed, the extension served as a dropper for malware rather than offering any development features.

It contacted a command-and-control server and began deploying scripts designed to check for remote desktop software and install backdoors. The malware used PowerShell scripts to install ScreenConnect, granting persistent access to the victim’s system through a relay server.

Securelist analysts found that the extension gamed the Open VSX registry’s ranking algorithm by publishing with a more recent update date, pushing it above legitimate listings. Further investigation revealed the same attack methods in other packages, including npm’s ‘solsafe’ and three VS Code extensions.

The campaign reflects a growing trend of supply chain attacks exploiting AI coding tools to distribute persistent, stealthy malware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use fake Termius app to infect macOS devices

Hackers are bundling legitimate Mac apps with ZuRu malware and poisoning search results to lure users into downloading trojanised versions. Security firm SentinelOne reported that the Termius SSH client was recently compromised and distributed through malicious domains and fake downloads.

The ZuRu backdoor, originally detected in 2021, allows attackers to silently access infected machines and execute remote commands undetected. Attackers continue to target developers and IT professionals by trojanising trusted tools such as SecureCRT, Navicat, and Microsoft Remote Desktop.

Infected disk image files are slightly larger than legitimate ones due to embedded malicious binaries. Victims unknowingly launch malware alongside the real app.

The malware bypasses macOS code-signing protections by injecting a temporary developer signature into the compromised application bundle. The updated variant of ZuRu requires macOS Sonoma 14.1 or newer and supports advanced command-and-control functions using the open-source Khepri beacon.

The functions include file transfers, command execution, system reconnaissance and process control, with captured outputs sent back to attacker-controlled domains. The latest campaign used termius.fun and termius.info to host the trojanised packages. Affected users often lack proper endpoint security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bank of Korea sounds alarm over unregulated stablecoins

Bank of Korea Governor Lee Chang-yong warned that letting non-banks issue won-based stablecoins could spark economic confusion similar to the 19th-century US Free Banking Era. His remarks follow President Lee Jae Myung’s push to launch domestic stablecoins under his economic agenda.

Governor Lee noted that handing over payment and settlement services to non-banks might disrupt the profit models of traditional banks and conflict with foreign exchange policies. He stressed that stablecoin policy requires coordination across government, as the central bank lacks sole authority.

Meanwhile, President Lee’s support for stablecoins has sparked a flurry of activity among fintech and banking firms, with many filing trademark applications linked to KRW stablecoin symbols. KakaoPay, one of South Korea’s largest payment platforms, has seen its stock surge by more than 120% since Lee’s election.

The BOK recently announced it will pause its central bank digital currency (CBDC) pilot, citing legal uncertainty surrounding the coexistence of CBDCs, stablecoins, and deposit tokens. Governor Lee stated the trial had considered stablecoin interaction from the beginning, and that further action will depend on legislative developments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that, under UK law, it must be treated as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly, no longer basic or easy to detect. What once involved clumsy manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!