Major tech firms pledge to fight online fraud

Eight major technology companies, including Google, Amazon, and OpenAI, have signed the ‘Online Services Accord Against Scams’ to share threat intelligence and strengthen defences against online fraud.

The voluntary pact aims to help companies work together and coordinate with governments, law enforcement, and NGOs.

Under the accord, companies commit to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Signatories will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.

Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.

The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe aims to tighten AI rules and personal data standards

The European Council has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Young investors warned on crypto and AI advice

Australia’s financial regulator has warned young investors to be cautious with social media influencers and AI chatbots. A survey by the Australian Securities and Investments Commission found one in four Gen Z Australians invest in crypto, often guided by online content.

The survey of 1,127 participants aged 18 to 28 showed 63% use social media for financial information, 18% rely on AI platforms, and 30% consult YouTube. AI was the most trusted source at 64%, but over half still trust influencers and social media despite possible misinformation.

ASIC previously issued warnings to 18 influencers suspected of promoting high-risk products without a licence. Commissioner Alan Kirkland said some social media marketing promotes crypto scams or risky superannuation switches that threaten young people’s key assets.

The regulator is also watching AI financial guidance. Personalised advice from unlicensed sources is illegal, and young investors should carefully check sources, especially as crypto exchanges increasingly use AI bots for trading guidance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Phishing attack on Starbucks employee portal exposes nearly 900 workers

Starbucks has disclosed a data breach affecting 889 employees after attackers gained unauthorised access to Starbucks Partner Central accounts, the internal platform workers use to manage their employment details, payroll, and benefits information.

The company discovered suspicious activity on 6 February 2026, with investigators finding that accounts had been compromised between 19 January and 11 February.

Attackers obtained valid login credentials by directing employees to fraudulent websites designed to impersonate the legitimate Partner Central login page, a phishing tactic that allowed them to authenticate into real accounts without ever directly breaching Starbucks’ core infrastructure.

The exposed data included full names, Social Security numbers, dates of birth, and the financial account and bank routing numbers linked to direct deposit records.

Starbucks notified law enforcement, strengthened security controls on Partner Central, and confirmed the breach does not affect customers. The company is offering affected employees two years of free credit monitoring and identity protection through Experian IdentityWorks.

Cybersecurity experts have warned that the exposed data, including Social Security numbers and financial identifiers, retains value to criminal groups for years and cannot simply be reset like a password.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing slight signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.
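As a minimal illustration of the environmental-indicator idea, the Python sketch below reads EXIF metadata from an image using the Pillow library. The file name is hypothetical, and the check rests on the assumption that camera originals usually carry device tags; it is a triage heuristic, not any vendor’s detection pipeline.

```python
# Minimal sketch: read EXIF metadata as one weak environmental signal.
# Absence of camera tags does not prove an image is synthetic, and
# metadata is trivially forged, so treat this as triage only.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_signals(path: str) -> dict:
    """Return human-readable EXIF tags for a quick plausibility check."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

signals = exif_signals("suspect.jpg")  # hypothetical file name
# Camera originals usually carry tags such as 'Make', 'Model', 'DateTime'.
for key in ("Make", "Model", "DateTime", "Software"):
    print(key, "->", signals.get(key, "<missing>"))
```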

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.
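As one concrete example of such a verification step, the sketch below shows a time-based one-time password (TOTP) check using the pyotp library. The call-back scenario is an assumption for illustration, not a description of any company’s actual workflow.

```python
# Sketch of one MFA building block: a time-based one-time password
# (TOTP) check using the pyotp library. A real deployment pairs this
# with the primary login factor, secure secret storage and rate limits.
import pyotp

secret = pyotp.random_base32()      # provisioned once per employee
totp = pyotp.TOTP(secret)
print("Authenticator shows:", totp.now())

def approve_sensitive_request(submitted_code: str) -> bool:
    """Hypothetical call-back check: act on an unusual request only if
    the requester can produce the current TOTP code."""
    return totp.verify(submitted_code)

print(approve_sensitive_request(totp.now()))   # True for a fresh code
```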

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
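C2PA defines a full manifest format with certificate chains; the simplified sketch below illustrates only the underlying idea of signed provenance, using an Ed25519 signature over a file’s raw bytes via Python’s cryptography package. The key handling shown is an assumption made for brevity.

```python
# Simplified stand-in for signed provenance (not the real C2PA manifest
# format): sign media bytes at creation, verify later to detect edits.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the content creator
public_key = signing_key.public_key()        # distributed to verifiers

original = b"...media bytes..."              # stand-in for a real file
signature = signing_key.sign(original)

def is_unaltered(media: bytes, sig: bytes) -> bool:
    """True only if the bytes still match the creator's signature."""
    try:
        public_key.verify(sig, media)
        return True
    except InvalidSignature:
        return False

print(is_unaltered(original, signature))                 # True
print(is_unaltered(original + b" tampered", signature))  # False
```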

Recent experiments highlight how revealing such testing can be. In February, cybersecurity company Reality Defender conducted an exercise with NATO, introducing deepfake media into a simulated military scenario. The findings showed how even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.
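To illustrate why the platforms themselves are hard targets, here is a toy end-to-end encryption sketch using Python’s cryptography package. It shows only the basic Diffie-Hellman idea that a relay server never holds the decryption key; the real Signal protocol layers key ratcheting, authentication, and more on top.

```python
# Toy end-to-end encryption sketch: a relay server that sees only
# public keys and ciphertext cannot read the messages it carries.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

def derive_key(my_private, their_public) -> bytes:
    """Each endpoint derives the same symmetric key from the exchange."""
    shared = my_private.exchange(their_public)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"chat").derive(shared)

key = derive_key(alice, bob.public_key())
assert key == derive_key(bob, alice.public_key())

nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"hello Bob", None)  # what the server relays
print(AESGCM(key).decrypt(nonce, ciphertext, None))          # only endpoints can do this
```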

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection through memory-based decryption and by blocking analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, providing full control over infected devices. Capabilities include automatic permission granting, background persistence, keylogging, GPS tracking, camera access, and screen-lock credential capture.

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that AI-powered agentic browsers may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively refined using adversarial machine learning methods, such as Generative Adversarial Networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.
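The loop can be pictured with the toy Python sketch below. The keyword-based safeguard and synonym table are invented stand-ins, not the researchers’ tooling or a working attack; the point is only that an agent’s verbose explanations become a feedback signal the attacker can optimise against.

```python
# Toy illustration of the feedback loop described above. The keyword
# safeguard and synonym table are invented stand-ins, not the
# researchers' tooling or a working attack.
SYNONYMS = {"urgent": "time-sensitive", "verify": "confirm"}

def agent_accepts(page: str) -> tuple[bool, str]:
    """Stand-in agent: reject pages with known scam keywords and, like
    a verbose AI agent, explain which signal fired."""
    for word in SYNONYMS:
        if word in page:
            return False, f"flagged the word '{word}'"
    return True, "no suspicious signals found"

def refine_page(page: str, max_rounds: int = 10) -> str:
    """Mutate the page using the agent's own explanations as feedback."""
    for _ in range(max_rounds):
        accepted, reasoning = agent_accepts(page)
        print("agent:", reasoning)
        if accepted:
            break
        for word, swap in SYNONYMS.items():
            if word in reasoning:
                page = page.replace(word, swap)
    return page

print(refine_page("urgent: verify your account"))
```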

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents face growing prompt injection risks

AI developers are working on new defences against prompt-injection attacks that aim to manipulate AI agents. Security specialists warn that attackers are increasingly using social engineering techniques to influence AI systems that interact with online content.

Researchers say AI agents that browse the web or handle user tasks face growing risks from hidden instructions embedded in emails or websites. Experts in the US note that attackers often attempt to trick AI into revealing sensitive information.

Engineers are responding by designing systems that limit the impact of manipulation attempts. Developers in the US say AI tools must include safeguards preventing sensitive data from being transmitted without user approval.

Security teams are also introducing technologies that detect risky actions and prompt users for confirmation. Specialists argue that strong system design and user oversight will remain essential as AI agents gain more autonomy.
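A minimal sketch of that confirmation pattern, in Python, might look like the following; the action format and risk list are illustrative assumptions rather than any vendor’s implementation.

```python
# Minimal sketch of the risky-action confirmation pattern. The action
# format and risk list are illustrative assumptions, not any vendor's
# implementation.
RISKY_ACTIONS = {"send_email", "transfer_funds", "upload_file"}

def execute_agent_action(action: dict) -> str:
    """Run an agent-proposed action, pausing for human approval whenever
    it would move data or money outside the user's session."""
    if action["type"] in RISKY_ACTIONS:
        answer = input(f"Agent wants to {action['type']} "
                       f"({action.get('detail', '')}). Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: user declined"
    return f"executed: {action['type']}"

# Example: a hidden instruction on a web page tries to exfiltrate data.
print(execute_agent_action(
    {"type": "send_email", "detail": "draft to unknown@example.com"}))
```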

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.
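Google has not published its implementation, but the layering can be sketched schematically in Python; the keyword check and generate function below are invented stand-ins for trained classifiers and the underlying model.

```python
# Schematic sketch of layered safety classifiers around a model call.
# The keyword check and generate() are invented stand-ins for trained
# classifiers and the underlying model, not Google's actual code.
BLOCKED_TOPICS = {"self-harm", "extremism"}

def is_harmful(text: str) -> bool:
    """Stand-in for a trained, age-aware safety classifier."""
    return any(topic in text.lower() for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    """Stand-in for the underlying model."""
    return f"An age-appropriate explanation of: {prompt}"

def safe_respond(prompt: str) -> str:
    if is_harmful(prompt):            # classifier gates the user's input
        return "Sorry, I can't help with that topic."
    response = generate(prompt)
    if is_harmful(response):          # second classifier gates the output
        return "Sorry, I can't help with that topic."
    return response

print(safe_respond("help me understand photosynthesis"))
```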

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!