Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading a dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

The fake platforms are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware. Once running, it can steal browser credentials, crypto wallet details, and other sensitive data.

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have used public interest in AI for malicious purposes. Meta removed over 1,000 dangerous links in 2023 that exploited ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft expands cloud push across Europe

Microsoft has unveiled a new set of commitments aimed at strengthening its digital presence across Europe, pledging to expand cloud and AI infrastructure while supporting the region’s economic competitiveness.

Announced by Microsoft President Brad Smith in Brussels, the ‘European Digital Commitments’ include a promise to increase European data centre capacity by 40% within two years, bringing the total to over 200 across 16 countries.

Smith explained that Microsoft’s goal is to provide technology that helps individuals and organisations succeed, rather than simply expanding its reach. He highlighted AI as essential to modern economies, describing it as a driving force behind what he called the ‘AI economy.’

Alongside job creation, Microsoft hopes its presence will spark wider economic benefits for customers and partners throughout the continent.

To ease concerns around data security, particularly in light of US-EU geopolitical tensions, Microsoft has added clauses in agreements with European institutions allowing it to legally resist any external order to halt operations in Europe.

If such efforts were to fail, Microsoft has arranged for European partners to access its code, stored securely in Switzerland, rather than allow disruptions to affect vital digital services.

Although Microsoft’s investments stand to benefit Europe, they also underscore the company’s deep dependence on the region, with over a quarter of its business based there.

Smith insisted that Microsoft’s global success would not have been possible without its European footprint, and called for continued cooperation across the Atlantic—even in the face of potential tariff disputes or political strains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pays around $1.4 billion over privacy case

Google has agreed to pay $1.375 billion to settle a lawsuit brought by the state of Texas over allegations that it violated users’ privacy through features such as Incognito mode, Location History, and biometric data collection.

Despite the sizable sum, Google denies any wrongdoing, stating that the claims were based on outdated practices which have since been updated.

Texas Attorney General Ken Paxton announced the settlement, emphasising that large tech firms are not above the law.

He accused Google of covertly tracking individuals’ locations and personal searches, while also collecting biometric data such as voiceprints and facial geometry — all without users’ consent. Paxton claimed the state’s legal challenge had forced Google to answer for its actions.

Although the settlement resolves two lawsuits filed in 2022, the specific terms and how the funds will be used remain undisclosed. A Google spokesperson maintained that the resolution brings closure to claims about past practices and does not require any changes to its current products.

The case comes after a similar $1.4 billion agreement involving Meta, which faced accusations of unlawfully gathering facial recognition data. The repeated scrutiny from Texas authorities signals a broader pushback against the data practices of major tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Starkville Utilities hit by cyberattack

Starkville Utilities, a Mississippi-based electricity and water provider that also serves Mississippi State University, has revealed a data breach that may have exposed sensitive information belonging to over 11,000 individuals.

The breach, which was first detected in late October last year, led the company to disconnect its network in an attempt to contain the intrusion.

Despite these efforts, an investigation later found that attackers may have accessed personal data, including full names and Social Security numbers. Details were submitted to the Maine Attorney General’s Office, confirming the scale of the breach and the nature of the data involved.

While no reports of identity theft have emerged since the incident, Starkville Utilities has chosen to offer twelve months of free identity protection services to those potentially affected. The company maintains that it is taking additional steps to improve its cybersecurity defences.

Stolen data such as Social Security numbers rarely stays idle; it often ends up on underground marketplaces, where it can be used for identity fraud and other malicious activities.

The incident serves as yet another reminder of the ongoing threat posed by cybercriminals targeting critical infrastructure and user data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils AI tool to boost African businesses

Google has announced the beta launch of AI Max for Search Campaigns, a new tool aimed at helping local businesses, including those across Africa, reach more customers through smarter advertising.

The feature, which builds on Google’s Gemini AI models, enhances how businesses appear in search results, even when users type unexpected or highly specific queries.

As African economies continue to embrace digital transformation, AI Max offers vital support to small and medium-sized enterprises. The tool intelligently matches search terms, customises ad text in real time, and expands URL targeting to guide users to the most relevant content.

Designed to reduce the burden on entrepreneurs managing multiple responsibilities, the tool is seen as a cost-effective way to attract higher-intent customers with minimal effort.

This initiative complements Google’s ongoing support for African businesses, including training schemes like Hustle Academy. With AI Max, entrepreneurs now have access to technology that not only adapts to their needs but also improves their visibility in an increasingly competitive digital market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LockBit ransomware hacked, data on affiliates leaked

Internal data from the notorious LockBit ransomware group has been leaked following a hack of one of its administration panels. Over 200 conversations between affiliates and victims were also uncovered, revealing aggressive negotiation tactics and ransom demands ranging from a few thousand dollars to over $100,000.

The breach, discovered on 7 May, exposed sensitive information including private chats with victims, affiliate account details, Bitcoin wallet addresses, and insights into LockBit’s infrastructure.

A defaced message on the group’s domain read: ‘Don’t do crime, crime is bad xoxo from Prague,’ linking to a downloadable archive of the stolen data. Although LockBit confirmed the breach, it downplayed its impact and denied that any victim decryptors were compromised.

Security researchers believe the leak could provide crucial intelligence for law enforcement. Searchlight Cyber identified 76 user credentials, 22 of which included TOX messaging IDs commonly used by hackers, and connected some users to aliases on criminal forums.

Speculation suggests the hack may be the result of infighting within the cybercriminal community, echoing a recent attack on the Everest ransomware group’s site. Authorities continue to pursue LockBit, but the group remains active despite previous takedowns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit cracks down after AI bot experiment exposed

Reddit is accelerating plans to verify the humanity of its users following revelations that AI bots infiltrated a popular debate forum to influence opinions. These bots crafted persuasive, personalised comments based on users’ post histories, without disclosing their non-human identity.

Researchers from the University of Zurich conducted an unauthorised four-month experiment on the r/changemyview subreddit, deploying AI agents posing as trauma survivors, political figures, and other sensitive personas.

The incident sparked outrage across the platform. Reddit’s Chief Legal Officer condemned the experiment as a violation of both legal and ethical standards, while CEO Steve Huffman stressed that the platform’s strength lies in genuine human exchange.

All accounts linked to the study have been banned, and Reddit has filed formal complaints with the university. To restore trust, Reddit will introduce third-party verification tools that confirm users are human, without collecting personal data.

While protecting anonymity remains a priority, the platform acknowledges it must evolve to meet new threats posed by increasingly sophisticated AI impersonators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FTC says Amazon misused legal privilege to dodge scrutiny

Federal regulators have accused Amazon of deliberately concealing incriminating evidence in an ongoing antitrust case by abusing privilege claims. The Federal Trade Commission (FTC) said Amazon wrongly withheld nearly 70,000 documents, withdrawing 92% of its claims after a judge forced a re-review.

The FTC claims Amazon marked non-legal documents as privileged to keep them from scrutiny. Internal emails suggest staff were told to mislabel communications by including legal teams unnecessarily.

One email reportedly called former CEO Jeff Bezos the ‘chief dark arts officer,’ referring to questionable Prime subscription tactics.

The documents revealed issues such as widespread involuntary Prime sign-ups and efforts to manipulate search results in favour of Amazon’s products. Regulators said these practices show Amazon intended to hide evidence rather than make honest errors.

The FTC is now seeking a 90-day extension for discovery and wants Amazon to cover the additional legal costs. It claims the delay and concealment gave Amazon an unfair strategic advantage rather than a level playing field.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta brings back Robert Fergus to lead AI lab

Meta Platforms has brought back Robert Fergus to lead its AI research lab, FAIR, which he helped found in 2014 alongside Yann LeCun. After spending five years as a research director at Google’s DeepMind, Fergus returns to replace Joelle Pineau, who steps down on 30 May.

Fergus, who previously spent six years as a research scientist at Facebook, announced his return on LinkedIn, expressing gratitude to Pineau and reaffirming Meta’s long-term commitment to AI.

FAIR, Meta’s Fundamental AI Research division, focuses on innovations such as voice translation and image recognition to support its open-source Llama language model.

The move comes as Meta ramps up its AI investment, with CEO Mark Zuckerberg allocating up to $65 billion in capital spending for 2025 to expand the company’s AI infrastructure.

AI is now deeply integrated into Meta’s services, including Facebook, Instagram, Messenger, WhatsApp, and a new standalone app meant to rival OpenAI.

By bringing Fergus back instead of appointing an outsider, Meta signals its intent to build on its existing AI legacy while pushing further toward human-level machine intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft bans DeepSeek app for staff use

Microsoft has confirmed it does not allow employees to use the DeepSeek app, citing data security and propaganda concerns.

Speaking at a Senate hearing, company president Brad Smith explained the decision stems from fears that data shared with DeepSeek could end up on Chinese servers and be exposed to state surveillance laws.

Although DeepSeek is open source and widely available, Microsoft has chosen not to list the app in its own store.

Smith warned that DeepSeek’s answers may be influenced by Chinese government censorship and propaganda, and its privacy policy confirms data is stored in China, making it subject to local intelligence regulations.

Interestingly, Microsoft still offers DeepSeek’s R1 model via its Azure cloud service. The company argued this is a different matter, as customers can host the model on their servers instead of relying on DeepSeek’s infrastructure.

Even so, Smith admitted Microsoft had to alter the model to remove ‘harmful side effects,’ although no technical details were provided.

While Microsoft blocks DeepSeek’s app for internal use, it hasn’t imposed a blanket ban on all chatbot competitors. Apps like Perplexity are available in the Windows store, unlike those from Google.

The stance against DeepSeek marks a rare public move by Microsoft as the tech industry navigates rising tensions over AI tools with foreign links.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!