Cybercriminals exploit Facebook ads for fake AI tools and malware

Cybersecurity researchers from Bitdefender have uncovered a disturbing trend where cybercriminals exploit Facebook’s advertising platform to promote counterfeit versions of popular generative AI tools, including OpenAI’s Sora, DALL-E, ChatGPT 5, and Midjourney. These fraudulent Facebook ads are designed to trick unsuspecting users into downloading malware-infected software, leading to the theft of sensitive personal information.

The hackers hijack legitimate Facebook pages of well-known AI tools like Midjourney to impersonate these services, making false claims about exclusive access to new features. The malicious ads direct users to join related Facebook communities, where they are prompted to download supposed ‘desktop versions’ of the AI tools. These downloads are in fact Windows executables packed with information-stealing malware such as Rilide, Nova, Vidar, and IceRAT, which can harvest stored credentials, cryptocurrency wallet data, and credit card details for illicit use.

The scheme goes beyond fake ads and hijacked pages: the criminals set up multiple websites to avoid suspicion and use file-sharing platforms like GoFile to distribute malware through fake Midjourney landing pages. Bitdefender’s analysis found that the hackers particularly targeted European Facebook users, with one prominent fake Midjourney page amassing 1.2 million followers before being shut down on 8 March 2024. The scams reached users across Sweden, Romania, Belgium, Germany, and other countries, with ads primarily targeting European men aged 25-55.

Bitdefender’s report also exposed the cybercriminals’ comprehensive malware distribution model, known as Malware-as-a-Service (MaaS), which enables virtually anyone to conduct sophisticated attacks, including data theft, online account compromise, ransom demands after encrypting data, and other fraudulent activities.

The case mirrors previous incidents, such as Google’s lawsuit against scammers in 2023 for using fake ads to spread malware. In that case, scammers posed as official Google channels to entice users into downloading purported AI products, highlighting a broader trend of exploiting trusted platforms for illicit gains.

Google sues alleged scammers for distributing fraudulent crypto apps on Play Store

Google has initiated legal action against two alleged crypto scammers for distributing fraudulent cryptocurrency trading apps through its Play Store, deceiving users and extracting money from them. The accused developers, based in China and Hong Kong, uploaded 87 deceptive apps that reportedly conned over 100,000 individuals. According to Google, users suffered losses ranging from $100 to tens of thousands of dollars per person due to these schemes, which have been operational since at least 2019.

The lawsuit marks a proactive step beyond Google’s swift removal of the fraudulent apps from its Play Store. The company’s general counsel, Halimah DeLaine Prado, emphasised that holding these bad actors accountable is crucial to safeguarding users and maintaining the integrity of the app store. Google claims it incurred over $75,000 in economic damages while investigating the fraud.

The scam reportedly enticed users through romance messages and YouTube videos urging them to download the fake cryptocurrency apps. The scammers allegedly misled users into believing they could profit by becoming affiliates of the platforms. Once users invested money, the apps displayed false investment returns and balances, then blocked withdrawals or imposed additional fees, leading to further financial losses.

Google’s legal action accuses the developers of violating its terms of service and the Racketeer Influenced and Corrupt Organizations Act. The company seeks to block further fraudulent activities by the defendants and aims to recover unspecified damages. The legal move represents Google’s commitment to combating app-based scams and protecting users from deceptive practices on its platform.

Microsoft faulted for preventable Chinese hack

A report released by the US Cyber Safety Review Board on Tuesday blamed Microsoft for a targeted Chinese hack on top government officials’ emails, deeming it ‘preventable’ due to cybersecurity lapses and lack of transparency. The breach, orchestrated by the Storm-0558 hacking group affiliated with China, originated from the compromise of a Microsoft engineer’s corporate account. Microsoft highlighted ongoing efforts to bolster security infrastructure and processes, pledging to review the report for further recommendations.

The board’s report outlined decisions by Microsoft that diminished enterprise security, risk management, and customer trust, and it recommended comprehensive security reforms across all Microsoft products. The intrusion, which took place last year, affected senior officials at the US State and Commerce departments, including Commerce Secretary Gina Raimondo and US Ambassador to China Nicholas Burns, raising concerns about the theft of sensitive emails from prominent American figures.

While acknowledging that cyberattacks from well-resourced adversaries are inevitable, Microsoft emphasised its commitment to hardening its system defences and enhancing its detection capabilities. The incident underscores the persistent challenges posed by cyber threats and the imperative for technology companies to prioritise cybersecurity measures to safeguard sensitive data and operations.

China’s top prosecutor warns cybercriminals are exploiting blockchain and metaverse projects

China’s Supreme People’s Procuratorate (SPP) is ramping up efforts to combat cybercrime by targeting criminals who use blockchain and metaverse projects for illegal activities. The SPP is alarmed by the recent surge in online fraud, cyber violence, and personal information infringement. Notably, the SPP has observed a significant rise in cybercrimes committed on blockchains and within the metaverse, with criminals increasingly relying on cryptocurrencies for money laundering, making it challenging to trace their illicit wealth.

Ge Xiaoyan, the Deputy Prosecutor-General of the SPP, highlights a 64% year-on-year increase in charges related to telecom fraud conducted online, while charges linked to internet theft have risen nearly 23% and those related to online counterfeiting and the sale of inferior goods have surged by almost 86%. Procuratorates pressed charges against 280,000 individuals in cybercrime cases between January and November, a 36% year-on-year increase, accounting for 19% of all criminal offences.

The People’s Bank of China (PBoC) acknowledges the importance of regulating cryptocurrency and decentralised finance in its latest financial stability report, emphasising the necessity of international cooperation in regulating the industry.

Despite the ban on most crypto transactions and cryptocurrency mining, mainland China remains a significant hub for crypto-mining activities.

AI’s right to forget – Machine unlearning

Machine unlearning is a growing field within AI that aims to address the challenge of removing outdated, incorrect, or private data from trained machine learning (ML) models. ML models struggle to forget information once they have learned it, which has significant implications for privacy, security, and ethics, and has driven the development of machine unlearning techniques.

When issues arise with a dataset, the dataset itself can be modified or deleted. However, once the data has been used to train an ML model, removing its impact is difficult: models are often black boxes, so it is hard to understand how a specific dataset influenced the model, let alone undo its effects.

OpenAI has faced criticism for the data used to train their models, and generative AI art tools are involved in legal battles regarding their training data. This highlights concerns about privacy and the potential disclosure of information about individuals whose data was used to train the models.

Machine unlearning aims to erase the influence of specific datasets on ML systems, typically by identifying the problematic data and either excluding it from the model or retraining the entire model from scratch on the remaining data. The latter approach, however, is costly and time-consuming.

Efficient machine unlearning algorithms are needed to remove datasets without compromising utility. Some promising approaches include incremental updates to ML systems, limiting the influence of data points, and scrubbing network weights to remove information about specific training data.
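
One widely cited way to limit each data point’s influence is to shard the training data and train an ensemble, so that forgetting a point only requires retraining the single shard that contained it. Below is a minimal Python sketch in the spirit of the SISA (‘Sharded, Isolated, Sliced, Aggregated’) approach; the shard count, model choice, and helper names are illustrative assumptions, not a reference implementation.

```python
# Sketch of shard-based unlearning: train one model per disjoint shard,
# aggregate predictions by vote, and retrain only the shards touched by
# a deletion request. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_shards(X, y, n_shards=4):
    """Split the data into disjoint shards and train one model per shard."""
    shard_idx = np.array_split(np.arange(len(X)), n_shards)
    models = [LogisticRegression(max_iter=1000).fit(X[i], y[i]) for i in shard_idx]
    return models, shard_idx

def predict(models, X):
    """Aggregate the shard models by majority vote."""
    votes = np.stack([m.predict(X) for m in models])
    return np.round(votes.mean(axis=0))

def unlearn(models, shard_idx, X, y, forget_ids):
    """Forget points by retraining only the shards that contain them."""
    forget = set(forget_ids)
    for s, idx in enumerate(shard_idx):
        if forget & set(idx.tolist()):
            keep = np.array([i for i in idx if i not in forget])
            models[s] = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
    return models

# Toy usage: delete two training points without retraining everything.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
models, shard_idx = train_shards(X, y)
models = unlearn(models, shard_idx, X, y, forget_ids=[3, 17])
print("accuracy after unlearning:", (predict(models, X) == y).mean())
```

The trade-off is characteristic of unlearning methods: smaller shards make deletions cheaper, but each constituent model sees less data, which can cost accuracy.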

However, machine unlearning faces challenges, including efficiency, standardization of evaluation metrics, validation of efficacy, privacy preservation, compatibility with existing ML models, and scalability to handle large datasets.

To address these challenges, interdisciplinary collaboration between AI experts, data privacy lawyers, and ethicists is required. Google has launched a machine unlearning challenge to unify evaluation metrics and foster innovative solutions.

Looking ahead, advancements in hardware and infrastructure will support the computational demands of machine unlearning. Collaborative efforts between legal professionals, ethicists, and AI researchers can align unlearning algorithms with ethical and legal standards. Increased public awareness and potential policy and regulatory changes will also shape the development and application of machine unlearning.

Businesses using large datasets are advised to understand and adopt machine unlearning strategies to proactively manage data privacy concerns. This includes monitoring research, implementing data handling rules, considering interdisciplinary teams, and preparing for retraining costs.

Machine unlearning is crucial for responsible AI, improving data handling capabilities while maintaining model quality. Although challenges remain, steady progress is being made on efficient unlearning algorithms, and businesses that adopt them early will be better placed to manage data privacy obligations as the field matures.

Employees at Fortune 1000 telecom companies are among the most exposed on the dark web, researchers report

A recent report by threat intelligence firm SpyCloud has shed light on how heavily exposed the credentials of employees at Fortune 1000 telecommunications companies are on the dark web. Researchers uncovered approximately 6.34 million credential pairs (corporate email addresses and passwords) likely belonging to employees in the telecommunications sector.

The report describes this as an ‘extreme’ rate of exposure compared to other sectors. By comparison, SpyCloud uncovered 7.52 million credential pairs belonging to employees in the tech sector, but that figure spans a significantly larger pool of 167 Fortune 1000 companies, meaning the per-company exposure in telecommunications is far higher.

Media reports note that these findings underscore the heightened risk faced by employees in the telecommunications industry, whose credentials are readily available on dark web platforms. The compromised credentials pose a significant threat to the affected individuals and their employers, as cybercriminals can exploit them for unauthorised access, data breaches, and targeted attacks.

Western Digital, a technology company, confirms that hackers stole customer data

Western Digital has notified its customers about the March 2023 data breach, confirming that customer data was stolen.

In a press release, the company said it had worked with external forensic experts and determined that the hackers obtained a copy of a database containing limited personal information of online store customers. The exact number of affected customers has not been disclosed, and those affected have been advised to remain vigilant against potential phishing attempts.

The breach was first reported in early April, when the company disclosed it had suffered a cyberattack. TechCrunch reported that an ‘unnamed’ hacking group had breached Western Digital and claimed to have stolen ten terabytes of data.

The hackers subsequently published some of the stolen data and threatened to release more if their demands were not met. Western Digital has restored the majority of its impacted systems and services and continues to investigate the incident.

Ransomware group leaks MSI’s private code signing keys on the dark web

The ransomware gang that targeted Taiwanese PC manufacturer MSI has published the company’s private code signing keys on its dark web leak site. The attack, orchestrated by the group known as Money Message, was announced in early April, when the group revealed it had breached the systems of MSI, a multinational IT corporation headquartered in Taipei, Taiwan, and renowned for producing and distributing motherboards and graphics cards worldwide, including in the USA and Canada.

The criminal group reportedly first demanded a ransom from MSI, threatening to publish the stolen files if its demands were not met by a specified deadline; it has now exposed MSI’s private code signing keys on its dark web leak site. These keys are significant because they are used to authenticate the legitimacy and integrity of software and firmware updates released by the company. Malicious actors could misuse them to distribute malware or carry out other attacks that put MSI’s customers at risk. The company now faces the daunting task of mitigating the fallout from this exposure and bolstering its cybersecurity measures to prevent further unauthorised access.
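
To illustrate why leaked signing keys are so dangerous, here is a minimal Python sketch of code signing using the cryptography package. The keys, payloads, and flow are invented for illustration; real firmware signing pipelines involve certificate chains and hardware-protected keys.

```python
# Minimal sketch of code signing: whoever holds the private key can
# produce updates that pass verification. Illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()  # stands in for a vendor's private key
public_key = vendor_key.public_key()       # shipped with devices and updaters

firmware = b"legitimate firmware image"
signature = vendor_key.sign(firmware)
public_key.verify(signature, firmware)     # passes: devices install the update

try:
    public_key.verify(signature, b"tampered firmware image")
except InvalidSignature:
    print("tampered update rejected")      # signing normally blocks tampering

# If the private key leaks, an attacker can sign arbitrary payloads that
# verify just as cleanly, so devices would trust them:
malicious = b"attacker payload"
public_key.verify(vendor_key.sign(malicious), malicious)  # also passes
```

This is why quickly revoking and rotating compromised keys matters: until every device stops trusting the leaked key, attacker-signed updates are indistinguishable from genuine ones.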

ICANN launches project to look at what drives malicious domain name registrations

The Internet Corporation for Assigned Names and Numbers (ICANN) has launched a project to explore why malicious actors choose to register domain names with certain registrars over others. The project, called Inferential Analysis of Maliciously Registered Domains (INFERMAL), will systematically analyse the preferences of cyberattackers and possible measures to mitigate malicious activities across top-level domains (TLDs). It is funded as part of ICANN’s Domain Name System (DNS) Security Threat Mitigation Program, which aims to reduce the prevalence of DNS security threats across the Internet.

The team leading the project intends to collect and analyse a comprehensive list of domain name registration policies relevant to would-be attackers, and then use statistical modelling to identify the registration factors attackers prefer. The findings could help registrars and registries adopt relevant DNS anti-abuse practices, strengthen self-regulation across the domain name industry, and reduce the costs associated with domain name regulation. The project should also raise the security of domain names and, with it, the trust of end users.
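
As a rough illustration of the kind of statistical modelling the project describes, the sketch below fits a logistic regression over hypothetical registration-policy features to see which correlate with malicious registrations. The features, coefficients, and data are entirely invented assumptions; INFERMAL’s actual methodology and variables may differ.

```python
# Hypothetical sketch of modelling attacker preferences: regress
# "domain was maliciously registered" on registration-policy features.
# All features, effect sizes, and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.uniform(1, 30, n),   # registration price in USD
    rng.integers(0, 2, n),   # bulk/API registration available (0/1)
    rng.integers(0, 2, n),   # identity verification enforced (0/1)
])
# Synthetic ground truth: in this toy world, cheap, API-friendly,
# low-verification registrars attract more malicious registrations.
logits = -0.15 * X[:, 0] + 1.2 * X[:, 1] - 1.5 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(["price", "api_registration", "id_checks"], model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")  # sign hints at attacker preference
```

In a study of this shape, the sign and magnitude of each fitted coefficient would hint at which registrar policies attract or deter abusive registrations.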

Data poisoning – a new type of cyberattack against AI systems

Data poisoning is a new type of cyberattack aimed at misleading AI systems. Because AI is developed by processing huge amounts of data, the quality of that data determines the quality of the resulting model. Data poisoning is the intentional supply of wrong or misleading data to degrade a model’s behaviour, and it is becoming particularly risky with the development of large language models (LLMs) such as ChatGPT.

Researchers from the Swiss Federal Institute of Technology (ETH) in Zurich, Google, NVIDIA, and Robust Intelligence recently published a preprint paper investigating the feasibility of data poisoning attacks against machine learning (ML) models. By injecting corrupted data into an existing training dataset, they were able to influence the behaviour of models trained on it, degrading the functionality of the resulting AI systems.
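
A toy example makes the mechanism concrete. The Python sketch below poisons a simple classifier by flipping a fraction of its training labels and measures how test accuracy degrades; the dataset, model, and flip strategy are illustrative assumptions, and real-world poisoning of large models is far more subtle.

```python
# Toy label-flipping poisoning attack: corrupt a fraction of the training
# labels and watch test accuracy fall. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_with_poison(flip_fraction):
    """Flip the labels of a random fraction of training points, then evaluate."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poison(frac):.3f}")
```

Even this crude attack shows the pattern: the more poisoned data reaches the training set, the further the model drifts from correct behaviour, typically without any visible change to the training pipeline itself.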

As AI systems become larger and more complex, detecting data poisoning attacks will only get harder. The risks are greatest around politically charged topics, where subtly skewed training data can quietly shape a model’s outputs.