South Korea’s National Intelligence Service (NIS) has raised concerns about the Chinese AI app DeepSeek, accusing it of collecting excessive amounts of personal data and using that data for training purposes. The agency warned government bodies last week to take security measures, highlighting that, unlike other AI services, DeepSeek collects sensitive data such as keyboard input patterns and transfers it to Chinese servers. Some South Korean government ministries have already blocked access to the app over these security concerns.
The NIS also pointed out that DeepSeek grants advertisers unrestricted access to user data and stores South Korean users’ data in China, where it could be accessed by the Chinese government under local laws. The agency also noted discrepancies in the app’s responses to sensitive questions, such as the origin of kimchi, which DeepSeek claimed was Chinese when asked in Chinese, but Korean when asked in Korean.
DeepSeek has also been accused of censoring political topics: asked about the 1989 Tiananmen Square crackdown, the app suggests changing the subject. In response to these concerns, China’s foreign ministry stated that the country values data privacy and security and complies with relevant laws, denying that it pressures companies to violate privacy. DeepSeek has not yet commented on the allegations.
The Central African Republic made waves on 10 February by announcing the launch of its meme coin, CAR. The news came directly from President Faustin-Archange Touadéra’s official X account, presenting the token as an experiment to unite people and boost national development. The meme coin, launched on the Solana-based Pump.fun platform, saw its value surge rapidly as traders rushed to invest in what was described as the first-ever national meme coin.
However, excitement soon turned to scepticism. AI detection tools flagged the president’s announcement video as potentially AI-generated, raising concerns about its authenticity. The project’s official X account was swiftly suspended, and further scrutiny revealed that its domain had been registered just days before the announcement through Namecheap, a budget-friendly provider. Shortly after, Namecheap took the website offline, classifying it as an ‘abusive service.’
Despite these red flags, the CAR token reached a peak valuation of $527 million before dropping to $460 million. The controversy comes amid a rise in fraudulent memecoin launches, with recent cases involving hacked X accounts of high-profile figures. While there is still no clear confirmation on whether CAR is an official government-backed initiative or an elaborate scam, the crypto community remains on high alert.
South Korea has temporarily blocked employee access to the services of Chinese AI startup DeepSeek over security concerns. A government notice urged ministries and agencies to exercise caution when using AI services, including DeepSeek and ChatGPT. Korea Hydro & Nuclear Power, the defence ministry, and the foreign ministry have all imposed restrictions on DeepSeek access.
Australia and Taiwan have already banned DeepSeek from government devices, citing security risks. Italy previously ordered the company to block its chatbot over privacy concerns. Authorities in the US, India, and parts of Europe are also reviewing the implications of using the AI service. South Korea’s privacy watchdog plans to question DeepSeek on its handling of user data.
Korean businesses are also tightening restrictions on generative AI. Kakao Corp, despite its recent partnership with OpenAI, has advised employees to avoid using DeepSeek. SK Hynix has limited access to generative AI services, and Naver has asked employees not to use AI tools that store data externally.
DeepSeek has not yet responded to requests for comment. The company’s latest AI models, released last month, have drawn attention for their capabilities and cost efficiency. However, growing security concerns are leading governments and corporations to impose stricter controls on their use.
OpenAI is set to air its first-ever television advert during the upcoming Super Bowl, marking its entry into commercial advertising. The Wall Street Journal reported that the AI company will join other major tech firms in leveraging the massive Super Bowl audience to promote its brand. Google previously used the event to highlight its AI capabilities.
The Super Bowl is one of the most sought-after advertising platforms, with high costs reflecting its enormous reach. A 30-second slot for the 2025 game has sold for up to $8 million, an increase from $7 million last year.
The 2024 Super Bowl attracted an estimated 210 million viewers, and this year’s event will take place in New Orleans on 9 February at the Caesars Superdome.
OpenAI has seen rapid growth since launching ChatGPT in 2022, reaching over 300 million weekly active users. The company is in talks to raise up to $40 billion at a $300 billion valuation and recently appointed Kate Rouch as its first chief marketing officer. Microsoft holds a significant stake in the AI firm.
Luca Casarini, a prominent Italian migrant rescue activist, was warned by Meta that his phone had been targeted with spyware. The alert arrived through WhatsApp on the same day that Meta accused surveillance firm Paragon Solutions of using advanced hacking methods to steal user data. Paragon, reportedly American-owned, has not responded to the allegations.
Casarini, who co-founded the Mediterranea Saving Humans charity, has faced legal action in Italy over his rescue work. He has also been a target of anti-migrant media and previously had his communications intercepted in a case related to alleged illegal immigration. He remains unaware of who attempted to hack his device or whether the attack had judicial approval.
The revelation follows a similar warning issued to Italian journalist Francesco Cancellato, whose investigative news outlet, Fanpage, recently exposed far-right sympathies within Prime Minister Giorgia Meloni’s political youth wing. Italy’s interior ministry has yet to comment on the situation.
Australia has banned Chinese AI startup DeepSeek from all government devices, citing security risks. The directive, issued by the Department of Home Affairs, requires all government entities to prevent the installation of DeepSeek’s applications and remove any existing instances from official systems. Home Affairs Minister Tony Burke stated that the immediate ban was necessary to safeguard Australia’s national security.
The move follows similar action taken by Italy and Taiwan, with other countries also reviewing potential risks posed by the AI firm. DeepSeek has drawn global attention for its cost-effective AI models, which have disrupted the industry by operating with lower hardware requirements than competitors. The rapid rise of the company has raised concerns over data security, particularly regarding its Chinese origins.
This is not the first time Australia has taken such action against a Chinese technology firm. Two years ago, the government imposed a nationwide ban on TikTok for similar security reasons. As scrutiny over AI intensifies, more governments may follow Australia’s lead in limiting DeepSeek’s reach within public sector networks.
Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled ‘AI applications we will not pursue,’ explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.
The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google’s ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. ‘Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,’ she said.
With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape—and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.
Multiple Russian cybersecurity firms have published research reports on emerging threats, including a large-scale information-stealing campaign targeting local organisations using the Nova malware.
According to a report from Moscow-based BI.ZONE, Nova is commercial malware sold as a service on dark web marketplaces, with prices ranging from $50 for a monthly licence to $630 for a lifetime licence. Nova is a variant of SnakeLogger, a widely used malware known for stealing sensitive information.
While the developers of Nova remain unidentified, the code contains strings in Polish, and a Telegram group dedicated to promoting and supporting the malware was created in August 2024. The scale of the campaign and the full extent of its impact on Russian organisations remain unclear.
Over the weekend, F.A.C.C.T. reported a cyberespionage campaign targeting chemical, food, and pharmaceutical companies in Russia, attributing the attacks to a state-backed group named Rezet (or Rare Wolf). Meanwhile, Solar reported an attack on Russian industrial facilities by the newly identified group APT NGC4020, which exploited a vulnerability in a SolarWinds tool.
The Nova malware collects a wide range of data, including saved authentication credentials, keystrokes, screenshots, and clipboard content. This stolen data can be used in a variety of malicious activities, such as facilitating ransomware attacks. The malware is distributed through phishing emails, often disguised as contracts, to trick employees in organisations that handle high volumes of email correspondence.
Ofcom has ended its investigation into whether under-18s are accessing OnlyFans but will continue to examine whether the platform provided complete and accurate information during the inquiry. The media regulator stated that it would remain engaged with OnlyFans to ensure the platform implements appropriate measures to prevent children from accessing restricted content.
The investigation, launched in May, sought to determine whether OnlyFans was doing enough to protect minors from pornography. Ofcom stated that while no findings were made, it reserves the right to reopen the case if new evidence emerges.
OnlyFans maintains that its age assurance measures, which require users to be at least 20 years old, are sufficient to prevent underage access. A company spokesperson reaffirmed its commitment to compliance and child protection, emphasising that its policies have always met regulatory standards.
Kaspersky has uncovered dangerous malware hidden in software development kits used to create Android and iOS apps. The malware, known as SparkCat, scans images on infected devices for crypto wallet recovery phrases, allowing hackers to steal funds without needing passwords. It also targets other sensitive data captured in screenshots, such as passwords and private messages.
The malware uses Google’s ML Kit OCR to extract text from images and has been downloaded around 242,000 times, primarily affecting users in Europe and Asia. It is embedded in dozens of real and fake apps on Google’s Play Store and Apple’s App Store, disguised as analytics modules. Kaspersky’s researchers suspect a supply chain attack or intentional embedding by developers.
While the origin of the malware remains unclear, analysis of its code suggests the developer is fluent in Chinese. Security experts advise users to avoid storing sensitive information in images and to remove any suspicious apps. Google and Apple have yet to respond to the findings.