Australia has banned Chinese AI startup DeepSeek from all government devices, citing security risks. The directive, issued by the Department of Home Affairs, requires all government entities to prevent the installation of DeepSeek’s applications and remove any existing instances from official systems. Home Affairs Minister Tony Burke stated that the immediate ban was necessary to safeguard Australia’s national security.
The move follows similar action taken by Italy and Taiwan, with other countries also reviewing potential risks posed by the AI firm. DeepSeek has drawn global attention for its cost-effective AI models, which have disrupted the industry by operating with lower hardware requirements than competitors. The rapid rise of the company has raised concerns over data security, particularly regarding its Chinese origins.
This is not the first time Australia has taken such action against a Chinese technology firm. Two years ago, the government imposed a nationwide ban on TikTok for similar security reasons. As scrutiny over AI intensifies, more governments may follow Australia’s lead in limiting DeepSeek’s reach within public sector networks.
Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled ‘AI applications we will not pursue,’ explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.
The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google’s ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. ‘Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,’ she said.
With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape—and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.
Multiple Russian cybersecurity firms have published research reports on emerging threats, including a large-scale information-stealing campaign targeting local organisations using the Nova malware.
According to a report from Moscow-based BI.ZONE, Nova is a commercial malware sold as a service on dark web marketplaces. Prices range from $50 for a monthly license to $630 for a lifetime license. Nova is a variant of SnakeLogger, a widely used malware known for stealing sensitive information.
While the developers of Nova remain unidentified, the code contains strings in Polish, and a Telegram group dedicated to promoting and supporting the malware was created in August 2024. The scale of the campaign and the full extent of its impact on Russian organisations remain unclear.
Over the weekend, F.A.C.C.T. reported a cyberespionage campaign targeting chemical, food, and pharmaceutical companies in Russia, attributing the attacks to a state-backed group named Rezet (or Rare Wolf). Meanwhile, Solar reported an attack on Russian industrial facilities by the newly identified group APT NGC4020, which exploited a vulnerability in a SolarWinds tool.
The Nova malware collects a wide range of data, including saved authentication credentials, keystrokes, screenshots, and clipboard content. This stolen data can be used in a variety of malicious activities, such as facilitating ransomware attacks. The malware is distributed through phishing emails, often disguised as contracts, to trick employees in organisations that handle high volumes of email correspondence.
Ofcom has ended its investigation into whether under-18s are accessing OnlyFans but will continue to examine whether the platform provided complete and accurate information during the inquiry. The media regulator stated that it would remain engaged with OnlyFans to ensure the platform implements appropriate measures to prevent children from accessing restricted content.
The investigation, launched in May, sought to determine whether OnlyFans was doing enough to protect minors from pornography. Ofcom stated that while no findings were made, it reserves the right to reopen the case if new evidence emerges.
OnlyFans maintains that its age assurance measures, which require users to be at least 20 years old, are sufficient to prevent underage access. A company spokesperson reaffirmed its commitment to compliance and child protection, emphasising that its policies have always met regulatory standards.
Kaspersky Labs has uncovered a dangerous malware hidden in software development kits used to create Android and iOS apps. The malware, known as SparkCat, scans images on infected devices to find crypto wallet recovery phrases, allowing hackers to steal funds without needing passwords. It also targets other sensitive data stored in screenshots, such as passwords and private messages.
The malware uses Google’s ML Kit OCR to extract text from images and has been downloaded around 242,000 times, primarily affecting users in Europe and Asia. It is embedded in dozens of apps, both genuine and fake, on Google’s Play Store and Apple’s App Store, disguised as analytics modules. Kaspersky’s researchers suspect a supply chain attack or intentional embedding by developers.
While the origin of the malware remains unclear, analysis of its code suggests the developer is fluent in Chinese. Security experts advise users to avoid storing sensitive information in images and to remove any suspicious apps. Google and Apple have yet to respond to the findings.
India’s finance ministry has issued an advisory urging employees to refrain from using AI tools like ChatGPT and DeepSeek for official tasks, citing concerns over the potential risks to the confidentiality of government data. The directive, dated January 29, highlights the dangers of AI apps on office devices, warning that they could jeopardise the security of sensitive documents and information.
This move comes amid similar actions taken by other countries such as Australia and Italy, which have restricted the use of DeepSeek due to data security concerns. The advisory surfaced just ahead of OpenAI CEO Sam Altman’s visit to India, where he is scheduled to meet with the IT minister.
Representatives from India’s finance ministry, OpenAI, and DeepSeek have yet to comment on the matter. It remains unclear whether other Indian ministries have implemented similar measures.
A former Google software engineer faces additional charges in the US for allegedly stealing AI trade secrets to benefit Chinese companies. Prosecutors announced a 14-count indictment against Linwei Ding, also known as Leon Ding, accusing him of economic espionage and theft of trade secrets. Each charge carries significant prison terms and fines.
Ding, a Chinese national, was initially charged last March and remains free on bond. His case is being handled by a US task force established to prevent the transfer of advanced technology to countries such as China and Russia.
Prosecutors claim Ding stole information on Google’s supercomputing data centres used to train large AI models, including confidential chip blueprints intended to give the company a competitive edge.
Ding allegedly began his thefts in 2022 after being recruited by a Chinese technology firm. By 2023, he had uploaded over 1,000 confidential files and shared a presentation with employees of a startup he founded, citing China’s push for AI development.
Google has cooperated with authorities but has not been charged in the case. Discussions between prosecutors and defence lawyers indicate the case may go to trial.
Donald Trump has said there is significant interest in purchasing TikTok, as his administration looks to broker a sale of the Chinese-owned app. The president posted on Truth Social, stating that such a deal would benefit China and all involved parties.
The fate of TikTok remains uncertain following a US law that requires ByteDance, its Chinese parent company, to sell the app or face a nationwide ban. The law came into effect on 19 January, raising concerns over national security and data privacy.
After taking office, Trump signed an executive order delaying the enforcement of the law by 75 days, allowing more time for negotiations. The move has reignited debate over foreign ownership of technology platforms and their impact on US security.
Belgium’s new government, led by Prime Minister Bart De Wever, has announced plans to use AI tools in law enforcement, including facial recognition technology for detecting criminals. The initiative will be overseen by Vanessa Matz, the country’s first federal minister for digitalisation, AI, and privacy. The AI policy is set to comply with the EU’s AI Act, which prohibits real-time facial recognition in public spaces but allows narrow exceptions for law enforcement under strict conditions.
Alongside AI applications, the Belgian government also aims to combat disinformation by promoting transparency in online platforms and increasing collaboration with tech companies and media. Its digitalisation agenda further includes a long-term strategy to improve telecom infrastructure, focusing on providing ultra-fast internet access to all companies by 2030 and preparing for potential 6G rollouts.
The government has outlined a significant digital strategy that seeks to balance technological advancements with strong privacy and legal protections. As part of this, they are working on expanding camera legislation for smarter surveillance applications. These moves are part of broader efforts to strengthen the country’s digital capabilities in the coming years.
The US Treasury is facing a lawsuit over claims it unlawfully granted Elon Musk’s Department of Government Efficiency (DOGE) access to millions of Americans’ financial and personal data. The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) filed the lawsuit in Washington, DC, accusing the Treasury and Secretary Scott Bessent of illegally sharing sensitive information.
The lawsuit follows concerns raised by US Senator Ron Wyden, who alleged that DOGE had full access to the Treasury’s payments system, which includes names, Social Security numbers, bank account details, and other private data. Prominent Democrats, including Senate leader Chuck Schumer and Senator Elizabeth Warren, have condemned the move, arguing that DOGE lacks any legal authority over federal spending or data access.
Schumer has pledged to introduce legislation to prevent further interference, stating that DOGE is not a legitimate government agency. Warren warned that the system is now “at the mercy of Elon Musk,” raising fears over potential misuse of financial records. The Treasury and DOGE have yet to respond to the allegations.