Ex-Google worker indicted for alleged AI espionage

A former Google software engineer faces additional charges in the US for allegedly stealing AI trade secrets to benefit Chinese companies. Prosecutors announced a 14-count indictment against Linwei Ding, also known as Leon Ding, accusing him of economic espionage and theft of trade secrets. Each charge carries significant prison terms and fines.

Ding, a Chinese national, was initially charged last March and remains free on bond. His case is being handled by a US task force established to prevent the transfer of advanced technology to countries such as China and Russia.

Prosecutors claim Ding stole information on Google’s supercomputing data centres used to train large AI models, including confidential chip blueprints intended to give the company a competitive edge.

Ding allegedly began stealing files in 2022, shortly after being recruited by a Chinese technology firm. By 2023, he had uploaded more than 1,000 confidential files and had shared a presentation citing China’s push for AI development with employees of a startup he had founded.

Google has cooperated with authorities but has not been charged in the case. Discussions between prosecutors and defence lawyers indicate the case may go to trial.

DeepSeek’s impact on power demand remains uncertain in Japan

Japan’s industry ministry acknowledges concerns that expanding data centres could drive up electricity consumption but finds it difficult to predict how demand may shift due to a single technology such as DeepSeek. The government’s latest draft energy plan, released in December, projects a 10-20% rise in electricity generation by 2040, citing increased AI-driven consumption.

DeepSeek, a Chinese AI startup, has raised questions about whether power demand will decline due to its potentially lower energy usage or increase as AI technology becomes more widespread and affordable. Analysts remain divided on the overall effect, highlighting the complexity of forecasting long-term energy trends.

Japan’s Ministry of Economy, Trade and Industry (METI) noted that AI-related energy demand depends on multiple factors, including improvements in performance, cost reductions, and energy-efficient innovations. The ministry emphasised that a single example cannot determine the future impact on electricity needs.

Economic growth and industrial competitiveness will rely on securing adequate decarbonised power sources to meet future demand. METI underscored the importance of balancing AI expansion with sustainable energy policies to maintain stability in Japan’s energy landscape.

White House expresses alarm over DeepSeek’s AI techniques

Top White House advisers have raised concerns that China’s DeepSeek may have used a technique known as “distillation”, in which one AI system learns from another, to replicate US AI models. This could allow DeepSeek to benefit from the extensive investments made by US rivals such as OpenAI without incurring the same costs. DeepSeek recently made waves by releasing an AI model that rivals those of US giants at a fraction of the cost and giving away the code for free. US tech companies, including OpenAI, are now investigating whether DeepSeek’s model may have improperly relied on distillation.

Distillation, while common in the AI industry, may violate the terms of service of models like OpenAI’s. The technique allows a newer, smaller model to absorb the learnings of a larger, more advanced one, often without detection, particularly when the larger model is openly available. Industry experts point out that blocking the practice is difficult, especially with freely available models such as Meta’s Llama and French startup Mistral’s offerings. Some US tech executives, however, are advocating stricter export controls and customer identification measures to limit such activities.
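
For readers unfamiliar with the term, the sketch below is a minimal, generic illustration of classic knowledge distillation in PyTorch: a small “student” network is trained to match the softened output distribution of a larger “teacher”. In the API setting described above, distillation often simply means training a smaller model on text generated by a larger one; the networks, sizes and random data here are placeholders chosen for illustration and are not attributed to DeepSeek, OpenAI or any other company.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy "teacher" and "student": the student is far smaller, which is the
    # point of distillation -- it learns to mimic the teacher's outputs
    # rather than being trained from scratch at full cost.
    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    temperature = 2.0  # softens the teacher's probability distribution

    for step in range(100):
        x = torch.randn(64, 32)          # stand-in for real input data
        with torch.no_grad():
            teacher_logits = teacher(x)  # in practice: outputs of a large model
        student_logits = student(x)

        # KL divergence between the softened teacher and student distributions.
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Because the student only needs the teacher’s outputs, not its weights or training data, the process is hard to detect from the outside, which is why the debate has turned to terms of service, export controls and customer identification rather than technical blocking.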

Despite the concerns, DeepSeek has not responded to the allegations, and OpenAI has stated it will work with the US government to protect its intellectual property. However, as AI technology continues to evolve, finding a way to prevent distillation may prove to be a complex challenge. The ongoing debate highlights the growing tensions between the US and China over the use of AI and other advanced technologies.

DeepSeek data exposed online before swift removal

Cybersecurity firm Wiz has discovered that Chinese AI startup DeepSeek inadvertently exposed sensitive data online. The New York-based firm found more than a million lines of unsecured information, including digital software keys and chat logs capturing user interactions with the company’s AI assistant.

DeepSeek acted swiftly to secure the data after Wiz reported the issue. Wiz’s chief technology officer noted that the exposure was easy to find, raising concerns that others may have accessed the information before it was taken down. DeepSeek has not commented on the incident.

The startup has gained rapid popularity, with its AI assistant surpassing ChatGPT in downloads from Apple’s App Store. Its rise has intensified competition in the AI sector, sparking debates about the sustainability of US tech giants’ business models and profit margins.

FBI and Europol target cybercrime networks in global crackdown

A global law enforcement operation has shut down a series of cybercrime websites used for selling stolen data, pirated software, and hacking tools. The FBI and Europol coordinated the takedown as part of ‘Operation Talent’, targeting platforms associated with Cracked, Nulled, StarkRDP, Sellix, and MySellix.

Seizure notices appeared on the affected websites, and officials confirmed that information on customers and victims had also been obtained. Europol stated that further details would be released within 24 hours, while the FBI has not yet commented on the operation.

Reports suggest that the targeted sites played various roles in the cybercrime ecosystem, facilitating the trade of stolen login credentials, compromised credit card details, and video game cheats. A message in a Cracked Telegram channel acknowledged the seizure, with administrators expressing uncertainty over the next steps.

Authorities continue to investigate, with the crackdown highlighting ongoing efforts to disrupt cybercriminal networks. More updates are expected as officials analyse the seized data and determine potential follow-up actions.

OpenAI warns about Chinese firms accessing US AI

OpenAI has raised concerns about Chinese companies attempting to access US AI technologies to enhance their models. In a statement released on Tuesday, OpenAI highlighted the critical need to protect its intellectual property and the most advanced capabilities in its AI systems. The company emphasised that it has put in place countermeasures to safeguard its innovations and is working closely with the US government to protect the technology from being exploited by competitors and adversaries.

These comments come in response to the White House’s ongoing review of national security risks posed by Chinese AI companies, particularly the rapidly growing startup DeepSeek. The US government has been looking into potential threats as China increasingly seeks to advance its AI capabilities. David Sacks, the White House’s AI and crypto czar, explained that Chinese firms are using an AI technique called “distillation,” which allows them to extract knowledge from leading US AI models, further raising concerns about intellectual property theft.

OpenAI’s statement underscores the challenges and security risks that arise as AI becomes a critical technology with broad applications, from national defence to economic competitiveness. The company’s efforts to protect its proprietary AI models are part of a broader push by the US to ensure that its technological edge is not compromised by foreign competitors who might attempt to bypass intellectual property protections. The situation highlights the increasing geopolitical tension surrounding AI development, especially as China continues to make significant strides in the field.

India’s copyright lawsuit targets OpenAI and AI use

Microsoft-backed OpenAI is seeking to prevent some of India’s largest media organisations, including those linked to Gautam Adani and Mukesh Ambani, from joining a copyright lawsuit. The case, initiated by news agency ANI last year, involves claims that AI systems like ChatGPT use copyrighted material without permission, sparking a wider debate over AI and intellectual property in the country. India ranks as OpenAI’s second-largest market by user numbers, following the US.

OpenAI has argued its AI services rely only on publicly available data and adhere to fair use principles. During Tuesday’s hearing, OpenAI’s lawyer opposed bids by additional media organisations to join the case, stating he would submit formal objections in writing. The company has also challenged the court’s jurisdiction, asserting that its servers are located outside India. The case is scheduled to continue in February.

The Federation of Indian Publishers has accused ChatGPT of harming its members’ business by summarising books from unlicensed online sources. OpenAI denies these claims, maintaining its tools do not infringe copyright. Prominent digital media groups, including the Indian Express and Hindustan Times, allege ChatGPT scrapes and reproduces their content, prompting their involvement in the lawsuit.

Tensions escalated over media coverage of the case, with OpenAI objecting to reports based on non-public court filings. Lawyers representing media groups called such claims unfounded. The lawsuit is poised to shape the future of AI and copyright law in India, as courts worldwide grapple with similar challenges.

OpenAI faces legal action from Indian news companies

Several prominent Indian media outlets, including those owned by billionaires Gautam Adani and Mukesh Ambani, are taking legal action against OpenAI. These outlets, such as NDTV and Network18, along with organisations like the Indian Express and Hindustan Times, have filed to join an ongoing lawsuit against OpenAI in a New Delhi court. They allege that OpenAI has been improperly scraping their copyrighted content to train its AI model, ChatGPT, without permission or payment.

The legal claim, which is being led by the Digital News Publishers Association (DNPA), argues that OpenAI’s practices pose a significant threat to the copyrights of its members. The publishers claim that OpenAI’s actions amount to ‘wilful scraping’ and the use of their work for commercial gain, especially as the company generates revenue through ads linked to AI-generated content. This lawsuit highlights broader concerns in the media industry about the influence of large tech companies on content distribution and monetisation.

The legal proceedings are part of a larger global trend, with authors, musicians, and news organisations worldwide suing AI firms for using their works without compensation. In the US, the New York Times has filed a similar lawsuit against OpenAI and its major backer, Microsoft. This new case in India adds significant pressure to OpenAI, which has denied the allegations, arguing that its AI systems rely on publicly available data and that deleting such data could violate US law.

The Indian plaintiffs argue that OpenAI’s failure to strike content-sharing deals with local publishers, while it has done so with international media outlets, undermines the business of Indian news companies. The publishers warn that OpenAI’s practices could weaken the media landscape and negatively impact democracy, calling for greater protection of intellectual property in the age of AI.

Paul McCartney warns AI could exploit artists

Paul McCartney has raised concerns about AI potentially ‘ripping off’ artists, urging the British government to ensure that upcoming copyright reforms protect creative industries. In a recent BBC interview, McCartney warned that without proper protections, only tech giants would benefit from AI’s ability to produce content using works created by artists without compensating the original creators.

The music and film industries are facing legal and ethical challenges around AI, as models can generate content based on existing works without paying for the rights to the original material. In response, the UK government has proposed a system in which artists can license their works for AI training, alongside an exception allowing AI developers to use, at scale, material whose rights holders have not reserved their rights.

McCartney emphasised that while AI has its merits, it should not be used to exploit artists. He highlighted the risk that young creators could lose control over their works, with profits going to tech companies rather than the artists themselves. ‘It should be the person who created it’ who benefits, he said, urging that artists’ rights be prioritised in the evolving landscape of AI.

Davos spotlight: AI regulation needs global consistency

The CEO of Japanese IT giant NTT DATA has called for global standards in AI regulation to mitigate the risks posed by the rapidly advancing technology. Speaking at the World Economic Forum in Davos, Switzerland, Abhijit Dubey emphasised that inconsistent regulations could lead to significant challenges. He argued that standardised global rules are essential for addressing issues like intellectual property protection, energy efficiency, and combating deepfakes.

Dubey pointed out that the key to unlocking AI’s potential lies not in the technology itself, which he believes will continue to improve rapidly, but in ensuring businesses are prepared to adopt it. A company’s ability to leverage AI, he said, depends on the readiness of its workforce and the robustness of its data architecture.

He stressed that companies must align their AI strategies with their broader business objectives to maximise productivity gains. ‘The biggest issue isn’t the technology; it’s whether organisations are set up to implement it effectively,’ Dubey noted.

The discussion at Davos highlighted the urgent need for collaboration among governments, businesses, and industry leaders to create cohesive AI regulations that balance innovation with risk management.