Australia’s government recently passed laws banning social media access for children under 16, targeting platforms like TikTok, Snapchat, Instagram, Facebook, and X. However, YouTube was granted an exemption, with the government arguing that it serves as a valuable educational tool and is not a ‘core social media application.’ That decision followed input from company executives and educational content creators, who argued that YouTube is essential for learning and information-sharing. While the government claims broad community support for the exemption, some experts believe this undermines the goal of protecting children from harmful online content.
Mental health and extremism experts have raised concerns that YouTube exposes young users to dangerous material, including violent, extremist, and addictive content. Despite its exemption from the ban, YouTube has been criticised for its algorithm, which researchers say can promote far-right ideologies, misogyny, and conspiracy theories to minors. Academic studies have shown that the platform can surface problematic content within minutes of a search, including harmful videos on topics like sex, COVID-19, and European history.
To test these claims, Reuters created child accounts and found that searches led to content promoting extremism and hate speech. Although YouTube removed some flagged videos, others remain on the platform. YouTube stated that it is actively working to improve its content moderation systems and that it has removed content violating its policies. However, critics argue that the platform’s algorithm still allows harmful content to thrive, especially among younger users.
Apple has announced that its AI suite, Apple Intelligence, will support additional languages starting in April, including French, German, Italian, Portuguese, Spanish, Japanese, Korean, and Simplified Chinese. The update will also introduce localised English versions for India and Singapore, broadening access to the technology beyond its initial US English release.
The expansion follows a December update that brought support for various English dialects, including those used in Australia, Canada, New Zealand, South Africa, and the UK. However, Apple has yet to confirm when its AI suite will be available in the EU or mainland China.
CEO Tim Cook also revealed that the next version of Siri, which will feature improved on-screen contextual understanding, is expected to launch in the coming months. The update marks Apple’s latest effort to strengthen its AI ecosystem and compete with rivals in the artificial intelligence space.
The US Commerce Department is investigating whether DeepSeek, the Chinese AI company that recently launched a high-performing assistant, has been using US chips in violation of export restrictions. Such chips are barred from shipment to China, and DeepSeek’s rapid rise has raised questions about how the company obtained its computing power. Within days of launching, its app became the most downloaded on Apple’s App Store, contributing to a significant drop in US tech stocks, which lost around $1 trillion in value.
The US has imposed strict limits on the export of advanced AI chips to China, particularly those made by Nvidia. These restrictions aim to prevent China from accessing the most sophisticated AI processors. However, reports suggest that AI chip smuggling from countries like Malaysia, Singapore, and the UAE may be circumventing these measures. DeepSeek has admitted to using Nvidia’s H800 chips, which were legally purchased in 2023, but it is unclear whether it has used other restricted components.
The controversy deepened when Anthropic’s CEO Dario Amodei commented that DeepSeek’s AI chip fleet likely includes both legal and smuggled chips, some of which were shipped before restrictions were fully enforced. While DeepSeek has claimed to use only the less powerful H20 chips, which are still permitted to be sold to China, the investigation continues into whether these practices undermine US efforts to limit China’s access to cutting-edge AI technologies.
Germany’s SAP is seeing increasing global demand for software that helps companies manage and document sustainability efforts, even as climate protection targets weaken in the US. SAP’s CFO, Dominik Asam, stated that the need for reliable sustainability data and analysis tools will remain strong, especially with growing investor focus on the issue. This comes as the US formally announced its intention to withdraw from the Paris climate agreement, a decision set to take effect in January 2026.
Despite the shifting political landscape, Asam remains optimistic about the future of sustainability initiatives. At the World Economic Forum in Davos, he spoke with many investors who continue to show strong interest in sustainability efforts. SAP is focusing on its Green Ledger software, which aims to make sustainability reporting as verifiable as financial reporting. This will become a requirement under the European Corporate Sustainability Reporting Directive (CSRD) in 2028.
While currently used mainly by SAP and chemical company Covestro, the software is expected to see broader adoption. Asam anticipates a surge in contracts in the latter half of this year, highlighting the growing importance of sustainability reporting for businesses worldwide.
Top White House advisers have raised concerns over China’s DeepSeek using a technique known as “distillation” to potentially replicate US AI models, a method where one AI system learns from another. This could allow DeepSeek to benefit from the extensive investments made by US rivals, such as OpenAI, without incurring the same costs. DeepSeek recently made waves by releasing an AI model that rivals those of US giants, at a fraction of the cost, and giving away the code for free. US tech companies, including OpenAI, are now investigating whether DeepSeek’s model may have improperly used this distillation method.
Distillation, while common in the AI industry, may violate the terms of service of models like OpenAI’s. The technique allows a newer, smaller model to absorb the knowledge of a larger, more advanced one, often without detection, especially when open-source models are involved. Industry experts have pointed out that blocking such practices is difficult, particularly with freely available models like Meta’s Llama and French startup Mistral’s offerings. Some US tech executives, however, are advocating for stricter export controls and customer identification measures to limit such activities.
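In practice, distillation usually means training the smaller ‘student’ model to match the output distribution of a larger ‘teacher’. The following is a minimal sketch of that core loop in Python with PyTorch; the architectures, temperature, and random data here are illustrative assumptions, not details of DeepSeek’s or OpenAI’s systems. In the API setting that OpenAI’s terms govern, a model’s generated outputs, rather than its raw logits, would serve as the training targets, but the principle is the same.

```python
# Minimal knowledge-distillation sketch (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical "teacher": a larger network whose behaviour the student imitates.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
# Hypothetical "student": a smaller, cheaper network.
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution over classes

for _ in range(100):
    x = torch.randn(32, 128)          # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)   # the "knowledge" being transferred

    student_logits = student(x)
    # KL divergence between temperature-scaled distributions; the T**2
    # factor keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the student only needs the teacher’s outputs, not its weights or training data, this is why experts quoted above describe the practice as hard to detect or block.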
Despite the concerns, DeepSeek has not responded to the allegations, and OpenAI has stated it will work with the US government to protect its intellectual property. However, as AI technology continues to evolve, finding a way to prevent distillation may prove to be a complex challenge. The ongoing debate highlights the growing tensions between the US and China over the use of AI and other advanced technologies.
Mexico has objected to Google’s decision to rename the Gulf of Mexico as the Gulf of America for US users on Google Maps. President Claudia Sheinbaum confirmed on Wednesday that her government will send an official letter to the tech giant demanding clarification.
The name change follows an announcement by the US government that it had officially rebranded the body of water. In response, Google stated that its platform displays local official names when they differ across countries.
The move has sparked concerns in Mexico over sovereignty and historical recognition. With the government pressing for an explanation, the issue highlights the growing tension between technology firms and national identities in the digital space.
Figure AI has announced the creation of the Centre for the Advancement of Humanoid Safety, a new initiative aimed at ensuring humanoid robots can operate safely in workplaces. Led by former Amazon Robotics safety engineer Rob Gruendel, the centre will focus on testing AI-controlled robots for stability, human detection, and navigation to minimise accidents.
The rise of humanoid robots in warehouses and factories has sparked concerns about their potential risks. Unlike traditional industrial robots, which were confined to cages, these machines move freely among workers, raising safety questions. Existing solutions, such as Amazon’s wearable safety vest and Veo Robotics’ vision-based systems, have helped, but regulation remains largely absent.
Figure AI plans to release regular safety reports detailing its progress, testing methods, and solutions for potential hazards. As companies push to integrate humanoid robots into daily operations, and eventually, into homes, the need for clear safety standards is becoming increasingly urgent.
Chinese AI startup DeepSeek has announced that its Janus-Pro-7B model has surpassed competitors, including OpenAI’s DALL-E 3 and Stability AI’s Stable Diffusion, in benchmark rankings for text-to-image generation. This achievement solidifies DeepSeek’s reputation as a key player in the rapidly evolving AI market.
According to a technical report, the Janus-Pro model builds upon its predecessor by incorporating enhanced training processes, higher-quality data, and advanced scaling, resulting in improved stability and more detailed image outputs. The company credited the inclusion of 72 million high-quality synthetic images, combined with real-world data, for the model’s superior performance.
This success follows the launch of DeepSeek’s new AI assistant based on the DeepSeek-V3 model, which has become the top-rated free app in the US Apple App Store. The news sent shockwaves through the tech industry, leading to declines in shares of companies like Nvidia and Oracle, as investors reassessed the competitive dynamics in AI development.
OpenAI and Stability AI have yet to comment on the claims. DeepSeek’s achievements highlight the growing influence of Chinese firms in cutting-edge AI innovation, setting the stage for heightened competition in the global tech market.
Paul McCartney has raised concerns about AI potentially ‘ripping off’ artists, urging the British government to ensure that upcoming copyright reforms protect creative industries. In a recent BBC interview, McCartney warned that without proper protections, only tech giants would benefit from AI’s ability to produce content using works created by artists without compensating the original creators.
The music and film industries are facing legal and ethical challenges around AI, as models can generate content based on existing works without paying for the rights to use the original material. In response, the UK government has proposed a system under which artists can license their works for AI training, alongside an exception that would let AI developers use material at scale where rights holders have not reserved their rights.
McCartney emphasised that while AI has its merits, it should not be used to exploit artists. He highlighted the risk that young creators could lose control over their works, with profits going to tech companies rather than the artists themselves. ‘It should be the person who created it’ who benefits, he said, urging that artists’ rights be prioritised in the evolving landscape of AI.
The CEO of Japanese IT giant NTT DATA has called for global standards in AI regulation to mitigate the risks posed by the rapidly advancing technology. Speaking at the World Economic Forum in Davos, Switzerland, Abhijit Dubey emphasised that inconsistent regulations could lead to significant challenges. He argued that standardised global rules are essential for addressing issues like intellectual property protection, energy efficiency, and combating deepfakes.
Dubey pointed out that the key to unlocking AI’s potential lies not in the technology itself, which he believes will continue to improve rapidly, but in ensuring businesses are prepared to adopt it. A company’s ability to leverage AI, he said, depends on the readiness of its workforce and the robustness of its data architecture.
He stressed that companies must align their AI strategies with their broader business objectives to maximise productivity gains. ‘The biggest issue isn’t the technology; it’s whether organisations are set up to implement it effectively,’ Dubey noted.
The discussion at Davos highlighted the urgent need for collaboration among governments, businesses, and industry leaders to create cohesive AI regulations that balance innovation with risk management.