Trump eyes TikTok sale: Four buyers in play

US President Donald Trump confirmed on Sunday that his administration is actively negotiating with four parties interested in purchasing TikTok, the immensely popular Chinese-owned social media platform.

Trump’s comments come amid continued uncertainty about TikTok’s future in the US, following security concerns that prompted legislation requiring the platform to be sold or face a ban.

Speaking to reporters aboard Air Force One, Trump expressed optimism about the potential deal, suggesting all four prospective buyers offered strong options.

Though Trump did not disclose specifics about the parties involved, recent reports indicate significant interest, notably from prominent businessman Frank McCourt, former owner of the Los Angeles Dodgers.

Analysts estimate TikTok’s value could reach up to $50 billion, which would make its sale one of the biggest tech deals in recent years.

The uncertainty around TikTok began escalating when the new law targeting the platform took effect on 19 January, requiring ByteDance, TikTok’s parent company, to divest the business due to national security concerns.

President Trump subsequently delayed enforcement of the law by signing an executive order granting a 75-day extension, providing additional time to facilitate a sale.

So far, neither TikTok nor ByteDance has publicly commented on Trump’s latest statements or the ongoing negotiations.

Meanwhile, the app’s tens of millions of American users continue to watch closely, hoping their favourite platform survives the political and economic storm surrounding it.

Japan to prioritise domestic cybersecurity solutions

Japan has announced plans to prioritise the use of domestic software for cybersecurity purposes, as part of an initiative to reduce the country’s reliance on foreign products in this critical sector.

The government intends to offer subsidies and promote technology standards that encourage the growth of the local cybersecurity industry. The move is also part of the government’s broader efforts to enhance cyber defence and strengthen national security.

As of 2021, Japanese domestic companies supplied around 40% of the nation’s cybersecurity countermeasure products. For newer products the share is far lower, with domestic offerings accounting for less than 10% of the latest cybersecurity technologies.

The move reflects Japan’s increasing focus on cybersecurity as a national priority, particularly in the face of rising global cyber threats. By fostering a stronger domestic cybersecurity ecosystem, Japan aims to enhance its resilience against cyberattacks.

Experts, however, warned that restricting foreign products could limit access to cutting-edge technologies, leaving the domestic industry less competitive in features, capabilities, and performance. This could hinder the effectiveness of cybersecurity defences.

To support this transition, the government plans to offer financial incentives and collaborate with local technology providers to establish standardized solutions that meet both national and international security requirements.

These efforts are part of a broader strategy to ensure that Japan’s critical infrastructure and businesses are better protected in the digital age.

For more information on these topics, visit diplomacy.edu.

Labour probe launched into Scale AI’s pay and working conditions

The United States Department of Labor is investigating Scale AI, a data labelling startup backed by Nvidia, Amazon, and Meta, over its compliance with fair pay and working condition requirements under the Fair Labor Standards Act.

The inquiry began nearly a year ago during Joe Biden’s presidency, with officials examining whether the company complies with federal labour regulations. Scale AI has been cooperating with the department to clarify its business practices and the evolving nature of the AI sector.

Founded in 2016, Scale AI plays a crucial role in training advanced AI models by providing accurately labelled data. The company also operates a platform where researchers exchange AI-related insights, with contributors spanning over 9,000 locations worldwide.

In response to the investigation, a company spokesperson stated that the majority of payments to contributors are made on time, with 90% of payment-related inquiries resolved within three days.

Valued at $14 billion following a late-stage funding round last year, Scale AI serves major clients such as OpenAI, Cohere, Microsoft, and Morgan Stanley.

The company insists that contributor feedback is overwhelmingly positive and maintains that it prioritises fair pay and support for its workforce.

For more information on these topics, visit diplomacy.edu.

US House subpoenas Alphabet over content moderation

The US House Judiciary Committee subpoenaed Alphabet on Thursday, demanding information on its communications with the Biden administration regarding content moderation policies. The committee, led by Republican Jim Jordan, also requested similar communications with external companies and groups.

The subpoena specifically seeks details on discussions about restricting or banning content related to US President Donald Trump, Elon Musk, COVID-19, and other conservative topics. Republicans have accused Big Tech companies of suppressing conservative viewpoints, with the Federal Trade Commission warning that coordinating policies or misleading users could breach the law.

Last year, Meta Platforms acknowledged pressure from the Biden administration to censor content, but Alphabet has not publicly distanced itself from similar claims. A Google spokesperson stated the company will demonstrate its independent approach to policy enforcement.

For more information on these topics, visit diplomacy.edu.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

Meanwhile, the regulator has previously fined platforms like X (formerly Twitter) and Telegram for failing to meet reporting requirements, with both companies planning to appeal.

For more information on these topics, visit diplomacy.edu.

Singapore expands charges in server fraud case

Singapore authorities have filed additional charges against three men in a widening investigation into server fraud, which may involve AI chips, court documents revealed on Thursday.

The suspects are accused of deceiving tech firms Dell and Super Micro by falsely representing the final destination of the servers they purchased.

Officials have stated the servers could contain Nvidia chips but have not confirmed whether they fall under US export controls.

The case is part of a broader probe involving 22 individuals and companies suspected of fraudulent transactions. US authorities are also investigating whether Chinese AI firm DeepSeek has been using restricted American chips.

Singapore has confirmed that some servers were sent to Malaysia, where authorities are now examining if any laws were violated.

Two suspects, Aaron Woon and Alan Wei, face additional fraud charges, while a third, Li Ming, had his earlier charge updated to include an alleged offence dating back to 2023.

Lawyers representing the men have either declined to comment or stated that the case is complex due to its international scope.

Meanwhile, Singapore police have seized 42 electronic devices and are analysing bank statements as they work with foreign law enforcement to trace the movement of funds.

For more information on these topics, visit diplomacy.edu.

Google warns breakup plans could harm economy and security

Google has urged the Trump administration to reconsider efforts to break up the company as part of ongoing antitrust lawsuits.

Google made its case in a meeting with government officials last week, according to a source familiar with the matter. The United States Department of Justice (DOJ) is pursuing two cases against Google, focusing on its dominance in search and advertising technology.

Executives at Google have expressed concerns that proposed remedies, including the potential divestment of the Chrome browser and changes to search engine agreements, could negatively impact the American economy and national security.

The DOJ has not yet commented on the discussions. A trial to determine appropriate remedies is set for April, with a final ruling expected in August.

President Trump’s administration is expected to take a softer approach to antitrust enforcement than its predecessor.

Industry experts believe this could lead to adjustments in the DOJ’s stance on breaking up Google, potentially reshaping the legal battle over its market power.

For more information on these topics, visit diplomacy.edu.

Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.

For more information on these topics, visit diplomacy.edu.

Malaysia works with US and Singapore on Nvidia chip probe

Malaysian authorities are investigating whether local laws were breached in the shipment of servers that may have contained advanced AI chips subject to US export controls.

The case is linked to a fraud investigation in Singapore, where three men were recently charged over transactions involving servers supplied by US firms. The equipment was allegedly transferred to Malaysia and may have included Nvidia’s artificial intelligence chips.

The Malaysian government confirmed it is working closely with the United States and Singapore to determine whether US-sanctioned chips were involved. Authorities aim to find effective measures to prevent such transactions from violating trade regulations.

Singapore has not specified whether the chips in question fall under US export restrictions but acknowledged they were used in servers that passed through Malaysia.

US officials are also examining whether DeepSeek, a Chinese AI firm whose technology gained attention in January, has been using restricted US chips.

Washington has tightened controls on AI chip exports to China, and any unauthorised shipments could lead to further scrutiny of supply chains in the region.

For more information on these topics, visit diplomacy.edu.

Microsoft executive says firms are lagging in AI adoption

Microsoft’s UK boss has warned that many companies are ‘stuck in neutral’ when it comes to AI, with a significant number of private and public sector organisations lacking any formal AI strategy. According to a Microsoft survey of nearly 1,500 senior leaders and 1,440 employees in the UK, more than half of executives report that their organisations have no official AI plan. Additionally, many recognise a growing productivity gap between employees using AI and those who are not.

Darren Hardman, Microsoft’s UK chief executive, stated that some companies are caught in the experimentation phase rather than fully deploying AI. Microsoft, a major backer of OpenAI, has been promoting AI deployment in workplaces through autonomous AI agents designed to perform tasks without human intervention. Early adopters, like consulting giant McKinsey, are already using AI agents for tasks such as scheduling meetings.

Hardman also discussed AI’s potential impact on jobs, with the Tony Blair Institute estimating that AI could displace up to 3 million UK jobs, though the net job loss will likely be much lower as new roles are created. He compared AI’s transformative impact on the workplace to how the internet revolutionised retail, creating roles like data analysts and social media managers. Hardman also backed proposed UK copyright law reforms, which would allow tech companies to use copyright-protected work for training AI models, arguing that the changes could drive economic growth and support AI development.

For more information on these topics, visit diplomacy.edu.