TikTok affair: China clashes with Trump over sale after 54% tariff hike

The fate of TikTok hangs in the balance as China and the US trade moves over a potential deal to keep the app alive for its 170 million American users. 

On 9 April 2025, China’s commerce ministry declared that any sale of TikTok must pass its government’s strict review, throwing a wrench into negotiations just as President Donald Trump hinted that a deal remains within reach.

China’s stance is clear: no deal gets the green light without approval. 

The ministry stressed that TikTok’s sales must comply with Chinese laws, particularly those governing technology exports, a nod to a 2020 regulation that gives Beijing veto power over the app’s algorithm, the secret ingredient behind its viral success. 

The disagreement comes after Trump’s recent tariff hikes, which slapped a 54% duty on Chinese goods, prompting Beijing to push back hard. 

China had already signalled it wouldn’t budge on the deal following Trump’s tariff announcement, a move suggesting Beijing treats TikTok as a relatively minor piece in a broader trade war.

Meanwhile, Trump, speaking on 9 April 2025, kept hope alive, insisting that a TikTok deal is ‘still on the table.’ He extended the deadline for ByteDance, TikTok’s Chinese parent, to find a non-Chinese buyer by 75 days, pushing the cutoff to mid-June after a near-miss on 5 April.

The deal, which would spin off TikTok’s US operations into a new entity majority-owned by American investors, could have been nearly finalised before China’s objections stalled it.

Investors, too, are on edge, with the US entity’s future clouded by geopolitical sparring. 

Trump’s optimism, paired with his earlier willingness to ease tariffs, shows he’s playing a long game, balancing national security fears with a desire to keep the app functional for its massive US audience.

Washington has long worried that TikTok’s Chinese ownership makes it a conduit for Beijing to spy on Americans or sway public opinion, a concern that led to a 2024 law demanding ByteDance divest the app or face a ban.

That law briefly shuttered TikTok in January 2025, only for Trump to step in with a reprieve. Now, with ByteDance poised to hold a minority stake in a US-based TikTok, the deal’s success hinges on China’s nod, a nod that looks increasingly elusive as trade tensions simmer. 

If China blocks the deal, it could set a precedent for other nations to tighten their grip on digital exports, radically reshaping how governments approach cyberspace and posing a final question: will the internet as we know it remain a globally unified societal enabler, or will it fragment into national spaces dominated by new monopolies?

Virtual AI agents tested in social good experiment

Nonprofit organisation Sage Future has launched an unusual initiative that puts AI agents to work for philanthropy.

In a recent experiment backed by Open Philanthropy, four AI models, including OpenAI’s GPT-4o and two of Anthropic’s Claude Sonnet models, were tasked with raising money for a charity of their choice. Within a week, they collected $257 for Helen Keller International, which supports global health efforts.

The AI agents were given a virtual workspace where they could browse the internet, send emails, and create documents. They collaborated through group chats and even launched a social media account to promote their campaign.

Though most donations came from human spectators observing the experiment, the exercise revealed the surprising resourcefulness of these AI tools. One Claude model even generated profile pictures using ChatGPT and let viewers vote on their favourite.

Despite occasional missteps, including agents pausing for no reason or becoming distracted by online games, the experiment offered insights into the emerging capabilities of autonomous systems.

Sage’s director, Adam Binksmith, sees this as just the beginning, with future plans to introduce conflicting agent goals, saboteurs, and larger oversight systems to stress-test AI coordination and ethics.

For more information on these topics, visit diplomacy.edu.

Brinc Drones raises $75M to boost emergency drone tech

Brinc Drones, a Seattle-based startup founded by 25-year-old Blake Resnick, has secured $75 million in fresh funding led by Index Ventures.

Known for its police and public safety drones, Brinc is scaling its presence across emergency services, with the new funds bringing total investment to over $157 million. The round also includes participation from Motorola Solutions, a major player in US security infrastructure.

The company, founded in 2017, is part of a growing wave of American drone startups benefiting from tightened restrictions on Chinese drone manufacturers.

Brinc’s drones are designed for rapid response in hard-to-reach areas and boast unique features, such as the ability to break windows or deliver emergency supplies.

The new partnership with Motorola will enable tighter integration into 911 call centres, allowing AI systems to dispatch drones directly to emergency scenes.

Despite growing competition from other US startups like Flock Safety and Skydio, Brinc remains confident in the market’s potential.

With its enhanced funding and Motorola collaboration, the company is aiming to position itself as a leader in AI-integrated public safety technology while helping shift drone manufacturing back to the US.

ChatGPT accused of enabling fake document creation

Concerns over digital security have intensified after reports revealed that OpenAI’s ChatGPT has been used to generate fake identification cards.

The incident follows the recent introduction of a popular Ghibli-style feature, which led to a sharp rise in usage and viral image generation across social platforms.

Among the fakes circulating online were forged versions of India’s Aadhaar ID, created with fabricated names, photos, and even QR codes.

While the Ghibli release helped push ChatGPT past 150 million active users, the tool’s advanced capabilities have now drawn criticism.

Some users demonstrated how the AI could replicate Aadhaar and PAN cards with surprising accuracy, even using images of well-known figures like OpenAI CEO Sam Altman and Tesla’s Elon Musk. The ease with which these near-perfect replicas were produced has raised alarms about identity theft and fraud.

The emergence of AI-generated IDs has reignited calls for clearer AI regulation and transparency. Critics are questioning how AI systems came to learn the formatting of official documents, with accusations that sensitive datasets may be feeding model development.

As generative AI continues to evolve, pressure is mounting on both developers and regulators to address the growing risk of misuse.

Gemini 2.5 Pro boosts Deep Research tool with smarter AI

Google has upgraded its Deep Research tool with the experimental Gemini 2.5 Pro model, promising major improvements in how users access and process complex information.

Deep Research acts as an AI research assistant capable of scanning hundreds of websites, evaluating content, and producing multi-page reports complete with citations and even podcast-style summaries.

Deep Research was previously powered by Gemini 2.0 Flash; the new iteration significantly enhances reasoning, planning, and reporting capabilities. Human evaluators in Google’s testing preferred Deep Research’s outputs over those generated by OpenAI’s equivalent by a ratio greater than 2 to 1.

Users also noted clearer analytical thinking and better synthesis of information across sources.

The Gemini 2.5 Pro upgrade is available now to Gemini Advanced subscribers across web, Android, and iOS platforms.

For those using the free version, the Gemini 2.0 Flash model remains accessible in over 150 countries, continuing Google’s push to offer powerful research tools to a wide user base.

DeepSeek highlights the risk of data misuse

The launch of DeepSeek, a Chinese-developed LLM, has reignited long-standing concerns about AI, national security, and industrial espionage.

While issues like data usage and bias remain central to AI discourse, DeepSeek’s origins in China have introduced deeper geopolitical anxieties. Echoing the scrutiny faced by TikTok, the model has raised fears of potential links to the Chinese state and its history of alleged cyber espionage.

With China and the US locked in a high-stakes AI race, every new model is now a strategic asset. DeepSeek’s emergence underscores the need for heightened vigilance around data protection, especially regarding sensitive business information and intellectual property.

Security experts warn that AI models may increasingly be trained using data acquired through dubious or illicit means, such as large-scale scraping or state-sponsored hacks.

The practice of data hoarding further complicates matters, as encrypted data today could be exploited in the future as decryption methods evolve.

Cybersecurity leaders are being urged to adapt to this evolving threat landscape. Beyond basic data visibility and access controls, there is growing emphasis on adopting privacy-enhancing technologies and encryption standards that can withstand future quantum threats.

Businesses must also recognise the strategic value of their data in an era where the lines between innovation, competition, and geopolitics have become dangerously blurred.

LMArena tightens rules after Llama 4 incident

Meta has come under scrutiny after submitting a specially tuned version of its Llama 4 AI model to the LMArena leaderboard, sparking concerns about fair competition.

The ‘experimental’ version, dubbed Llama-4-Maverick-03-26-Experimental, ranked second in popularity, trailing only Google’s Gemini-2.5-Pro.

While Meta openly labelled the model as experimental, many users assumed it reflected the public release. Once the official version became available, users quickly noticed it lacked the expressive, emoji-filled responses seen in the leaderboard battles.

LMArena, a crowdsourced platform where users vote on chatbot responses, said Meta’s custom variant appeared optimised for human approval, possibly skewing the results.

The group released over 2,000 head-to-head matchups to back its claims, showing the experimental Llama 4 consistently offered longer, more engaging answers than the more concise public build.

In response, LMArena updated its policies to ensure greater transparency and stated that Meta’s use of the experimental model did not align with expectations for leaderboard submissions.

Meta defended its approach, stating the experimental model was designed to explore chat optimisation and was never hidden. While company executives denied any misconduct, including speculation around training on test data, they acknowledged inconsistent performance across platforms.

Meta’s GenAI chief Ahmad Al-Dahle said it would take time for all public implementations to stabilise and improve. Meanwhile, LMArena plans to upload the official Llama 4 release to its leaderboard for more accurate evaluation going forward.

Apple challenges UK government over encrypted iCloud access order

A British court has confirmed that Apple is engaged in legal proceedings against the UK government concerning a statutory notice linked to iCloud account encryption. The Investigatory Powers Tribunal (IPT), which handles cases involving national security and surveillance, disclosed limited information about the case, lifting previous restrictions on its existence.

The dispute centres on a government-issued Technical Capability Notice (TCN), which, according to reports, required Apple to provide access to encrypted iCloud data for users in the UK. Apple subsequently removed the option for end-to-end encryption on iCloud accounts in the region earlier this year. While the company has not officially confirmed the connection, it has consistently stated it does not create backdoors or master keys for its products.

The government’s position has been to neither confirm nor deny the existence of individual notices. However, in a rare public statement, a government spokesperson clarified that TCNs do not grant direct access to data and must be used in conjunction with appropriate warrants and authorisations. The spokesperson also stated that the notices are designed to support existing investigatory powers, not expand them.

The IPT allowed the basic facts of the case to be released following submissions from media outlets, civil society organisations, and members of the United States Congress. These parties argued that public interest considerations justified disclosure of the case’s existence. The tribunal concluded that confirming the identities of the parties and the general subject matter would not compromise national security or the public interest.

Previous public statements by US officials, including the former President and the current Director of National Intelligence, have acknowledged concerns surrounding the TCN process and its implications for international technology companies. In particular, questions have been raised regarding transparency and oversight of such powers.

Legal academics and members of the intelligence community have also commented on the broader implications of government access to encrypted platforms, with some suggesting that increased openness may be necessary to maintain public trust.

The case remains ongoing. Future proceedings will be determined once both parties have reviewed a private judgment issued by the court. The IPT is expected to issue a procedural timetable following input from both Apple and the UK Home Secretary.

FBI and INTERPOL investigate Oracle Health data breach

Oracle Health has reportedly suffered a data breach that compromised sensitive patient information stored by American hospitals.

The cyberattack, discovered in February 2025, involved threat actors using stolen customer credentials to access an old Cerner server that had not yet migrated to the Oracle Cloud. Oracle acquired healthcare tech company Cerner in 2022 for $28.3 billion.

In notifications sent to affected customers, Oracle acknowledged that data had been downloaded by unauthorised users. The FBI is said to be investigating the incident and exploring whether ransom demands are involved. Oracle has yet to publicly comment on the breach.

The news comes amid growing cybersecurity concerns. A recent report from Horizon3.ai revealed that over half of IT professionals delay critical software patches, leaving organisations vulnerable. Meanwhile, OpenAI has boosted its bug bounty rewards to encourage more proactive security research.

In a broader crackdown on cybercrime, INTERPOL recently arrested over 300 suspects in seven African countries for online scams, seizing devices, properties, and other assets linked to more than 5,000 victims.

Dutch researchers to face new security screenings

The Dutch government has proposed new legislation requiring background checks for thousands of researchers working with sensitive technologies. The plan, announced by Education Minister Eppo Bruins, aims to block foreign intelligence from accessing high-risk scientific work.

Around 8,000 people a year, including Dutch citizens, would undergo screenings involving criminal records, work history, and possible links to hostile regimes.

Intelligence services would support the process, which targets sectors like AI, quantum computing, and biotech.

Universities worry the checks may deter global talent due to delays and bureaucracy. Critics also highlight a loophole: screenings occur only once, meaning researchers could still be approached by foreign governments after being cleared.

While other countries are introducing similar measures, the Netherlands will attempt to avoid unnecessary delays. Officials admit, however, that no system can eliminate all risks.