Trump signs order to advance TikTok spin-off tied to his allies

President Donald Trump has signed an executive order that paves the way for TikTok to remain in the US, despite a law requiring its Chinese owner, ByteDance, to divest the app or face a ban. The order grants negotiators 120 more days to finalise a deal, marking the fifth time Trump has delayed enforcement of the law passed by Congress and upheld by the Supreme Court.

The deal would transfer most of TikTok’s US operations to a new company controlled by American investors. Among them are Oracle co-founder Larry Ellison, private equity firm Silver Lake, and Susquehanna International’s Jeff Yass, a prominent Republican donor. An Emirati consortium known as MGX would also participate, reflecting the Gulf’s growing role in global tech investments. ByteDance would keep a minority stake and retain control of the app’s recommendation algorithm, a sticking point for the lawmakers who originally pushed for the sale.

Speaking from the Oval Office, Trump described the incoming management as ‘very smart Americans’ and said Chinese President Xi Jinping had approved the arrangement. Asked whether TikTok would favour pro-Trump content, the president joked that he would prefer a ‘100 percent MAGA’ feed but insisted the app would remain open to all perspectives.

Critics argue the arrangement undermines the very law that forced ByteDance to sell. By preserving a Chinese stake and leaving ByteDance in charge of the algorithm, the deal raises questions about whether the national security concerns that motivated Congress have truly been addressed. Some legal scholars say the White House’s role in handpicking buyers aligned with Trump’s political allies only adds to fears of political influence over a platform used by 170 million Americans.

The negotiations also highlight TikTok’s enormous influence and profit potential. Investors worldwide, including Rupert Murdoch’s Fox Corp., expressed interest in a slice of the app. TikTok’s algorithm, which will still be trained in China but adapted with US data, will remain central to the platform’s success. Oracle will continue to oversee American user data and review the algorithm for security risks.

The unusual process has fueled debate about political power and digital influence. Critics like California Governor Gavin Newsom warned that placing TikTok in the hands of Trump-friendly investors could create new risks of propaganda. Others note that the deal reflects less of a clear national security strategy and more of a high-stakes convergence of money, politics, and global tech rivalry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to introduce mandatory digital ID for work

The UK government has announced plans to make digital ID mandatory for proving the right to work by the end of the current Parliament, expected no later than 2029. Prime Minister Sir Keir Starmer said the scheme would tighten controls on illegal employment while offering wider benefits for citizens.

The digital ID will be stored on smartphones in a format similar to contactless payment cards or the NHS app. It is expected to include core details such as name, date of birth, nationality or residency status, and a photo.

The system aims to provide a more consistent and secure alternative to paper-based checks, reducing the risk of forged documents and streamlining verification for employers.

Officials believe the scheme could extend beyond employment, potentially simplifying access to driving licences, welfare, childcare, and tax records.

A consultation later in the year will decide whether additional data, such as residential addresses, should be integrated. The government has also pledged accessibility for citizens unable to use smartphones.

The proposal has faced political opposition, with critics warning of privacy risks, administrative burdens, and fears of creating a de facto compulsory ID card system.

Despite these objections, the government argues that digital ID will strengthen border controls, counter the shadow economy, and modernise public service access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn expands AI training with default data use

LinkedIn will use member profile data to train its AI systems by default from 3 November 2025. The policy, already in place in the US and select markets, will now extend to more regions and applies to members aged 18 and over; those who prefer not to share their information must opt out manually via account settings.

According to LinkedIn, the types of data that may be used include account details, email addresses, payment and subscription information, and service-related data such as IP addresses, device IDs, and location information.

Once the setting is disabled, profile data will no longer be used for AI training, although information collected earlier may remain in the system. Users can request the removal of past data through a Data Processing Objection Form.

Meta and X have already adopted similar practices in the US, allowing their platforms to use user-generated posts for AI training. LinkedIn insists its approach complies with privacy rules but leaves the choice in members’ hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Content Signals Policy by Cloudflare lets websites signal data use preferences

Cloudflare has announced the launch of its Content Signals Policy, a new extension to robots.txt that allows websites to express their preferences for how their data is used after access. The policy is designed to help creators maintain open content while preventing misuse by data scrapers and AI trainers.

The new tool enables website owners to specify, in a machine-readable format, whether they permit search indexing, AI input, or AI model training. Operators can set each signal to ‘yes,’ ‘no,’ or leave it blank to indicate no stated preference, giving them fine-grained control over how their content may be used.
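As a rough illustration, a robots.txt entry using these signals might look something like the sketch below. This is an illustrative example based only on the signal names described above (search indexing, AI input, AI training); the exact directive syntax should be checked against Cloudflare’s published policy text.

    # Content signals: yes = permitted, no = not permitted, omitted = no stated preference
    Content-Signal: search=yes, ai-input=no, ai-train=no

    User-Agent: *
    Allow: /

In this sketch the operator permits search indexing but declines AI input and AI training, while the usual User-Agent rules continue to govern crawler access itself.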

Cloudflare says the policy tackles the free-rider problem, where scraped content is reused without credit. With bot traffic set to surpass human traffic by 2029, it calls for clear, standard rules to protect creators and keep the web open.

Customers already using Cloudflare’s managed robots.txt will have the policy automatically applied, with a default setting that allows search but blocks AI training. Sites without a robots.txt file can opt in to publish the human-readable policy text and add their own preferences when ready.

Cloudflare emphasises that content signals are not enforcement mechanisms but a means of communicating expectations. It is releasing the policy under a CC0 licence to encourage broad adoption and is working with standards bodies to ensure the rules are recognised across the industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

The MHRA stressed that clear rules are vital as AI spreads across healthcare, already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing the adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it will ensure patients benefit from life-saving technologies ‘quickly and safely’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Expanded AI model support arrives in Microsoft 365 Copilot

Microsoft is expanding the AI models powering Microsoft 365 Copilot by adding Anthropic’s Claude Sonnet 4 and Claude Opus 4.1. Customers can now choose between OpenAI and Anthropic models for research, deep reasoning, and agent building across Microsoft 365 tools.

The Researcher agent can now run on Anthropic’s Claude Opus 4.1, giving users a choice of models for in-depth analysis. The Researcher draws on web sources, trusted third-party data, and internal work content, including emails, chats, meetings, and files, to deliver tailored, multistep reasoning.

Claude Sonnet 4 and Opus 4.1 are also available in Copilot Studio, enabling the creation of enterprise-grade agents with flexible model selection. Users can mix Anthropic, OpenAI, and Azure Model Catalogue models to power multi-agent workflows, automate tasks, and manage agents efficiently.

Claude in Researcher is rolling out today to Microsoft 365 Copilot-licensed customers through the Frontier Program. Customers can also use Claude models in Copilot Studio to build and orchestrate agents.

Microsoft says this launch is part of its strategy to bring the best AI innovation across the industry to Copilot. More Anthropic-powered features will roll out soon, strengthening Copilot’s role as a hub for enterprise AI and workflow transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quantum-classical hybrid outperforms classical methods, according to HSBC and IBM study

HSBC and IBM have reported the first empirical evidence of the value of quantum computers in solving real-world problems in bond trading. Their joint trial showed a 34% improvement in predicting the likelihood of a trade being filled at a quoted price compared to classical-only techniques.

The trial used a hybrid approach that combined quantum and classical computing to optimise quote requests in over-the-counter bond markets. Production-scale trading data from the European corporate bond market was run on IBM quantum computers to predict winning probabilities.

The results demonstrate how quantum techniques can outperform standard methods in addressing the complex and dynamic factors in algorithmic bond trading. HSBC said the findings offer a competitive edge and could redefine how the financial industry prices customer inquiries.

Philip Intallura, HSBC Group Head of Quantum Technologies, called the trial ‘a ground-breaking world-first in bond trading’. He said the results show that quantum computing is on the cusp of delivering near-term value for financial services.

IBM’s latest Heron processor played a key role in the workflow, augmenting classical computation to uncover hidden pricing signals in noisy data. IBM said such work helps unlock new algorithms and applications that could transform industries as quantum systems scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New Meta feature floods users with AI slop in TikTok-style feed

Meta has launched a new short-form video feed called Vibes inside its Meta AI app and on meta.ai, offering users endless streams of AI-generated content. The format mimics TikTok and Instagram Reels but consists entirely of algorithmically generated clips.

Mark Zuckerberg unveiled the feature in an Instagram post showcasing surreal creations, from fuzzy creatures leaping across cubes to a cat kneading dough and even an AI-generated Egyptian woman taking a selfie in antiquity.

Users can generate videos from scratch or remix existing clips by adding visuals, music, or stylistic effects before posting to Vibes, sharing via direct message, or cross-posting to Instagram and Facebook Stories.

Meta partnered with Midjourney and Black Forest Labs to support the early rollout, though it plans to transition to its own AI models.

The announcement, however, was derided by users, who criticised the platform for adding yet more ‘AI slop’ to already saturated feeds. One top comment under Zuckerberg’s post bluntly read: ‘gang nobody wants this’.

The launch comes as Meta ramps up its AI investment to catch up with rivals OpenAI, Anthropic, and Google DeepMind.

Earlier this year, the company consolidated its AI teams into Meta Superintelligence Labs and reorganised them into four units focused on foundation models, research, product integration, and infrastructure.

Despite the strategic shift, many question whether Vibes adds value or deepens user fatigue with generative content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to actions against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

More social media platforms could face under-16 ban in Australia

Australia is set to expand its under-16 social media ban, with platforms such as WhatsApp, Reddit, Twitch, Roblox, Pinterest, Steam, Kick, and Lego Play potentially joining the list. The eSafety Commissioner, Julie Inman Grant, has written to 16 companies asking them to self-assess whether they fall under the ban.

The current ban already includes Facebook, TikTok, YouTube, and Snapchat, making it a world-first policy. The focus will be on platforms with large youth user bases, where risks of harm are highest.

Despite the bold move, experts warn the legislation may be largely symbolic without concrete enforcement mechanisms. Age verification remains a significant hurdle, with Canberra acknowledging that companies will likely need to self-regulate. An independent study found that age checks can be done ‘privately, efficiently and effectively,’ but noted there is no one-size-fits-all solution.

Firms failing to comply could face fines of up to AU$49.5 million (US$32.6 million). Some companies have called the law ‘vague’ and ‘rushed.’ Meanwhile, new rules will soon take effect to limit access to harmful but legal content, including online pornography and AI chatbots capable of sexually explicit dialogue. Roblox has already agreed to strengthen safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!