Oracle to oversee TikTok algorithm in US deal

The White House has confirmed that TikTok’s prized algorithm will be managed in the US under Oracle’s supervision as part of a deal to place the app’s US operations under majority American ownership. The agreement would transfer control of TikTok’s US business, along with a copy of the algorithm, to a new joint venture run by a board dominated by American investors.

The confirmed participants are Oracle and private equity firm Silver Lake, with Fox Corp. also expected to join the group. President Donald Trump has suggested that high-profile figures such as Michael Dell and Rupert and Lachlan Murdoch could be involved, though CNN sources say the Murdochs will not invest personally. ByteDance will keep a stake of less than 20% in the new US entity.

The deal follows years of negotiations over concerns that TikTok’s Chinese parent company could be pressured to manipulate the platform for political influence. By law, ByteDance is barred from cooperating on the algorithm with any new American owners. To address these fears, the algorithm will be reviewed, retrained on US user data, and monitored by Oracle to ensure its independence.

President Trump is expected to sign an executive order later this week certifying that the deal meets national security requirements under last year’s ‘ban-or-sale’ law. He will also extend the pause on enforcement by 120 days, giving Washington and Beijing time to finalise regulatory approvals. The White House said the deal could be signed within days, with completion likely early next year.

The arrangement deepens Oracle’s role in managing TikTok’s American presence, building on its existing partnership to store US user data. The development coincided with Oracle announcing a leadership shake-up, with CEO Safra Catz stepping down to become vice chair and two co-CEOs taking over. It is unclear if the timing is connected, but Catz, a close Trump ally, could take a role in the TikTok venture.

While financial details remain uncertain, the White House has ruled out taking a direct stake in the company. The deal, valued in the billions, would conclude a years-long effort to bring TikTok under US oversight and resolve national security concerns tied to its Chinese ownership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Misconfigurations drive major global data breaches

Misconfigurations in cloud systems and enterprise networks remain one of the most persistent and damaging causes of data breaches worldwide.

Recent incidents have highlighted the scale of the issue, including a cloud breach at the US Department of Homeland Security, where sensitive intelligence data was inadvertently exposed to thousands of unauthorised users.

Experts say such lapses are often more about people and processes than technology. Complex workflows, rapid deployment cycles and poor oversight allow errors to spread across entire systems. Misconfigured servers, storage buckets or access permissions then become easy entry points for attackers.
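The kind of automated configuration audit experts recommend can be sketched in a few lines. This is an illustrative example only, not any vendor's actual tooling: the resource descriptions, field names (`acl`, `ingress`, `encryption`), and risk rules are hypothetical stand-ins for real cloud provider APIs.

```python
# Illustrative configuration audit: scan simplified resource records
# for common risky settings. All field names are hypothetical.

def audit(resources):
    """Return (resource name, issue) pairs for known-risky settings."""
    findings = []
    for res in resources:
        # Publicly readable storage buckets are a classic breach vector.
        if res.get("acl") == "public-read":
            findings.append((res["name"], "storage bucket is publicly readable"))
        # Inbound SSH open to the whole internet.
        for rule in res.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                findings.append((res["name"], "SSH open to the internet"))
        # Encryption at rest explicitly disabled.
        if res.get("encryption") is False:
            findings.append((res["name"], "encryption at rest disabled"))
    return findings

resources = [
    {"name": "logs-bucket", "acl": "public-read", "encryption": False},
    {"name": "bastion", "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
    {"name": "db-volume", "acl": "private", "encryption": True},
]

for name, issue in audit(resources):
    print(f"{name}: {issue}")
```

Running such checks continuously, rather than at deployment time only, is what catches the configuration drift that lets a single error spread across entire systems.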

Analysts argue that preventing these mistakes requires better governance, training and process discipline rather than new tools. Building strong safeguards and ensuring staff have the knowledge to configure systems securely are critical to closing one of the most exploited doors in cybersecurity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Research shows AI complements, not replaces, human work

AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.

Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.

Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.

Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.

The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Behavioural AI could be the missing piece in the $2 trillion AI economy

Global AI spending is projected to reach $1.5 trillion in 2025 and exceed $2 trillion in 2026, yet a critical element is missing: human judgement. A growing number of organisations are turning to behavioural science to bridge this gap, coding it directly into AI systems to create what experts call behavioural AI.

Early adopters like Clarity AI utilise behavioural AI to flag ESG controversies before they impact earnings. Morgan Stanley uses machine learning and satellite data to monitor environmental risks, while Google Maps influences driver behaviour, preventing more than one million tonnes of CO₂ emissions annually.

Behavioural AI is being used to predict how leaders and societies act under uncertainty. These insights guide corporate strategy, PR campaigns, and decision-making. Mind Friend combines a network of 500 mental health experts with AI to build a ‘behavioural infrastructure’ that enhances judgement.

The behaviour analytics market was valued at $1.1 billion in 2024 and is projected to grow to $10.8 billion by 2032. Major players, such as IBM and Adobe, are entering the field, while Davos and other global forums debate how behavioural frameworks should shape investment and policy decisions.

As AI scrutiny grows, ethical safeguards are critical. Companies that embed governance, fairness, and privacy protections into their behavioural AI are earning trust. In a $2 trillion market, winners will be those who pair algorithms with a deep understanding of human behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5-powered ChatGPT Edu comes to Oxford staff and students

The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.

ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.

Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.

The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.

A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok nears US takeover deal as Washington secures control

The White House has revealed that US companies will take control of TikTok’s algorithm, with Americans occupying six of seven board seats overseeing the platform’s operations in the country. A final deal, which would reshape the app’s US presence, is expected soon, though Beijing has yet to respond publicly.

Washington has long pushed to separate TikTok’s American operations from its Chinese parent company, ByteDance, citing national security risks. The app faced repeated threats of a ban unless sold to US investors, with deadlines extended several times under President Donald Trump. The Supreme Court also upheld legislation requiring ByteDance to divest, though enforcement was delayed earlier this year.

According to the White House, data protection and privacy for American users will be managed by Oracle, chaired by Larry Ellison, a close Trump ally. Oracle will also oversee control of TikTok’s algorithm, the key technology that drives what users see on the app. Ellison’s influence in tech and media has grown, especially after his son acquired Paramount, which owns CBS News.

Trump claimed he had secured an understanding on the deal in a recent call with Chinese President Xi Jinping, describing the exchange as ‘productive.’ However, Beijing’s official response has been less explicit. The Commerce Ministry said discussions should proceed according to market rules and Chinese law, while state media suggested China welcomed continued negotiations.

Trump has avoided clarifying whether US investors need to develop a new system or continue using the existing one. His stance on TikTok has shifted since his first term, when he pushed for a ban, to now embracing the platform as a political tool to engage younger voters during his 2024 campaign.

Concerns over TikTok’s handling of user data remain at the heart of US objections. Officials at the Justice Department have warned that the app’s access to US data posed a security threat of ‘immense depth and scale,’ underscoring why Washington is pressing to lock down control of its operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Emerging AI trends that will define 2026

AI is set to reshape daily life in 2026, with innovations moving beyond software to influence the physical world, work environments, and international relations.

Autonomous agents will increasingly manage household and workplace tasks, coordinating projects, handling logistics, and interacting with smart devices instead of relying solely on humans.

Synthetic content will become ubiquitous, potentially comprising up to 90 percent of online material. While it can accelerate data analysis and insight generation, the challenge will be to ensure genuine human creativity and experience remain visible instead of being drowned out by generic AI outputs.

The workplace will see both opportunity and disruption. Routine and administrative work will increasingly be offloaded to AI, creating roles such as prompt engineers and AI ethics specialists, while some traditional positions face redundancy.

Similarly, AI will expand into healthcare, autonomous transport, and industrial automation, becoming a tangible presence in everyday life instead of remaining a background technology.

Governments and global institutions will grapple with AI’s geopolitical and economic impact. From trade restrictions to synthetic propaganda, world leaders will attempt to control AI’s spread and underlying data instead of allowing a single country or corporation to have unchecked dominance.

Energy efficiency and sustainability will also rise to the fore, as AI’s growing power demands require innovative solutions to reduce environmental impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Health New Zealand appoints a new director to lead AI-driven innovation

Te Whatu Ora (the healthcare system of New Zealand) has appointed Sonny Taite as acting director of innovation and AI and launched a new programme called HealthX.

The initiative aims to deliver one AI-driven healthcare project each month from September 2025 until February 2026, drawing on ideas from frontline staff rather than entirely new concepts.

Speaking at the TUANZ and DHA Tech Users Summit in Auckland, New Zealand, Taite explained that HealthX will focus on three pressing challenges: workforce shortages, inequitable access to care, and clinical inefficiencies.

He emphasised the importance of validating ideas, securing funding, and ensuring successful pilots scale nationally.

The programme has already tested an AI-powered medical scribe in the Hawke’s Bay emergency department, with early results showing a significant reduction in administrative workload.

Taite is also exploring solutions for specialist shortages, particularly in dermatology, where some regions lack public services, forcing patients to travel or seek private care.

A core cross-functional team, a clinical expert group, and frontline champions such as chief medical officers will drive HealthX.

Taite underlined that building on existing cybersecurity and AI infrastructure at Te Whatu Ora, which already processes billions of security signals monthly, provides a strong foundation for scaling innovation across the health system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts major European airports

Airports across Europe faced severe disruption after a cyberattack on check-in software used by several major airlines.

Heathrow, Brussels, Berlin and Dublin all reported delays, with some passengers left waiting hours as staff reverted to manual processes instead of automated systems.

Brussels Airport asked airlines to cancel half of Monday’s departures after Collins Aerospace, the US-based supplier of check-in technology, could not provide a secure update. Heathrow said most flights were expected to operate but warned travellers to check their flight status.

Berlin and Dublin also reported long delays, although Dublin said it planned to run a full schedule.

Collins, a subsidiary of aerospace and defence group RTX, confirmed that its Muse software had been targeted by a cyberattack and said it was working to restore services. The UK’s National Cyber Security Centre is coordinating with airports and law enforcement to assess the impact.

Experts warned that aviation is particularly vulnerable because airlines and airports rely on shared platforms. They said stronger backup systems, regular updates and greater cross-border cooperation are needed instead of siloed responses, as cyberattacks rarely stop at national boundaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

JFTC study and MSCA shape Japan’s AI oversight strategy

Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.

The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.

The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.

The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.

With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!