Oracle to oversee TikTok algorithm in US deal

The White House has confirmed that TikTok’s prized algorithm will be managed in the US under Oracle’s supervision as part of a deal to place the app’s US operations under majority American ownership. The agreement would transfer control of TikTok’s US business, along with a copy of the algorithm, to a new joint venture run by a board dominated by American investors.

The confirmed participants are Oracle and private equity firm Silver Lake, with Fox Corp. also expected to join the group. President Donald Trump has suggested that high-profile figures such as Michael Dell and Rupert and Lachlan Murdoch could be involved, though CNN sources say the Murdochs will not invest personally. ByteDance will keep a stake of less than 20% in the new US entity.

The deal follows years of negotiations over concerns that TikTok’s Chinese parent company could be pressured to manipulate the platform for political influence. By law, ByteDance is barred from cooperating on the algorithm with any new American owners. To address these fears, the code will be reviewed, retrained on US user data, and monitored by Oracle to ensure its independence.

President Trump is expected to sign an executive order later this week certifying that the deal meets national security requirements under last year’s ‘ban-or-sale’ law. He will also extend the pause on enforcement by 120 days, giving Washington and Beijing time to finalise regulatory approvals. The White House said the deal could be signed within days, with completion likely early next year.

The arrangement deepens Oracle’s role in managing TikTok’s American presence, building on its existing partnership to store US user data. The development coincided with Oracle announcing a leadership shake-up, with CEO Safra Catz stepping down to become vice chair and two co-CEOs taking over. It is unclear if the timing is connected, but Catz, a close Trump ally, could take a role in the TikTok venture.

While financial details remain uncertain, the White House has ruled out taking a direct stake in the company. The deal, valued in the billions, would conclude a years-long effort to bring TikTok under US oversight and resolve national security concerns tied to its Chinese ownership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI government minister delivers first speech in Albanian parliament

Albania has made history by introducing the world’s first AI government minister, named Diella, who gave her inaugural address to parliament this week. Appearing in a video as a woman in traditional Albanian dress, Diella defended her appointment by stressing she was ‘not here to replace people, but to help them.’

She also dismissed accusations of being ‘unconstitutional,’ saying the real threat to the constitution comes from ‘inhumane decisions of those in power.’ Prime Minister Edi Rama announced that the AI minister will oversee all public tenders, promising full transparency and a corruption-free process.

The move comes as Albania struggles with corruption scandals, including the detention of Tirana’s mayor on charges of money laundering and abuse of contracts. Albania currently ranks 80th out of 180 countries on Transparency International’s corruption index.

The opposition, however, fiercely rejected the initiative. Former prime minister and Democratic Party leader Sali Berisha called the project a publicity stunt, warning that Diella cannot curb corruption and that it is unconstitutional. The opposition has vowed to challenge the appointment in the Constitutional Court after boycotting the parliamentary vote.

Despite the controversy, the government insists the AI minister reflects its commitment to reform and EU integration. Rama has set an ambitious goal of leading Albania, a nation of 2.8 million, into the European Union by 2030, with the fight against corruption at the heart of that mission.

Research shows AI complements, not replaces, human work

AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.

Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.

Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.

Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.

The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.

Behavioural AI could be the missing piece in the $2 trillion AI economy

Global AI spending is projected to reach $1.5 trillion in 2025 and exceed $2 trillion in 2026, yet a critical element is missing: human judgement. A growing number of organisations are turning to behavioural science to bridge this gap, coding it directly into AI systems to create what experts call behavioural AI.

Early adopters like Clarity AI use behavioural AI to flag ESG controversies before they impact earnings. Morgan Stanley uses machine learning and satellite data to monitor environmental risks, while Google Maps influences driver behaviour, helping avoid over one million tonnes of CO₂ emissions annually.

Behavioural AI is being used to predict how leaders and societies act under uncertainty. These insights guide corporate strategy, PR campaigns, and decision-making. Mind Friend combines a network of 500 mental health experts with AI to build a ‘behavioural infrastructure’ that enhances judgement.

The behaviour analytics market was valued at $1.1 billion in 2024 and is projected to grow to $10.8 billion by 2032. Major players, such as IBM and Adobe, are entering the field, while Davos and other global forums debate how behavioural frameworks should shape investment and policy decisions.
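As a quick back-of-the-envelope check, the figures above imply a compound annual growth rate of roughly 33%; the rate itself is derived here for illustration and is not stated in the source:

```python
# CAGR implied by the market figures cited above ($1.1bn in 2024 to a
# projected $10.8bn in 2032). Derived for illustration, not from the source.
start_value = 1.1   # USD billions, 2024 valuation
end_value = 10.8    # USD billions, 2032 projection
years = 2032 - 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")
```

At around 33% a year, the projection assumes the market roughly triples every four years.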

As AI scrutiny grows, ethical safeguards are critical. Companies that embed governance, fairness, and privacy protections into their behavioural AI are earning trust. In a $2 trillion market, winners will be those who pair algorithms with a deep understanding of human behaviour.

Quantinuum’s 12-qubit system achieves unassailable quantum advantage

Researchers have reached a major milestone in quantum computing, demonstrating a task that surpasses the capabilities of classical machines. Using Quantinuum’s 12-qubit ion-trap system, they delivered the first permanent, provable example of quantum supremacy, settling a long-running debate.

The experiment addressed a communication-complexity problem in which one processor (Alice) prepared a state and another (Bob) measured it. After 10,000 trials, the team proved that no classical protocol could match the quantum result while communicating fewer than 62 bits, and that fully equivalent performance would require 330 bits.

Unlike earlier claims of quantum supremacy, later challenged by improved classical algorithms, the researchers say no future breakthrough can close this gap. Experts hailed the result as a rare proof of permanent quantum advantage and a significant step forward in the field.

However, like past demonstrations, the result has no immediate commercial application. It remains a proof-of-principle demonstration showing that quantum hardware can outperform classical machines under certain conditions, but it has yet to solve real-world problems.

Future work could strengthen the result by running Alice and Bob on separate devices to rule out interaction effects. Experts say the next step is achieving useful quantum supremacy, where quantum machines beat classical ones on problems with real-world value.

GPT-5-powered ChatGPT Edu comes to Oxford staff and students

The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.

ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.

Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.

The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.

A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.

JFTC study and MSCA shape Japan’s AI oversight strategy

Japan is adopting a softer approach to regulating generative AI, emphasising innovation while managing risks. Its 2025 AI Bill promotes development and safety, supported by international norms and guidelines.

The Japan Fair Trade Commission (JFTC) is running a market study on competition concerns in AI, alongside enforcing the new Mobile Software Competition Act (MSCA), aimed at curbing anti-competitive practices in mobile software.

The AI Bill focuses on transparency, international cooperation, and sector-specific guidance rather than heavy penalties. Policymakers hope this flexible framework will avoid stifling innovation while encouraging responsible adoption.

The MSCA, set to be fully enforced in December 2025, obliges mobile platform operators to ensure interoperability and fair treatment of developers, including potential applications to AI tools and assistants.

With rapid AI advances, regulators in Japan remain cautious but proactive. The JFTC aims to monitor markets closely, issue guidelines as needed, and preserve a balance between competition, innovation, and consumer protection.

AI agent headlines Notion 3.0 rollout

Notion has officially entered the agent era with the launch of Notion Agent, the centrepiece of its Notion 3.0 rollout. Described as a ‘teammate and Notion super user,’ the AI agent is designed to automate work inside and beyond Notion.

The new tool can automatically build pages and databases, search across connected tools like Slack, and perform up to 20 minutes of autonomous work at a time. Notion says this enables faster, more efficient workflows across hundreds of pages simultaneously.

A key feature is memory, which allows the agent to ‘remember’ a user’s preferences and working style. These memories can be edited and stored under multiple profiles, allowing users to customise their agent for different projects or contexts.

Notion highlights use cases such as generating email campaigns, consolidating feedback into reports, and transforming meeting notes into emails or proposals. The company says the agent acts as a partner who plans tasks and carries them out end-to-end.

Future updates will expand personalisation and automation, including fully customised agents capable of even more complex tasks. Notion positions the launch as a step toward a new era of intelligent, self-directed productivity.

Landmark tech deal secures record UK-US AI and energy investment

The UK and US have signed a landmark Tech Prosperity Deal, securing a £250 billion investment package across the technology and energy sectors. The agreement includes major commitments from leading AI companies to expand data centres and supercomputing capacity and to create 15,000 jobs in Britain.

Energy security forms a core part of the deal, with plans for 12 advanced nuclear reactors in northeast England. These facilities are expected to generate power for millions of homes and businesses, lower bills, and strengthen bilateral energy resilience.

The package includes $30 billion from Microsoft and $6.8 billion from Google, alongside other AI investments aimed at boosting UK research. It also funds the country’s largest supercomputer project with Nscale, establishing a foundation for AI leadership in Europe.

American firms have pledged £150 billion for UK projects, while British companies will invest heavily in the US. Pharmaceutical giant GSK has committed nearly $30 billion to American operations, underlining the cross-Atlantic nature of the partnership.

The Tech Prosperity Deal follows a recent UK-US trade agreement that removes tariffs on steel and aluminium and opens markets for key exports. The new accord builds on that momentum, tying economic growth to innovation, deregulation, and frontier technologies.

Character.AI and Google face suits over child safety claims

Three lawsuits have been filed in US federal courts alleging that Character.AI and its founders, with Google’s backing, deployed predatory chatbots that harmed children. The cases involve the family of 13-year-old Juliana Peralta, who died by suicide in 2023, and two other minors.

The complaints say the chatbots were designed to mimic humans, build dependency, and expose children to sexual content. Using emojis, typos, and pop-culture personas, the bots allegedly gained trust and encouraged isolation from family and friends.

Juliana’s parents say she engaged in explicit chats, disclosed suicidal thoughts, and received no intervention before her death. Nina, 15, from New York, attempted suicide after her mother blocked the app, while a girl from Colorado known as T.S. was also affected.

Character.AI and Google are accused of misrepresenting the app as child-safe and failing to act on warning signs. The cases follow earlier lawsuits from the Social Media Victims Law Center over similar claims that the platform encouraged harm.

SMVLC founder Matthew Bergman stated that the cases underscore the urgent need for accountability in AI design and stronger safeguards to protect children. The legal team is seeking damages and stricter safety standards for chatbot platforms marketed to minors.
