TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in America by separating its US business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU cyber rules target global tech dependence

The European Union has proposed new cybersecurity rules aimed at reducing reliance on high-risk technology suppliers, particularly from China. Policymakers argue that existing voluntary measures have failed to curb dependence on vendors such as Huawei and ZTE.

The proposal would introduce binding obligations for telecom operators across the bloc to phase out Chinese equipment. At the same time, officials have warned that reliance on US cloud and satellite services also poses security risks for Europe.

Despite increased funding and expanded certification plans, divisions remain among member states. Countries including Germany and France support stricter sovereignty rules, while others favour continued partnerships with US technology firms.

Analysts say the lack of consensus could weaken the impact of the reforms. Without clear enforcement and investment in European alternatives, Europe may struggle to reduce dependence on both China and the US.

OpenAI ads in ChatGPT signal a shift in conversational advertising

AI firm OpenAI plans to introduce advertising within ChatGPT for logged-in adult users, marking a structural shift in how brands engage audiences through conversational interfaces.

Ads would be clearly labelled and positioned alongside responses, aiming to replace interruption-driven formats with context-aware brand suggestions delivered during moments of active user intent.

Industry executives describe conversational AI advertising as a shift from exposure to earned presence, in which brands must provide clarity or utility to justify inclusion.

Experts warn that trust remains fragile, as AI recommendations carry the weight of personal consultation, and undisclosed commercial influence could prompt rapid user disengagement instead of passive ad avoidance.

Regulators and marketers alike highlight risks linked to dark patterns, algorithmic framing and subtle manipulation within AI-mediated conversations.

As conversational systems begin to shape discovery and decision-making, media planning is expected to shift toward intent-led engagement, authority-building, and transparency, reshaping digital advertising economics beyond search rankings and impression-based buying.

The House of Lords backs social media ban for under-16s

The House of Lords, the upper house of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.

Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.

Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and limit positive benefits, urging more vigorous enforcement of existing safety rules.

The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.

Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.

Experts note that verified customer information linked to a recognised brand can enable compelling phishing and fraud campaigns powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, increasing the likelihood of delayed victimisation.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

Generative AI fuels surge in online fraud risks in 2026

Online scams, driven by the growing use of generative AI, are expected to surge in 2026 and overtake ransomware as the top cyber-risk, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

UK launches software security ambassadors scheme

The UK government has launched the Software Security Ambassadors Scheme to promote stronger software security practices nationwide. The initiative is led by the Department for Science, Innovation and Technology and the National Cyber Security Centre.

In the UK, participating organisations commit to championing the new Software Security Code of Practice within their industries. Signatories agree to lead by example through secure development, procurement and advisory practices, while sharing lessons learned to strengthen national cyber resilience.

The scheme aims to improve transparency and risk management across UK digital supply chains. Software developers are encouraged to embed security throughout the whole lifecycle, while buyers are expected to incorporate security standards into procurement processes.

Officials say the approach supports the UK’s broader economic and security goals by reducing cyber risks and increasing trust in digital technologies. The government believes that better security practices will help UK businesses innovate safely and withstand cyber incidents.

AI becomes mainstream in UK auto buying behaviour, survey shows

A recent survey reported by AM-Online reveals that approximately 66 per cent of UK car buyers use artificial intelligence in some form as part of their vehicle research and buying process.

AI applications cited include chatbots for questions and comparisons, recommendation systems for model selection, and virtual advisors that help consumers weigh options based on preferences and budget.

Industry commentators suggest that this growing adoption reflects broader digital transformation trends in automotive retail, with dealerships and manufacturers increasingly deploying AI technologies to personalise sales experiences, streamline research and nurture leads.

The integration of AI tools is seen as boosting customer engagement and efficiency, but it also raises questions about privacy and data protection, transparency and the future role of human sales advisors as digital tools become more capable.

Davos roundtable calls for responsible AI growth

Leaders from the tech industry, academia, and policy circles met at a TIME100 roundtable in Davos, Switzerland, on 21 January to discuss how to pursue rapid AI progress without sacrificing safety and accountability. The conversation, hosted by TIME CEO Jessica Sibley, focused on how AI should be built, governed, and used as it becomes more embedded in everyday life.

A major theme was the impact of AI-enabled technology on children. Jonathan Haidt, an NYU Stern professor and author of The Anxious Generation, argued that the key issue is not total avoidance but the timing and habits of exposure. He suggested children do not need smartphones until at least high school, emphasising that delaying access can help protect brain development and executive function.

Yoshua Bengio, a professor at the Université de Montréal and founder of LawZero, said responsible innovation depends on a deeper scientific understanding of AI risks and stronger safeguards built into systems from the start. He pointed to two routes: consumer and societal demand for ‘built-in’ protections, and government involvement that could include indirect regulation through liability frameworks, such as requiring insurance for AI developers and deployers.

Participants also challenged the idea that geopolitical competition should justify weaker guardrails. Bengio argued that even rivals share incentives to prevent harmful outcomes, such as AI being used for cyberattacks or the development of biological weapons, and said coordination between major powers is possible, drawing a comparison to Cold War-era cooperation on nuclear risk reduction.

The roundtable linked AI risks to lessons from social media, particularly around attention-driven business models. Bill Ready, CEO of Pinterest, said engagement optimisation can amplify divisions and ‘prey’ on negative human impulses, and described Pinterest’s shift away from maximising view time toward maximising user outcomes, even if it hurts short-term metrics.

Several speakers argued that today’s alignment approach is too reactive. Stanford computer scientist Yejin Choi warned that models trained on the full internet absorb harmful patterns and then require patchwork fixes, urging exploration of systems that learn moral reasoning and human values more directly from the outset.

Kay Firth-Butterfield, CEO of Good Tech Advisory, added that wider AI literacy, shaped by input from workers, parents, and other everyday users, should underpin future certification and trust in AI tools.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Microsoft restores Exchange and Teams after Microsoft 365 disruption

US tech giant Microsoft investigated a service disruption affecting Exchange Online, Teams and other Microsoft 365 services after users reported access and performance problems.

The incident, which began late on Wednesday, affected core communication tools used by enterprises for daily operations.

Engineers initially focused on diagnosing the fault, with Microsoft indicating that a potential third-party networking issue may have interfered with access to Outlook and Teams.

During the disruption, users experienced intermittent connectivity failures, latency and difficulties signing in across parts of the Microsoft 365 ecosystem.

Microsoft later confirmed that service access had been restored, although no detailed breakdown of the outage scope was provided.

The incident underlined the operational risks associated with cloud productivity platforms and the importance of transparency and resilience in enterprise digital infrastructure.
