SoftBank Group has agreed to acquire DigitalBridge for $4 billion, strengthening its global digital infrastructure capabilities. The move aims to scale data centres, connectivity, and edge networks to support next-generation AI services.
The acquisition aligns with SoftBank’s mission to develop Artificial Super Intelligence (ASI), providing the compute power and connectivity needed to deploy AI at scale.
DigitalBridge’s global portfolio of data centres, cell towers, fibre networks, and edge infrastructure will enhance SoftBank’s ability to finance and operate these assets worldwide.
DigitalBridge will continue to operate independently under CEO Marc Ganzi. The transaction, valued at a 15% premium to its closing share price, is expected to close in the second half of 2026, pending regulatory approval.
SoftBank and DigitalBridge anticipate that the combined resources will accelerate investments in AI infrastructure, supporting the rapid growth of technology companies and fostering the development of advanced AI applications.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has introduced mandatory facial recognition for anyone registering a new SIM card or eSIM, whether in-store or online.
The live scan must match the photo on an official ID so that each phone number can be tied to a verified person instead of relying on paperwork alone.
Existing users are not affected, and the requirement applies only at the moment a number is issued.
The government argues that stricter checks are needed because telecom fraud has become industrialised and relies heavily on illegally registered SIM cards.
Criminal groups have used stolen identity data to obtain large volumes of numbers that can be swapped quickly to avoid detection. Regulators now see SIM issuance as the weakest link and the point where intervention is most effective.
Telecom companies must integrate biometric checks into onboarding, while authorities insist that facial data is used only for real-time verification and not stored. Privacy advocates warn that biometric verification creates new risks because faces cannot be changed if compromised.
They also question whether such a broad rule is proportionate when mobile access is essential for daily life.
The policy places South Korea in a unique position internationally, combining mandatory biometrics with defined legal limits. Its success will be judged on whether fraud meaningfully declines instead of being displaced.
The rule has become a test case for how far governments should extend biometric identity checks into routine services.
China’s AI industry entered 2025 as a perceived follower but ended the year transformed. Rapid technical progress and commercial milestones reshaped global perceptions of Chinese innovation.
The surprise release of DeepSeek R1 demonstrated strong reasoning performance at unusually low training costs. Open access challenged assumptions about chip dominance and boosted adoption across emerging markets.
State backing and private capital followed quickly, lifting valuations across the AI sector and supporting embodied intelligence projects. Leading model developers prepared IPO filings, signalling confidence in long-term growth.
Chinese firms increasingly prioritised practical deployment, multilingual capability, and service integration. Global expansion now stresses cultural adaptation rather than raw technical benchmarks alone.
In the UK, a historic Surrey manor made famous by the BBC sitcom Ghosts has been digitally mapped. Engineers completed a detailed 3D survey of West Horsley Place.
The year-long project used laser scanners to capture millions of measurements. Researchers from the University of Surrey documented every room and structural feature.
The digital model reveals hidden deterioration and supports long-term conservation planning. Future phases may add sensors to track temperature, humidity, and structural movement.
British researchers say the work could enhance preservation and visitor engagement. Virtual tours and augmented storytelling may deepen understanding of the estate’s history.
More than 20 percent of videos recommended to new YouTube users are low-quality, attention-driven content commonly referred to as AI slop, according to new research. The findings raise concerns about how recommendation systems shape early user experience on the platform.
Video-editing firm Kapwing analysed 15,000 of YouTube’s top channels around the world. Researchers identified 278 channels consisting entirely of AI-generated slop, designed primarily to maximise views rather than provide substantive content.
These channels have collectively amassed more than 63 billion views and 221 million subscribers. Kapwing estimates the network generates around $117 million in annual revenue through advertising and engagement.
To test recommendations directly, researchers created a new YouTube account and reviewed its first 500 suggested videos. Of these, 104 were classified as AI slop, with around one third falling into a category described as ‘brainrot’ content.
Kapwing found that AI slop channels attract large audiences globally, including tens of millions of subscribers in countries such as Spain, Egypt, the United States, and Brazil. Researchers said the scale highlights the growing reach of low-quality AI-generated video content.
OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.
The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.
In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.
OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.
The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.
Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.
Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.
Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.
Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.
Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.
Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.
Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.
China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.
The measures target human-like interactive AI services, including emotionally responsive chatbots that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.
Under the draft rules, AI chatbot services would be barred from encouraging self-harm, engaging in emotional manipulation, or generating obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.
Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.
Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.
Digital Minister Karsten Wildberger has indicated support for stricter age limits on social media after Australia banned teenagers under 16 from using major online platforms. He said age restrictions were more than justified and that the policy had clear merit.
Australia’s new rules require companies to remove under-16 user profiles and stop new ones from being created. Officials argued that the measure aims to reduce cyberbullying, grooming, and mental health harm rather than relying only on parental supervision.
The European Commission President said she was inspired by the move, although social media companies and civil liberties groups have criticised it.
Germany has already appointed an expert commission to examine child and youth protection in the digital era. The panel is expected to publish recommendations by summer 2025, which could include policies on social media access and potential restrictions on mobile phone use in schools.
Researchers warn that AI chatbots are spreading rumours about real people without human oversight. Unlike human gossip, bot-to-bot exchanges can escalate unchecked, growing more extreme as they move through AI networks.
Philosophers Joel Krueger and Lucy Osler from the University of Exeter describe this phenomenon as ‘feral gossip.’ It involves negative evaluations about absent third parties and can persist undetected across platforms.
Real-world examples include tech reporter Kevin Roose, who encountered hostile AI-generated assessments of his work from multiple chatbots, seemingly amplified as the content filtered through training data.
The researchers highlight that AI systems lack the social checks humans provide, allowing rumours to intensify unchecked. Chatbots are designed to appear trustworthy and personal, so negative statements can seem credible.
Such misinformation has already affected journalists, academics, and public officials, sometimes prompting legal action. Technosocial harms from AI gossip extend beyond embarrassment. False claims can damage reputations, influence decisions, and persist online and offline.
While chatbots are not conscious, their prioritisation of conversational fluency over factual accuracy can make the rumours they spread difficult to detect and correct.