SoftBank sold its entire Nvidia stake for $5.83 billion and part of its T-Mobile holding for $9.17 billion, raising cash for OpenAI. Alongside a margin loan on Arm, the proceeds fund a $22.5 billion commitment and other projects. Nvidia slipped 2%; SoftBank referred to it as asset monetisation, not a valuation call.
Executives said the goal is to give investors exposure to AI opportunities while preserving balance-sheet strength, including backing for the ABB robotics deal. Analysts called the quarter’s funding need unusually large but consistent with an AI pivot. SoftBank characterised the sale as capital recycling, not a retreat from Nvidia.
SoftBank has a history with Nvidia: the Vision Fund invested in 2017 and exited in 2019, and group ventures still use its technology. Projects include the $500 billion Stargate data centre programme, built on accelerated computing. Shares remain volatile amid concerns about an AI bubble and questions over the timing of capital deployment.
Results reflected the shift, with $19 billion in Vision Fund gains helping to double profit in fiscal Q2. SoftBank says its OpenAI stake will rise from 4% to 11% after the recapitalisation, with scope to increase further. The group aims to avoid setting a controlling threshold while scaling exposure to AI.
Management stressed liquidity and shareholder access, flagging a four-for-one stock split and ‘very safe’ funding plans. Further portfolio monetisation is possible as it backs AI infrastructure and applications at scale. Investors will closely monitor execution risks and the timing of returns from OpenAI and its adjacent bets.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AMD has completed the acquisition of MK1, a California-based company specialising in high-speed inference and reasoning-based AI technologies.
The move marks a significant step in AMD’s strategy to strengthen AI performance and efficiency across hardware and software layers. MK1’s Flywheel and comprehension engines are designed to optimise AMD’s Instinct GPUs, offering scalable, accurate, and cost-efficient AI reasoning.
The MK1 team will join the AMD Artificial Intelligence Group, where their expertise will advance AMD’s enterprise AI software stack and inference capabilities.
Handling over one trillion tokens daily, MK1’s systems are already deployed at scale, providing traceable and efficient AI solutions for complex business processes.
By combining MK1’s advanced AI software innovation with AMD’s compute power, the acquisition enhances AMD’s position in the enterprise and generative AI markets, supporting its goal of delivering accessible, high-performance AI solutions globally.
A global study by Deezer and Ipsos highlights growing challenges and concerns around AI-generated music. Surveying 9,000 participants in eight countries, the study found that 97% could not distinguish between AI-generated music and human-created tracks.
Over half of the respondents reported discomfort at being unable to distinguish between the two.
The study also reveals strong support for transparency and fair treatment of artists. Eighty percent of respondents believe AI music should be clearly labelled, while most oppose using copyrighted material to train AI models.
Concerns over income losses are significant, with 70% saying AI tracks could threaten artists’ earnings, and nearly two-thirds fearing a reduction in creativity and musical quality.
Deezer now receives around 40,000 fully AI-generated tracks daily, representing over one-third of its daily uploads. Deezer says it is the only streaming service that detects and clearly labels AI-generated music.
All AI tracks are excluded from algorithmic recommendations and editorial playlists, and manipulated streams are removed from royalty calculations.
The study marks a key moment for the music industry, stressing clear labelling, ethical AI use, and protecting artists’ livelihoods alongside innovation. Deezer’s proactive approach sets new industry standards for transparency and fairness in AI music streaming.
Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.
Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.
Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple large language models, including Microsoft Copilot and Perplexity.
With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.
It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.
By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.
A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.
The judgement found that OpenAI had violated copyright even in a 15-word passage, setting a low threshold for infringement. Additionally, the court dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.
It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.
Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.
OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.
The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.
The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.
Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.
Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.
The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.
Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.
By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach balances innovation with strong protections, reinforcing public trust in AI.
Judges and justice officials from 11 countries across Asia are gathering in Bangkok for a regional training focused on AI and the rule of law. The event, held from 12 to 14 November 2025, is jointly organised by UNESCO, UNDP, and the Thailand Institute of Justice.
Participants will examine how AI can enhance judicial efficiency while upholding human rights and ethical standards.
The training, based on UNESCO’s Global Toolkit on AI and the Rule of Law for the Justice Sector, will help participants assess both the benefits and challenges of AI in judicial processes. Officials will address algorithmic bias, transparency, and accountability to ensure AI tools uphold justice.
AI technologies are already transforming case management, legal research, and court administration. However, experts warn that unchecked use may amplify bias or weaken judicial independence.
The workshop aims to strengthen regional cooperation and train officials to assess AI systems using legal and ethical principles. The initiative supports UN SDG 16 and advances UNESCO’s mission to promote ethical, inclusive, and trustworthy governance of AI.
An AI-generated print by artist Elias Marrow was covertly hung on a gallery wall at the National Museum Cardiff, remaining on display until staff were alerted and removed it. The work, titled Empty Plate, shows a young boy in a school uniform holding a plate and was reportedly seen by hundreds of visitors.
Marrow said the piece represents Wales in 2025 and examines how public institutions decide what is worth displaying. He defended the stunt as participatory rather than vandalism, emphasising that AI is a natural evolution of artistic tools.
Visitors photographed the artwork, and some initially thought it was performance art, while the museum confirmed it had no prior knowledge of the piece. Marrow has carried out similar unsanctioned displays at Bristol Museum and Tate Modern, highlighting his interest in challenging traditional curation.
In a recent statement, the UN warned that the growing field of neurotechnology, which encompasses devices and software that can measure, access, or manipulate the nervous system, poses new risks to human rights.
The UN highlighted how such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.
It warned that without robust regulation, the benefits of neurotechnology may come with costs such as privacy violations, unequal access and intrusive commercial uses.
The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.
Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.
Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.
Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.
With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.
Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.