Vision AI Companion turns Samsung TVs into conversational AI platforms

Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.

Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.

Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple large language models, including Microsoft Copilot and Perplexity.

With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.

It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.

By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI loses German copyright lawsuit over song lyrics reproduction

A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.

The judgement found that OpenAI had violated copyright even in a 15-word passage, setting a low threshold for infringement. Additionally, the court dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.

It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.

Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.

OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.

The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Artist secretly hangs AI print Cardiff museum

An AI-generated print by artist Elias Marrow was secretly placed on a gallery wall at the National Museum Cardiff before staff were alerted, and it was removed. The work, titled Empty Plate, shows a young boy in a school uniform holding a plate and was reportedly seen by hundreds of visitors.

Marrow said the piece represents Wales in 2025 and examines how public institutions decide what is worth displaying. He defended the stunt as participatory rather than vandalism, emphasising that AI is a natural evolution of artistic tools.

Visitors photographed the artwork, and some initially thought it was performance art, while the museum confirmed it had no prior knowledge of the piece. Marrow has carried out similar unsanctioned displays at Bristol Museum and Tate Modern, highlighting his interest in challenging traditional curation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Oracle and Ci4CC join forces to advance AI in cancer research

Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.

The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.

Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.

The agreement, announced at Ci4CC’s annual symposium in Miami Beach US, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade, hire, and moderate text. UZH research shows that evaluations shift when participants are told who wrote identical text, revealing source bias. Agreement stayed high only when authorship was hidden.

When told a human or another AI wrote it, agreement fell, and biases surfaced. The strongest was anti-Chinese across all models, including a model from China, with sharp drops even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cars.com launches Carson AI to transform online car shopping

The US tech company, Cars.com, has unveiled Carson, a multilingual AI search engine designed to revolutionise the online car shopping experience.

Instead of relying on complex filters, Carson interprets natural language queries such as ‘a reliable car for a family of five’ or ‘a used truck under $30,000’, instantly producing targeted results tailored to each shopper’s needs.

A new AI feature that already powers around 15% of all web and mobile searches on Cars.com, with early data showing that users engaging with Carson return to the site twice as often and save three times more vehicles.

They also generate twice as many leads and convert 30% more frequently from search to vehicle detail pages.

Cars.com aims to simplify decision-making for its 25 million monthly shoppers, 70% of whom begin their search without knowing which brand or model to choose.

Carson helps these undecided users explore lifestyle, emotional and practical preferences while guiding them through Cars.com’s award-winning listings.

Further updates will introduce AI-generated summaries, personalised comparisons and search refinement suggestions.

Cars.com’s parent company, Cars Commerce, plans to expand its use of AI-driven tools to strengthen its role at the forefront of automotive retail innovation, offering a more efficient and intelligent marketplace for both consumers and dealerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Appfigures revises iOS estimates as Sora’s launch on Android launch leaps ahead

Sora’s Android launch outpaced its iOS debut, garnering an estimated 470,000 first-day installs across seven markets, according to Appfigures. Broader regional availability, plus the end of invite-only access in top markets, boosted uptake.

OpenAI’s iOS rollout was limited to the US and Canada via invitations, which capped early growth despite strong momentum. The iOS app nevertheless surpassed one million installs in its first week and still ranks highly in the US App Store’s Top Free chart.

Revised Appfigures modelling puts day-one iOS installs at ~110,000 (up from 56,000), with ~69,300 from the US. On Android, availability spans the US, Canada, Japan, South Korea, Taiwan, Thailand, and Vietnam. First-day US installs were ~296,000, showing sustained demand beyond the iOS launch.

Sora allows users to generate videos from text prompts and animate themselves or friends via ‘Cameos’, sharing the results in a TikTok-style vertical feed. Engagement features for creation and discovery are driving word of mouth and repeat use across both platforms.

Competition in mobile AI video and assistants is intensifying, with Meta AI expanding its app in Europe on the same day. Market share will hinge on geographic reach, feature velocity, creator tools, and distribution via app store charts and social feeds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!