Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.
Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.
Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple large language models, including Microsoft Copilot and Perplexity.
With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.
It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.
By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Judges and justice officials from 11 countries across Asia are gathering in Bangkok for a regional training focused on AI and the rule of law. The event, held from 12 to 14 November 2025, is jointly organised by UNESCO, UNDP, and the Thailand Institute of Justice.
Participants will examine how AI can enhance judicial efficiency while upholding human rights and ethical standards.
The training, based on UNESCO’s Global Toolkit on AI and the Rule of Law for the Justice Sector, will help participants assess both the benefits and challenges of AI in judicial processes. Officials will address algorithmic bias, transparency, and accountability to ensure AI tools uphold justice.
AI technologies are already transforming case management, legal research, and court administration. However, experts warn that unchecked use may amplify bias or weaken judicial independence.
The workshop aims to strengthen regional cooperation and train officials to assess AI systems using legal and ethical principles. The initiative supports UN SDG 16 and advances UNESCO’s mission to promote ethical, inclusive, and trustworthy governance of AI.
Canada and Denmark have signed a joint statement to deepen collaboration in quantum research and innovation.
The agreement, announced at the European Quantum Technologies Conference 2025 in Copenhagen, reflects both countries’ commitment to advancing quantum science responsibly while promoting shared values of openness, ethics and excellence.
Under the partnership, the two nations will enhance research and development ties, encourage open data sharing, and cultivate a skilled talent pipeline. They also aim to boost global competitiveness in quantum technologies, fostering new opportunities for market expansion and secure supply chains.
Canadian Minister Mélanie Joly highlighted that the cooperation showcases a shared ambition to accelerate progress in health care, clean energy and defence.
Denmark’s Minister for Higher Education and Science, Christina Egelund, described Canada as a vital partner in scientific innovation, while Canada’s Minister Evan Solomon stressed the agreement’s role in empowering researchers to deliver breakthroughs that shape the future of quantum technologies.
Both Canada and Denmark are recognised as global leaders in quantum science, working together through initiatives such as the NATO Transatlantic Quantum Community.
The partnership supports Canada’s National Quantum Strategy, launched in 2023, and reinforces the two countries’ shared goal of driving innovation for sustainable growth and collective security.
Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.
Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.
Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.
With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.
Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.
Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.
PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.
Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.
Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.
Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.
A High Court judge has warned that a solicitor who pushed an expert to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach, citing a recent survey in which 14% of experts said they would accept such terms, a practice he described as unacceptable.
Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 for summaries with confidential prompts. There is no duty to disclose, but the judgment must be the judge’s own.
Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.
Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.
For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within courts, limited, non-determinative AI may assist, but outcomes must remain human.
India’s data centre market is expanding rapidly, driven by accelerating AI adoption, mobile internet growth, and massive foreign investment from firms such as Google, Amazon and Meta. The sector is projected to expand 77% by 2027, with billions more expected to be spent on capacity by 2030.
Rapid expansion of energy-hungry and water-intensive facilities is creating serious sustainability challenges, particularly in water-scarce urban clusters like Mumbai, Hyderabad and Bengaluru. Experts warn that by 2030, India’s data centre water consumption could reach 358 billion litres, risking shortages for local communities and critical services.
Authorities and industry players are exploring solutions including treated wastewater, low-stress basin selection, and zero-water cooling technologies to mitigate environmental impact. Officials also highlight the need to mandate renewable energy use to balance India’s digital ambitions with decarbonisation goals.
Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.
The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.
Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.
The agreement, announced at Ci4CC’s annual symposium in Miami Beach, US, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.
Large language models (LLMs) are increasingly used to grade, hire, and moderate text. UZH research shows that evaluations shift when participants are told who wrote identical text, revealing source bias. Agreement stayed high only when authorship was hidden.
When told a human or another AI wrote it, agreement fell and biases surfaced. The strongest bias was against text attributed to Chinese authors, appearing across all models, including one developed in China, with sharp score drops even for well-reasoned arguments.
AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.
Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.
As interest in AI grows, many companies that previously cut staff are now rehiring some of the same employees. Visier data shows about 5.3 percent of laid-off workers have returned, a small but rising share.
The findings suggest AI adoption has not yet replaced human labour at the scale some executives anticipated.
Visier’s analysis of 2.4 million employees across 142 global companies indicates that AI tools often automate parts of tasks rather than entire jobs. Experts say organisations are realising that AI implementation costs, including infrastructure, data systems, and security, often exceed initial projections.
Many companies now rely on experienced staff to manage or complement AI tools effectively.
Industry observers highlight a gap between expectations and outcomes. MIT research shows around 95 percent of firms have yet to see measurable financial returns from AI investments.
Cost-cutting measures such as layoffs also carry hidden expenses, with estimates suggesting companies spend $1.27 for every $1 saved when reducing staff.
Executives are urged to carefully assess AI’s true impact before assuming workforce reductions will deliver long-term savings. Rehiring former employees has become a practical response to bridge skill gaps and ensure technology integration succeeds without disrupting operations.