Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.
Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.
Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple third-party AI services, including Microsoft Copilot and Perplexity.
With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.
It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.
By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A Munich regional court has ruled that OpenAI infringed copyright in a landmark case brought by the German rights society GEMA. The court held OpenAI liable for reproducing and memorising copyrighted lyrics without authorisation, rejecting its claim to operate as a non-profit research institute.
The judgement found that OpenAI had violated copyright even for a passage as short as 15 words, setting a low threshold for infringement. The court also dismissed arguments about accidental reproduction and technical errors, emphasising that both reproduction and memorisation require a licence.
It also denied OpenAI’s request for a grace period to make compliance changes, citing negligence.
Judges concluded that the company could not rely on proportionality defences, noting that licences were available and alternative AI models exist.
OpenAI’s claim that EU copyright law failed to foresee large language models was rejected, as the court reaffirmed that European law ensures a high level of protection for intellectual property.
The ruling marks a significant step for copyright enforcement in the age of generative AI and could shape future litigation across Europe. It also challenges technology companies to adapt their training and licensing practices to comply with existing legal frameworks.
The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.
Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.
Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.
The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.
Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.
By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.
Canada and Denmark have signed a joint statement to deepen collaboration in quantum research and innovation.
The agreement, announced at the European Quantum Technologies Conference 2025 in Copenhagen, reflects both countries’ commitment to advancing quantum science responsibly while promoting shared values of openness, ethics and excellence.
Under the partnership, the two nations will enhance research and development ties, encourage open data sharing, and cultivate a skilled talent pipeline. They also aim to boost global competitiveness in quantum technologies, fostering new opportunities for market expansion and secure supply chains.
Canadian Minister Mélanie Joly highlighted that the cooperation showcases a shared ambition to accelerate progress in health care, clean energy and defence.
Denmark’s Minister for Higher Education and Science, Christina Egelund, described Canada as a vital partner in scientific innovation, while Minister Evan Solomon stressed the agreement’s role in empowering researchers to deliver breakthroughs that shape the future of quantum technologies.
Both Canada and Denmark are recognised as global leaders in quantum science, working together through initiatives such as the NATO Transatlantic Quantum Community.
The partnership supports Canada’s National Quantum Strategy, launched in 2023, and reinforces the two countries’ shared goal of driving innovation for sustainable growth and collective security.
Finland will introduce stricter reporting obligations for crypto asset service providers from 2026 as part of international efforts to enhance tax transparency.
The move aligns with the OECD’s Crypto Asset Reporting Framework (CARF), which aims to standardise the exchange of crypto-related tax information globally. More than 70 countries and jurisdictions have already committed to the framework.
Finnish and foreign crypto providers must collect and report users’ transaction data, including purchases, sales, and transfers. The Finnish Tax Administration will begin receiving annual reports in 2027, enabling cross-border exchange under the CARF and the amended EU DAC8 directive.
The government proposal, due for parliamentary debate in autumn 2025, would extend Finland’s reporting requirements beyond international standards. Under the proposal, providers would also have to supply data allowing authorities to calculate capital gains and losses for Finnish residents and estates.
The Tax Administration will review and update its guidance on financial account reporting to align with these changes.
Despite the increased flow of information, individuals trading crypto assets will still need to declare profits, losses, and related income in their annual tax returns. The first international exchange of crypto asset data is expected to take place by September 2027.
In a recent statement, the UN warned that the growing field of neurotechnology, which encompasses devices and software able to measure, access, or manipulate the nervous system, poses new risks to human rights.
The UN highlighted how such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.
It warned that without robust regulation, the benefits of neurotechnology may come with costs such as privacy violations, unequal access and intrusive commercial uses.
The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.
Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.
Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.
Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.
With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.
Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.
Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group. An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.
PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.
Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.
Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.
Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.
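The behaviour-based approach GTIG recommends can be illustrated with a toy heuristic: rather than matching code signatures (which self-rewriting malware defeats), a defender correlates behaviours across a process’s events, such as contacting a generative-AI API and then dropping a new script. This is a minimal sketch under stated assumptions: the event schema, field names, hostnames, and the `flag_suspicious` helper are all illustrative, not a real detection rule from any product.

```python
# Hypothetical behaviour-based correlation sketch (illustrative only).
# Flags processes that BOTH contact a generative-AI API endpoint AND
# write a new script file -- the kind of 'just-in-time' self-rewriting
# pattern described for PROMPTFLUX. Hostnames and the event format
# below are assumptions for the example, not real telemetry.

AI_API_HOSTS = {"generativelanguage.googleapis.com", "api.openai.com"}
SCRIPT_EXTS = (".vbs", ".ps1", ".js", ".py")

def flag_suspicious(events):
    """events: list of dicts with 'pid', 'type', and detail fields.
    Returns the pids that show both behaviours."""
    called_api, dropped_script = set(), set()
    for ev in events:
        if ev["type"] == "net_connect" and ev.get("host") in AI_API_HOSTS:
            called_api.add(ev["pid"])
        if ev["type"] == "file_write" and ev.get("path", "").endswith(SCRIPT_EXTS):
            dropped_script.add(ev["pid"])
    # Either behaviour alone is common and benign; the combination is rare.
    return sorted(called_api & dropped_script)

events = [
    {"pid": 101, "type": "net_connect", "host": "generativelanguage.googleapis.com"},
    {"pid": 101, "type": "file_write", "path": "C:\\Temp\\update.vbs"},
    {"pid": 202, "type": "net_connect", "host": "api.openai.com"},  # benign tool
]
print(flag_suspicious(events))  # -> [101]
```

The point of the design is that neither behaviour is suspicious in isolation; correlating them per process is what makes the heuristic resistant to code mutation, since the malware can rewrite its source but not avoid exhibiting the behaviours it needs.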
A High Court judge has warned that a solicitor who pressed an expert witness to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach and, citing the latest survey, noted that 14% of experts said they would accept such terms, a practice he described as unacceptable.
Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 for summaries with confidential prompts. There is no duty to disclose such use, but the judgment must remain the judge’s own.
Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.
Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.
For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within the courts, limited, non-determinative AI may assist, but outcomes must remain human.
The Central Bank of Ireland has launched a new campaign to alert consumers to increasingly sophisticated scams targeting financial services users. Officials warned that scammers are adapting, making caution essential with online offers and investments.
Scammers are now using tactics such as fake comparison websites that appear legitimate but collect personal information for fraudulent products or services. Fraud recovery schemes are also common, promising to recover lost funds for an upfront fee, which often leads to further financial loss.
Advanced techniques include AI-generated social media profiles and ads, or ‘deepfakes’, impersonating public figures to promote fake investment platforms.
Deputy Governor Colm Kincaid warned that scams now offer slightly above-market returns, making them harder to spot. Consumers are encouraged to verify information, use regulated service providers, and seek regulated advice before making financial decisions.
The Central Bank advises using trusted comparison sites, checking ads and investment platforms, ignoring unsolicited recovery offers, and following the SAFE test: Stop, Assess, Factcheck, Expose. Reporting suspected scams to the Central Bank or An Garda Síochána remains crucial to protecting personal finances.