The third UK-EU Cyber Dialogue was held in Brussels on 9 and 10 December 2025, bringing together senior officials under the UK-EU Trade and Cooperation Agreement to strengthen cooperation on cybersecurity and digital resilience.
The meeting was co-chaired by Andrew Whittaker from the UK Foreign, Commonwealth and Development Office and Irfan Hemani from the Department for Science, Innovation and Technology, alongside EU representatives from the European External Action Service and the European Commission.
Officials from Europol and ENISA also participated, reinforcing operational and regulatory coordination and helping to avoid fragmented policy approaches.
Discussions covered cyber legislation, deterrence strategies, countering cybercrime, incident response and cyber capacity development, with an emphasis on maintaining strong security standards while reducing unnecessary compliance burdens on industry.
Both sides confirmed that the next UK-EU Cyber Dialogue will take place in London in 2026.
US credit reporting company 700Credit has confirmed a data breach affecting more than 5.6 million individuals after attackers exploited a compromised third-party API used to exchange consumer data with external integration partners.
The incident originated from a supply chain failure: one partner was breached earlier in 2025 and failed to notify 700Credit.
The attackers launched a sustained, high-volume data extraction campaign starting on October 25, 2025, which operated for more than two weeks before access was shut down.
Around 20 percent of consumer records were accessed, exposing names, home addresses, dates of birth and Social Security numbers, while internal systems, payment platforms and login credentials were not compromised.
Despite the absence of financial system access, the exposed personal data significantly increases the risk of identity theft and sophisticated phishing attacks impersonating credit reporting services.
The breach has been reported to the Federal Trade Commission and the FBI, with regulators coordinating responses through industry bodies representing affected dealerships.
Individuals impacted by the incident are currently being notified and offered two years of free credit monitoring, complimentary credit reports and access to a dedicated support line.
Authorities have urged recipients to act promptly by monitoring their credit activity and taking protective measures to minimise the risk of fraud.
Libraries Connected, supported by a £310,400 grant from the UK Government’s Digital Inclusion Innovation Fund administered by the Department for Science, Innovation and Technology (DSIT), is launching Innovating in Trusted Spaces: Libraries Advancing the Digital Inclusion Action Plan.
The programme will run from November 2025 to March 2026 across 121 library branches in Newcastle, Northumberland, Nottingham City and Nottinghamshire, targeting older people, low-income families and individuals with disabilities to ensure they are not left behind amid rapid digital and AI-driven change.
Public libraries are already leading providers of free internet access and basic digital skills support, offering tens of thousands of public computers and learning opportunities each year. However, only around 27 percent of UK adults currently feel confident in recognising AI-generated content online, underscoring the need for improved digital and media literacy.
The project will create and test a new digital inclusion guide for library staff, focusing on the benefits and risks of AI tools, misinformation and emerging technologies, as well as building a national network of practice for sharing insights.
Partners in the programme include Good Things Foundation and WSA Community, which will help co-design materials and evaluate the initiative’s impact to inform future digital inclusion efforts across communities.
Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.
Advances in voice cloning and speech synthesis have enabled the production of large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.
Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.
Studios such as Los Angeles-based Inception Point AI have taken the model to scale, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.
The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.
Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.
Canada has launched a major new quantum initiative aimed at strengthening domestic technological sovereignty and accelerating the development of industrial-scale quantum computing.
Announced in Toronto, Phase 1 of the Canadian Quantum Champions Program forms part of a wider $334.3 million investment under Budget 2025 to expand Canada’s quantum ecosystem.
The programme will provide up to $92 million in initial funding, with agreements signed with Anyon Systems, Nord Quantique, Photonic and Xanadu Quantum Technologies for up to $23 million each.
The funding is designed to support the development of fault-tolerant quantum computers capable of solving real-world problems, while anchoring advanced research, talent, and production in Canada rather than allowing strategic capabilities to migrate abroad.
The initiative also supports Canada’s forthcoming Defence Industrial Strategy, reflecting the growing role of quantum technologies in cryptography, materials science and threat analysis.
Technical progress will be assessed through a new Benchmarking Quantum Platform led by the National Research Council of Canada, with further programme phases to be announced as development milestones are reached.
Google is rolling out a major Translate upgrade powered by Gemini to improve text and speech translation. The update enhances contextual understanding so idioms, tone and intent are interpreted more naturally.
A beta feature for live headphone translation enables real-time speech-to-speech output. Gemini processes audio directly, preserving cadence and emphasis to improve conversations and lectures. Android users in the US, Mexico and India gain early access, with wider availability planned for 2026.
Translate is also gaining expanded language-learning tools for speaking practice and progress tracking. Additional language pairs, including English to German and Portuguese, broaden support for learners worldwide.
Google aims to reduce friction in global communication by focusing on meaning rather than literal phrasing. Engineers expect user feedback to shape the AI live translation beta across platforms.
LG Electronics will unveil a new AI Cabin Platform at CES 2026 in Las Vegas, positioning the system as a next step beyond today’s software-defined vehicles and toward what the company calls AI-defined mobility.
The platform is designed to run on automotive high-performance computing systems and is powered by Qualcomm Technologies’ Snapdragon Cockpit Elite. LG says it applies generative AI models directly to in-vehicle infotainment, enabling more context-aware and personalised driving experiences.
Unlike cloud-dependent systems, all AI processing occurs on-device within the vehicle. LG says this approach enables real-time responses while improving reliability, privacy, and data security by avoiding communication with external servers.
Using data from internal and external cameras, the system can assess driving conditions and driver awareness to provide proactive alerts. LG also demonstrated adaptive infotainment features, including AI-generated visuals and music suggestions that respond to weather, time, and driving context.
LG will showcase the AI Cabin Platform at a private CES event, alongside a preview of its AI-defined vehicle concept. The company says the platform builds on its expanding partnership with Qualcomm Technologies and on its earlier work integrating infotainment and driver-assistance systems.
YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.
Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.
Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.
Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.
YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.
Platform X has paid an administrative fine of nearly Rp80 million after failing to meet Indonesia’s content moderation requirements related to pornographic material, according to the country’s digital regulator.
The Ministry of Communication and Digital Affairs said the payment was made on 12 December 2025, after a third warning letter and further exchanges with the company. Officials confirmed that Platform X appointed a Singapore-based representative to complete the process.
The regulator welcomed the company’s compliance, framing the payment as a demonstration of responsibility by an electronic system operator under Indonesian law. Authorities said the move supports efforts to keep the national digital space safe, healthy, and productive.
All funds were processed through official channels and transferred directly to the state treasury managed by the Ministry of Finance, in line with existing regulations, the ministry said.
Officials said enforcement actions against domestic and global platforms, including those operating from regional hubs such as Singapore, remain a priority. The measures aim to protect children and vulnerable groups and encourage stronger content moderation and communication.
A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.
The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.
Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.
US researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.
The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.