UK-EU cyber dialogue strengthens policy alignment

The third UK-EU Cyber Dialogue was held in Brussels on 9 and 10 December 2025, bringing together senior officials under the UK-EU Trade and Cooperation Agreement to strengthen cooperation on cybersecurity and digital resilience.

The meeting was co-chaired by Andrew Whittaker from the UK Foreign, Commonwealth and Development Office and Irfan Hemani from the Department for Science, Innovation and Technology, alongside EU representatives from the European External Action Service and the European Commission.

Officials from Europol and ENISA also participated, reinforcing operational and regulatory coordination and helping to avoid fragmented policy approaches.

Discussions covered cyber legislation, deterrence strategies, countering cybercrime, incident response and cyber capacity development, with an emphasis on maintaining strong security standards while reducing unnecessary compliance burdens on industry.

Both sides confirmed that the next UK-EU Cyber Dialogue will take place in London in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Credit reporting breach exposes 5.6 million consumers through third-party API

US credit reporting company 700Credit has confirmed a data breach affecting more than 5.6 million individuals after attackers exploited a compromised third-party API used to exchange consumer data with external integration partners.

The incident originated from a supply chain failure: one integration partner was breached earlier in 2025 and failed to notify 700Credit.

The attackers launched a sustained, high-volume data extraction campaign starting on October 25, 2025, which operated for more than two weeks before access was shut down.

Around 20 percent of consumer records were accessed, exposing names, home addresses, dates of birth and Social Security numbers, while internal systems, payment platforms and login credentials were not compromised.

Despite the absence of financial system access, the exposed personal data significantly increases the risk of identity theft and sophisticated phishing attacks impersonating credit reporting services.

The breach has been reported to the Federal Trade Commission and the FBI, with regulators coordinating responses through industry bodies representing affected dealerships.

Individuals impacted by the incident are currently being notified and offered two years of free credit monitoring, complimentary credit reports and access to a dedicated support line.

Authorities have urged recipients to act promptly by monitoring their credit activity and taking protective measures to minimise the risk of fraud.

AI content flood drives ‘slop’ to word of the year

Merriam-Webster has chosen ‘slop’ as its 2025 word of the year, reflecting the rise of low-quality digital content produced by AI. The term originally meant soft mud, but now describes absurd or fake online material.

Greg Barlow, Merriam-Webster’s president, said the word captures how AI-generated content has fascinated, annoyed and sometimes alarmed people. Tools like AI video generators can produce deepfakes and manipulated clips in seconds.

The spike in searches for ‘slop’ shows growing public awareness of poor-quality content and a desire for authenticity. People want real, genuine material rather than AI-driven junk content.

AI-generated slop includes everything from absurd videos to fake news and junky digital books. Merriam-Webster selects its word of the year by analysing search trends and cultural relevance.

Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, with fraudulent work passed off as their own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music that copied her folk style and lyrics.

Fans initially congratulated her on a new release, even though she had not put out an album since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish it from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from unsuspecting users.

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.

US TikTok investors face uncertainty as sale delayed again

Investors keen to buy TikTok’s US operations say they are left waiting as the sale is delayed again. ByteDance, TikTok’s Chinese owner, was required to sell or be blocked under a 2024 law.

US President Donald Trump seems set to extend the deadline for a fifth time. Billionaires, including Frank McCourt, Alexis Ohanian and Kevin O’Leary, are awaiting approval.

Investor McCourt confirmed his group has raised the necessary capital and is prepared to move forward once the sale is allowed. National security concerns remain the main reason for the ongoing delays.

Project Liberty, led by McCourt, plans to operate TikTok without Chinese technology, including the recommendation algorithm. The group has developed alternative systems to run the platform independently.

Zoom launches AI Companion 3.0 with expanded features

Zoom has unveiled AI Companion 3.0, its latest AI assistant, which extends functionality beyond meetings with a new web interface, workflow tools, and agentic search. Select features are now accessible to free Zoom Workplace Basic users, while full access is available via a paid add-on.

Free users can generate meeting summaries, action item lists, and insights, albeit with usage limitations.

The updated AI Companion introduces agentic retrieval, enabling searches across meeting summaries, transcripts, and connected services, such as Google Drive and Microsoft OneDrive, with Gmail and Outlook support planned.

Users can automatically generate follow-up tasks and draft emails using a post-meeting template, while the Daily Reflection Report summarises tasks and updates to help prioritise work.

A new agentic writing mode allows drafting, editing, and refining business documents in a canvas-style interface, and AI-created content can be exported in multiple formats, including Markdown, PDF, Word, and Zoom Docs.

Additional tools include AI-based brainstorming and, for Custom AI Companion users, a deep research mode consolidating insights from multiple meetings and documents.

Basic plan users get limited access for up to three meetings per month, including automated summaries, in-meeting queries, and AI-generated notes. Up to 20 prompts are included via the side panel and web interface, while broader access requires a subscription priced at Rs 1,080 per month.

The new web interface also offers built-in prompts to guide users in exploring the assistant’s capabilities.

AI tools enable large-scale monetisation of political misinformation in the UK

YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.

Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.

Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.

Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.

YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.

Indonesia fines Platform X for pornographic content violations

Platform X has paid an administrative fine of nearly Rp80 million after failing to meet Indonesia’s content moderation requirements related to pornographic material, according to the country’s digital regulator.

The Ministry of Communication and Digital Affairs said the payment was made on 12 December 2025, after a third warning letter and further exchanges with the company. Officials confirmed that Platform X appointed a Singapore-based representative to complete the process.

The regulator welcomed the company’s compliance, framing the payment as a demonstration of responsibility by an electronic system operator under Indonesian law. Authorities said the move supports efforts to keep the national digital space safe, healthy, and productive.

All funds were processed through official channels and transferred directly to the state treasury managed by the Ministry of Finance, in line with existing regulations, the ministry said.

Officials said enforcement actions against domestic and global platforms, including those operating from regional hubs such as Singapore, remain a priority. The measures aim to protect children and vulnerable groups and encourage stronger content moderation and communication.

Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

The researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.

Conduit revolutionises neuro-language research with 10,000-hour dataset

San Francisco start-up Conduit has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real-time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.
