No sensitive data compromised in SoundCloud incident

SoundCloud has confirmed a recent security incident that temporarily affected platform availability and involved the limited exposure of user data. The company detected unauthorised activity on an ancillary service dashboard and acted immediately to contain the situation.

Third-party cybersecurity experts were engaged to investigate and support the response. The incident also involved two brief denial-of-service attacks that temporarily disrupted web access.

Approximately 20% of users were affected; however, no sensitive data, such as passwords or financial details, was compromised. Only email addresses and publicly visible profile information were involved.

In response, SoundCloud has strengthened its systems, enhancing monitoring, reviewing identity and access controls, and auditing related systems. Some configuration updates have led to temporary VPN connectivity issues, which the company is working to resolve.

SoundCloud emphasises that user privacy remains a top priority and encourages vigilance against phishing. The platform will continue to provide updates and take steps to minimise the risk of future incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK sets course for comprehensive crypto regulation

The UK government has announced plans to bring cryptoassets firmly within the regulatory perimeter, aiming to support innovation while strengthening consumer protection and attracting long-term investment into the sector.

From 2027, cryptoasset firms will be regulated by the Financial Conduct Authority under rules similar to those governing traditional financial products, such as stocks and shares. The move is intended to provide legal clarity and increase confidence among consumers and businesses.

Ministers say that proportionate regulation will support innovation, ensure competitive markets, and strengthen the UK’s position as a global hub for digital assets. Enhanced oversight will boost transparency, aid sanctions enforcement, and help detect and tackle illicit activity.

The initiative forms part of a broader strategy to shape global crypto standards, including ongoing cooperation with the United States through the Transatlantic Taskforce, as the UK seeks to secure its role in the future of digital finance.


Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, presenting fraudulent work as the artists' own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music that copied her folk style and lyrics.

Fans initially congratulated her on the release, even though she had not put out an album since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from unsuspecting users.
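The economics behind such scams can be illustrated with a rough back-of-the-envelope calculation. This is a hypothetical sketch: the per-stream rate, account counts, and stream volumes below are illustrative assumptions, not figures reported in the article.

```python
# Rough illustration of how bot-driven, repeated listening can inflate
# streaming royalty payouts. All figures are illustrative assumptions.

def monthly_payout(per_stream_rate, accounts, streams_per_account_per_day, days=30):
    """Estimate monthly royalties from repeated listening across accounts."""
    total_streams = accounts * streams_per_account_per_day * days
    return total_streams * per_stream_rate

# A single genuine listener: negligible revenue per track.
single = monthly_payout(per_stream_rate=0.003, accounts=1,
                        streams_per_account_per_day=10)

# A small bot farm looping the same fraudulent tracks around the clock.
farm = monthly_payout(per_stream_rate=0.003, accounts=500,
                      streams_per_account_per_day=400)

print(f"Single listener: ${single:.2f}/month")   # well under a dollar
print(f"Bot farm:        ${farm:,.2f}/month")
```

Even at fractions of a cent per stream, automation multiplies negligible individual payouts into meaningful sums, which is why distributors and platforms focus on detecting artificial streaming patterns rather than per-track revenue.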

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.


Russia rejects crypto as money but expands legal recognition

Russian lawmakers have reiterated that cryptocurrencies will not be recognised as money, maintaining a strict ban on their use for domestic payments while allowing limited application as investment assets.

Anatoly Aksakov, head of the State Duma Committee on the Financial Market, emphasised that all payments within Russia must be conducted in rubles, echoing the central bank’s long-standing stance against the use of cryptocurrencies in internal settlements.

At the same time, legislative proposals point to a more nuanced legal approach. A bill submitted by United Russia lawmaker Igor Antropenko seeks to recognise cryptocurrencies as marital property, classifying digital assets acquired during marriage as jointly owned in divorce proceedings.

The proposal reflects the growing adoption of cryptocurrency in Russia, where digital assets are increasingly used for investment and savings. It also aligns family law with broader regulatory shifts that permit the use of crypto in foreign trade under an experimental framework.


Zoom launches AI Companion 3.0 with expanded features

Zoom has unveiled AI Companion 3.0, its latest AI assistant, which extends functionality beyond meetings with a new web interface, workflow tools, and agentic search. Select features are now accessible to free Zoom Workplace Basic users, while full access is available via a paid add-on.

Free users can generate meeting summaries, action item lists, and insights, albeit with usage limitations.

The updated AI Companion introduces agentic retrieval, enabling searches across meeting summaries, transcripts, and connected services, such as Google Drive and Microsoft OneDrive, with Gmail and Outlook support planned.

Users can automatically generate follow-up tasks and draft emails using a post-meeting template, while the Daily Reflection Report summarises tasks and updates to help prioritise work.

A new agentic writing mode allows drafting, editing, and refining business documents in a canvas-style interface, and AI-created content can be exported in multiple formats, including Markdown, PDF, Word, and Zoom Docs.

Additional tools include AI-based brainstorming and, for Custom AI Companion users, a deep research mode consolidating insights from multiple meetings and documents.

Basic plan users get limited access for up to three meetings per month, including automated summaries, in-meeting queries, and AI-generated notes. Up to 20 prompts are included via the side panel and web interface, while broader access requires a subscription priced at Rs 1,080 per month.

The new web interface also offers built-in prompts to guide users in exploring the assistant’s capabilities.


Canada advances quantum computing with a strategic $92 million public investment

Canada has launched a major new quantum initiative aimed at strengthening domestic technological sovereignty and accelerating the development of industrial-scale quantum computing.

Announced in Toronto, Phase 1 of the Canadian Quantum Champions Program forms part of a wider $334.3 million investment under Budget 2025 to expand Canada’s quantum ecosystem.

The programme will provide up to $92 million in initial funding, with agreements signed with Anyon Systems, Nord Quantique, Photonic and Xanadu Quantum Technologies for up to $23 million each.

The funding is designed to support the development of fault-tolerant quantum computers capable of solving real-world problems, while anchoring advanced research, talent, and production in Canada, rather than allowing strategic capabilities to migrate abroad.

The initiative also supports Canada’s forthcoming Defence Industrial Strategy, reflecting the growing role of quantum technologies in cryptography, materials science and threat analysis.

Technical progress will be assessed through a new Benchmarking Quantum Platform led by the National Research Council of Canada, with further programme phases to be announced as development milestones are reached.


Building trustworthy AI for humanitarian response

A new vision for Humanitarian AI is emerging around a simple idea: technology should grow from local knowledge if it is to work everywhere. Drawing on the IFRC’s slogan ‘Local, everywhere,’ this approach argues that AI should not be driven by hype or raw computing power, but by the lived experience of communities and humanitarian workers on the ground. With millions of volunteers and staff worldwide, the Red Cross and Red Crescent Movement holds a vast reservoir of practical knowledge that AI can help preserve, organise, and share for more effective crisis response.

In a recent blog post, Jovan Kurbalija explains that this bottom-up approach is not only practical but also ethically sound. AI systems grounded in local humanitarian knowledge can better reflect cultural and social contexts, reduce bias and misinformation, and strengthen trust by being governed by humanitarian organisations rather than opaque commercial platforms. Trust, he argues, lies in the people and institutions behind the technology, not in algorithms themselves.

Kurbalija also notes that developing such AI is technically and financially realistic. Open-source models, mobile and edge computing, and domain-specific AI tools make it feasible to deploy functional systems even in low-resource environments. Most humanitarian tasks, from decision support to translation or volunteer guidance, do not require massive infrastructure, but high-quality, well-structured knowledge rooted in real-world experience.

If developed carefully, Humanitarian AI could also support the IFRC’s broader renewal goals, from strengthening local accountability and collaboration to safeguarding independence and humanitarian principles. Starting with small pilot projects and scaling up gradually, the Movement could transform AI into a shared public good that not only enhances responses to today’s crises but also preserves critical knowledge for future generations.


CES 2026 to feature LG’s new AI-driven in-car platform

LG Electronics will unveil a new AI Cabin Platform at CES 2026 in Las Vegas, positioning the system as a next step beyond today’s software-defined vehicles and toward what the company calls AI-defined mobility.

The platform is designed to run on automotive high-performance computing systems and is powered by Qualcomm Technologies’ Snapdragon Cockpit Elite. LG says it applies generative AI models directly to in-vehicle infotainment, enabling more context-aware and personalised driving experiences.

Unlike cloud-dependent systems, all AI processing occurs on-device within the vehicle. LG says this approach enables real-time responses while improving reliability, privacy, and data security by avoiding communication with external servers.

Using data from internal and external cameras, the system can assess driving conditions and driver awareness to provide proactive alerts. LG also demonstrated adaptive infotainment features, including AI-generated visuals and music suggestions that respond to weather, time, and driving context.

LG will showcase the AI Cabin Platform at a private CES event, alongside a preview of its AI-defined vehicle concept. The company says the platform builds on its expanding partnership with Qualcomm Technologies and on its earlier work integrating infotainment and driver-assistance systems.


Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

The researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.


Conduit revolutionises neuro-language research with 10,000-hour dataset

Conduit, a San Francisco start-up, has spent six months building what it claims is the largest neural language dataset ever assembled, capturing around 10,000 hours of non-invasive brain recordings from thousands of participants.

The project aims to train thought-to-text AI systems that interpret semantic intent from brain activity moments before speech or typing occurs.

Participants take part in extended conversational sessions instead of rigid laboratory tasks, interacting freely with large language models through speech or simplified keyboards.

Engineers found that natural dialogue produced higher quality data, allowing tighter alignment between neural signals, audio and text while increasing overall language output per session.

Conduit developed its own sensing hardware after finding no commercial system capable of supporting large-scale multimodal recording.

Custom headsets combine multiple neural sensing techniques within dense training rigs, while future inference devices will be simplified once model behaviour becomes clearer.

Power systems and data pipelines were repeatedly redesigned to balance signal clarity with scalability, leading to improved generalisation across users and environments.

As data volume increased, operational costs fell through automation and real-time quality control, allowing continuous collection across long daily schedules.

With data gathering largely complete, the focus has shifted toward model training, raising new questions about the future of neural interfaces, AI-mediated communication and cognitive privacy.
