Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, presenting fraudulent work as their own. British folk artist Emily Portman discovered an AI-generated album, Orca, on Spotify and Apple Music, which copied her folk style and lyrics.

Fans initially congratulated her on a release she had not made since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish from genuine tracks. While revenues from such fraudulent streams are low individually, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from unsuspecting users.

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia rejects crypto as money but expands legal recognition

Russian lawmakers have reiterated that cryptocurrencies will not be recognised as money, maintaining a strict ban on their use for domestic payments while allowing limited application as investment assets.

Anatoly Aksakov, head of the State Duma Committee on the Financial Market, emphasised that all payments within Russia must be conducted in rubles, echoing the central bank’s long-standing stance against the use of cryptocurrencies in internal settlements.

At the same time, legislative proposals point to a more nuanced legal approach. A bill submitted by United Russia lawmaker Igor Antropenko seeks to recognise cryptocurrencies as marital property, classifying digital assets acquired during marriage as jointly owned in divorce proceedings.

The proposal reflects the growing adoption of cryptocurrency in Russia, where digital assets are increasingly used for investment and savings. It also aligns family law with broader regulatory shifts that permit the use of crypto in foreign trade under an experimental framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Canada advances quantum computing with a strategic $92 million public investment

Canada has launched a major new quantum initiative aimed at strengthening domestic technological sovereignty and accelerating the development of industrial-scale quantum computing.

Announced in Toronto, Phase 1 of the Canadian Quantum Champions Program forms part of a wider $334.3 million investment under Budget 2025 to expand Canada’s quantum ecosystem.

The programme will provide up to $92 million in initial funding, with agreements signed with Anyon Systems, Nord Quantique, Photonic and Xanadu Quantum Technologies for up to $23 million each.

A funding that is designed to support the development of fault-tolerant quantum computers capable of solving real-world problems, while anchoring advanced research, talent, and production in Canada, rather than allowing strategic capabilities to migrate abroad.

The initiative also supports Canada’s forthcoming Defence Industrial Strategy, reflecting the growing role of quantum technologies in cryptography, materials science and threat analysis.

Technical progress will be assessed through a new Benchmarking Quantum Platform led by the National Research Council of Canada, with further programme phases to be announced as development milestones are reached.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s rise signals a shift in frontier tech investment

OpenAI overtook SpaceX as the world’s most valuable private company in October after a secondary share sale valued the AI firm at $500 billion. The deal put Sam Altman’s company about $100 billion ahead of Elon Musk’s space venture.

That lead may prove short-lived, as SpaceX is now planning its own secondary share sale that could value the company at around $800 billion. An internal letter seen by multiple outlets suggests Musk would reclaim the top spot within months.

The clash is the latest chapter in a rivalry that dates back to OpenAI’s founding in 2015, before Musk left the organisation in 2018 and later launched the startup xAI. Since then, lawsuits and public criticism have marked a sharp breakdown in relations.

Musk also confirmed on X that SpaceX is exploring a major initial public offering, while OpenAI’s recent restructuring allows it to pursue an IPO in the future. The valuation battle reflects soaring investor appetite for frontier technologies, as AI, space, robotics and defence startups attract unprecedented private funding.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study warns that LLMs are vulnerable to minimal tampering

Researchers from Anthropic, the UK AI Security Institute and the Alan Turing Institute have shown that only a few hundred crafted samples can poison LLM models. The tests revealed that around 250 malicious entries could embed a backdoor that triggers gibberish responses when a specific phrase appears.

Models ranging from 600 million to 13 billion parameters (such as Pythia) were affected, highlighting the scale-independent nature of the weakness. A planted phrase such as ‘sudo’ caused output collapse, raising concerns about targeted disruption and the ease of manipulating widely trained systems.

Security specialists note that denial-of-service effects are worrying, yet deceptive outputs pose far greater risk. Prior studies already demonstrated that medical and safety-critical models can be destabilised by tiny quantities of misleading data, heightening the urgency for robust dataset controls.

Researchers warn that open ecosystems and scraped corpora make silent data poisoning increasingly feasible. Developers are urged to adopt stronger provenance checks and continuous auditing, as reliance on LLMs continues to expand for AI purposes across technical and everyday applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CES 2026 to feature LG’s new AI-driven in-car platform

LG Electronics will unveil a new AI Cabin Platform at CES 2026 in Las Vegas, positioning the system as a next step beyond today’s software-defined vehicles and toward what the company calls AI-defined mobility.

The platform is designed to run on automotive high-performance computing systems and is powered by Qualcomm Technologies’ Snapdragon Cockpit Elite. LG says it applies generative AI models directly to in-vehicle infotainment, enabling more context-aware and personalised driving experiences.

Unlike cloud-dependent systems, all AI processing occurs on-device within the vehicle. LG says this approach enables real-time responses while improving reliability, privacy, and data security by avoiding communication with external servers.

Using data from internal and external cameras, the system can assess driving conditions and driver awareness to provide proactive alerts. LG also demonstrated adaptive infotainment features, including AI-generated visuals and music suggestions that respond to weather, time, and driving context.

LG will showcase the AI Cabin Platform at a private CES event, alongside a preview of its AI-defined vehicle concept. The company says the platform builds on its expanding partnership with Qualcomm Technologies and on its earlier work integrating infotainment and driver-assistance systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools enable large-scale monetisation of political misinformation in the UK

YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.

Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.

Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.

Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.

YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia fines Platform X for pornographic content violations

Platform X has paid an administrative fine of nearly Rp80 million after failing to meet Indonesia’s content moderation requirements related to pornographic material, according to the country’s digital regulator.

The Ministry of Communication and Digital Affairs said the payment was made on 12 December 2025, after a third warning letter and further exchanges with the company. Officials confirmed that Platform X appointed a representative to complete the process, who is based in Singapore.

The regulator welcomed the company’s compliance, framing the payment as a demonstration of responsibility by an electronic system operator under Indonesian law. Authorities said the move supports efforts to keep the national digital space safe, healthy, and productive.

All funds were processed through official channels and transferred directly to the state treasury managed by the Ministry of Finance, in line with existing regulations, the ministry said.

Officials said enforcement actions against domestic and global platforms, including those operating from regional hubs such as Singapore, remain a priority. The measures aim to protect children and vulnerable groups and encourage stronger content moderation and communication.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Universities back generative AI but guidance remains uneven

A majority of leading US research universities are encouraging the use of generative AI in teaching, according to a new study analysing institutional policies and guidance documents across higher education.

The research reviewed publicly available policies from 116 R1 universities and found that 63 percent explicitly support the use of generative AI, while 41 percent provide detailed classroom guidance. More than half of the institutions also address ethical considerations linked to AI adoption.

Most guidance focuses on writing-related activities, with far fewer references to coding or STEM applications. The study notes that while many universities promote experimentation, expectations placed on faculty can be demanding, often implying significant changes to teaching practices.

US researchers also found wide variation in how universities approach oversight. Some provide sample syllabus language and assignment design advice, while others discourage the use of AI-detection tools, citing concerns around reliability and academic trust.

The authors caution that policy statements may not reflect real classroom behaviour and say further research is needed to understand how generative AI is actually being used by educators and students in practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How data centres affect electricity, prices, water consumption and jobs

Data centres have become critical infrastructure for modern economies, supporting services ranging from digital communications and online commerce to emergency response systems and financial transactions.

As AI expands, demand for cloud computing continues to accelerate, increasing the need for additional data centre capacity worldwide.

Concerns about environmental impact often focus on electricity and water use, yet recent data indicate that data centres are not primary drivers of higher power prices and consume far less water than many traditional industries.

Studies show that rising electricity costs are largely linked to grid upgrades, climate-related damage and fuel prices instead of large-scale computing facilities, while water use by data centres remains a small fraction of overall consumption.

Technological improvements have further reduced resource intensity. Operators have significantly improved water efficiency per unit of computing power, adopting closed-loop liquid cooling and advanced energy management systems.

In many regions, water is required only intermittently, with consumption levels lower than those in sectors such as clothing manufacturing, agriculture and automotive services.

Beyond digital services, data centres deliver tangible economic benefits to local communities. Large-scale investments generate construction activity, long-term technical employment and stable tax revenues, while infrastructure upgrades and skills programmes support regional development.

As cloud computing and AI continue to shape everyday life, data centres are increasingly positioned as both economic and technological anchors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!