AI playlist creator comes to YouTube for Premium subscribers

YouTube has introduced a new AI Playlist feature for YouTube Premium and YouTube Music Premium subscribers on Android and iOS, enabling users to generate customised music playlists by describing a mood, genre, activity or vibe in natural language.

From the Library tab, users can tap ‘New’, select ‘AI playlist’, and enter text or voice prompts, such as ‘sad post-rock’ or ‘’90s classic hits’, to instantly build a curated list of tracks.

The rollout builds on YouTube’s earlier AI experiments in music discovery and positions the company alongside other streaming services like Spotify, Amazon Music and Deezer, which have launched similar generative playlist tools.

The feature reflects a broader trend of streaming platforms embedding generative AI to personalise discovery and enhance user engagement for paying subscribers.

Details such as the degree of user control over generated playlists and support for iterative refinement remain limited, and YouTube has not clarified how often playlists can be refreshed or edited after creation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI controls animal behaviour using light-guided technology

Scientists at Nagoya University have developed an advanced AI system capable of identifying specific animal behaviours with over 90% accuracy and controlling the brain circuits that drive them in real time across multiple species.

The system, named YORU (Your Optimal Recognition Utility), recognises entire behaviours from single video frames rather than tracking individual body parts, making it 30% faster than previous tools.
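The article does not publish YORU’s code or architecture, but the core idea of labelling a whole frame rather than tracking body parts can be sketched roughly as follows; the backbone, behaviour labels and preprocessing here are illustrative assumptions, not details from the Nagoya team.

```python
# Illustrative sketch of single-frame behaviour classification, NOT the actual
# YORU implementation. The backbone, labels and preprocessing are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

BEHAVIOURS = ["courtship_song", "grooming", "food_sharing", "idle"]  # example classes

# Generic image backbone with a small classification head, standing in for
# whatever architecture YORU actually uses.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(BEHAVIOURS))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_frame(frame_path: str) -> str:
    """Assign a behaviour label to one whole video frame, with no pose tracking."""
    frame = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(frame)
    return BEHAVIOURS[int(logits.argmax(dim=1))]
```

Classifying the entire frame in a single forward pass removes the separate pose-estimation step that body-part trackers rely on, which is the kind of shortcut the reported 30% speed gain points to.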

Researchers demonstrated the technology’s precision by combining it with optogenetics to silence a male fruit fly’s courtship song mid-performance, causing the unimpressed female to walk away.

The breakthrough lies in the system’s ability to target individual animals within social groups, whereas previous optogenetic methods illuminated entire laboratory chambers, affecting all subjects simultaneously.

YORU’s AI-driven light source can now track and manipulate a single subject’s neurons whilst its neighbours move freely nearby. The tool has proven its versatility across diverse species, successfully analysing food-sharing in ants, social orientation in zebrafish, and grooming patterns in mice.

Requiring minimal training data and no programming skills, YORU is available online for researchers worldwide studying the neural mechanisms underlying social interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches cyberbullying action plan to protect children online

The European Commission has launched an Action Plan Against Cyberbullying aimed at protecting the mental health and well-being of children and teenagers online across the EU. The initiative focuses on easier access to reporting, national coordination, and prevention.

A central element is the development of an EU-wide reporting app that would allow victims to report cyberbullying, receive support, and safely store evidence. The Commission will provide a blueprint for Member States to adapt and link to national helplines.

To ensure consistent protection, Member States are encouraged to adopt a shared understanding of cyberbullying and develop national action plans. This would support comparable data collection and a more coordinated EU response.

The Action Plan builds on existing legislation, including the Digital Services Act, the Audiovisual Media Services Directive, and the AI Act. Updated guidelines will strengthen platform obligations and address AI-enabled forms of abuse.

Prevention and education are also prioritised through expanded resources for schools and families via Safer Internet Centres and the Better Internet for Kids platform. The Commission will implement the plan with Member States, industry, civil society, and children.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft explores superconductors for AI data centres

Microsoft is studying high-temperature superconductors to transmit electricity to its AI data centres in the US. The company says zero-resistance cables could reduce power losses and eliminate heat generated during transmission.
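As a rough, hypothetical illustration of the physics behind that claim (the figures below are invented examples, not Microsoft’s), resistive losses scale with the square of the current, so a zero-resistance cable dissipates no heat at all:

```python
# Back-of-the-envelope Joule-heating comparison; the current and resistance are
# made-up example values, not figures from Microsoft.
def transmission_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    """Resistive (Joule) loss in a cable: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

conventional = transmission_loss_watts(current_amps=2_000, resistance_ohms=0.05)
superconducting = transmission_loss_watts(current_amps=2_000, resistance_ohms=0.0)

print(f"Conventional cable: {conventional / 1000:.0f} kW lost as heat")  # 200 kW
print(f"Superconducting cable: {superconducting:.0f} W lost as heat")    # 0 W
```

With zero resistance the I²R term vanishes, which is the loss and heat elimination the company is pointing to; the engineering cost shifts to the cryogenic cooling described below.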

High-temperature superconductors can carry large currents through compact cables, potentially cutting space requirements for substations and overhead lines. Microsoft argues that denser infrastructure could support expanding AI workloads across the US.

The main obstacle is cooling, as superconducting materials must operate at extremely low temperatures using cryogenic systems. Even high-temperature variants require conditions near minus 200 degrees Celsius.

Rising electricity demand from AI systems has strained grids in the US, prompting political scrutiny and industry pledges to fund infrastructure upgrades. Microsoft says efficiency gains could ease pressure while it develops additional power solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Custom AI bots support student negotiating skills

In Cambridge, instructors at MIT and the Harvard Negotiation Project are using AI negotiation bots to enhance classroom simulations. The tools are designed to prompt reflection rather than offer fixed answers.

Students taking part in a multiparty exercise called Harborco engage with preparation, back-table and debriefing bots. The system helps them analyse stakeholder interests and test strategies before and after live negotiations.

Back-table bots simulate unseen political or organisational actors who often influence real-world negotiations. Students can safely explore trade-offs and persuasion tactics in a protected digital setting.

According to reported course findings, most participants said the AI bots improved preparation and sharpened their understanding of opposing interests. Instructors in Cambridge stress that AI supports, rather than replaces, human teaching and peer learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and human love in the digital age debate

AI is increasingly entering intimate areas of human life, including romance and emotional companionship. AI chatbots are now widely used as digital companions, raising broader questions about emotional authenticity and human-machine relationships.

Millions of people use AI companion apps, and studies suggest that a significant share of them describe their relationship with a chatbot as romantic. While users may experience genuine emotions, experts stress that current AI systems do not feel love but generate responses based on patterns in data.

Researchers explain that large language models can simulate empathy and emotional understanding, yet they lack consciousness and subjective experience. Their outputs are designed to imitate human interaction rather than reflect genuine emotion.

Scientific research describes love as deeply rooted in biology. Neurochemicals such as dopamine and oxytocin, along with specific brain regions, shape attraction, attachment, and emotional bonding. These processes are embodied and chemical, and machines lack the biology that produces them.

Some scholars argue that future AI systems could replicate certain cognitive aspects of attachment, such as loyalty or repeated engagement. However, most agree that replicating human love would likely require consciousness, which remains poorly understood and technically unresolved.

Debate continues over whether conscious AI is theoretically possible. While some researchers believe advanced architectures or neuromorphic computing could move in that direction, no existing system meets the established criteria for consciousness.

In practice, human-AI romantic relationships remain asymmetrical. Chatbots are designed to engage, agree, and provide comfort, which can create dependency or unrealistic expectations about real-world relationships.

Experts therefore emphasise transparency and AI literacy, stressing that users should understand that AI companions simulate emotion and do not possess feelings, intentions, or awareness. While these systems can imitate expressions of love, they do not experience it, and the emotional reality remains human even when the interaction is digital.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT researchers tackle antimicrobial resistance with AI and synthetic biology

A pioneering research initiative at MIT is deploying AI and synthetic biology to combat the escalating global crisis of antimicrobial resistance, which has been fuelled by decades of antibiotic overuse and misuse.

The $3 million, three-year project, led by Professor James J. Collins at MIT’s Department of Biological Engineering, centres on developing programmable antibacterials designed to target specific pathogens.

The approach uses AI to design small proteins that turn off specific bacterial functions. These designer molecules would be produced and delivered by engineered microbes, offering a more precise alternative to traditional antibiotics.

Antimicrobial resistance impacts low and middle-income countries most severely, where limited diagnostic infrastructure causes treatment delays. Drug-resistant infections continue to rise globally, whilst the development of new antibacterial tools has stagnated.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI workshops strengthen digital skills in Wales tourism sector

Wales has launched a national programme of practical AI workshops to help tourism and hospitality businesses adopt digital tools. Funded by Visit Wales and the Welsh Government, the initiative aims to strengthen the sector’s competitiveness by helping companies save time and enhance their online presence.

Strong demand reflects growing readiness within the sector to embrace AI. Delivered through Business Wales, the free sessions have quickly reached near capacity, with most places booked shortly after launch. The programme is tailored to small and medium-sized enterprises and prioritises hands-on learning over technical theory.

Workshops focus on simple, immediately usable tools that improve website content, search visibility, and customer engagement. Organisers highlight that AI-driven search features are reshaping how visitors discover tourism services, making accuracy, consistency, and authoritative digital content increasingly important.

At the centre of the initiative is Harri, a bespoke AI tool developed specifically for Welsh tourism businesses. Designed to reflect the local context, it supports listings management, customer enquiries, and search optimisation. Early feedback indicates that the approach delivers practical and measurable benefits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cisco warns AI agents need checks before joining workforces

The US-based technology company Cisco is promoting a future in which AI agents work alongside employees rather than operate as mere tools. Jeetu Patel, the company’s president, revealed that Cisco has already produced a product written entirely with AI-generated code and expects several more by the end of 2026.

Patel also described a shift to spec-driven development that allows smaller human teams to work with digital agents instead of relying on larger groups of developers.

Human oversight will still play a central role. Coders will be asked to review AI-generated outputs as they adjust to a workplace where AI influences every stage of development. Patel argues that AI should be viewed as part of every loop rather than kept at the edge of decision-making.

Security concerns dominate the company’s planning. Patel warns that AI agents acting as digital co-workers must undergo background checks in the same way that employees do.

Cisco is investing billions in security systems to protect agents from external attacks and to prevent agents that malfunction or act independently from harming society.

Looking ahead, Cisco expects AI to deliver insights that extend beyond human knowledge. Patel believes that the most significant gains will emerge from breakthroughs in science, health, energy and poverty reduction rather than simple productivity improvements.

He also positions Cisco as a core provider of infrastructure designed to support the next stage of the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!