Learnovate launches community of practice on AI for learning

The Learnovate Centre, a global innovation hub focused on the future of work and learning at Trinity College Dublin, is spearheading a community of practice on responsible AI in learning, bringing together educators, policymakers, institutional leaders and sector specialists to discuss safe, effective and compliant uses of AI in educational settings.

This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.

One of the community’s early activities is a series of virtual meetings designed to build consensus around AI norms in teaching, share compliance strategies and exchange knowledge on real-world implementation.

Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.

Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.

It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated ‘slop’ spreads on Spotify, raising platform integrity concerns

A TechRadar report highlights the growing presence of AI-generated music on Spotify, often produced in large quantities and designed to exploit platform algorithms or royalty systems.

These tracks, sometimes described as ‘AI slop’, are appearing in playlists and recommendations, raising concerns about quality control and fairness for human musicians.

The article outlines signs that a track may be AI-generated, including generic or repetitive artwork, minimal or inconsistent artist profiles, and unusually high volumes of releases in a short time. Some tracks also feature vague or formulaic titles and metadata, making them difficult to trace to real creators.

Readers are encouraged to use Spotify’s reporting tools to flag suspicious or low-quality AI content.

The issue is part of a broader governance challenge for streaming platforms, which must balance open access to generative tools with the need to maintain content quality, transparency and fair compensation for artists.

Tech firms push longer working hours to compete in AI race

Tech companies competing in AI are increasingly expecting employees to work longer weeks to keep pace with rapid innovation. Some start-ups openly promote 70-hour schedules, presenting intense effort as necessary to launch products faster and stay ahead of rivals.

Investors and founders often believe that extended working hours improve development speed and increase the chances of securing funding. Fast growth and fierce global competition have made urgency a defining feature of many AI workplaces.

However, research shows productivity rises only up to a limit before fatigue reduces efficiency and focus. Experts warn that excessive workloads can lead to burnout and make it harder for companies to retain experienced professionals.

Health specialists link extended working weeks to higher risks of heart disease and stroke. Many experts argue that smarter management and efficient use of technology offer safer and more effective paths to lasting productivity.

AI in education reveals a critical evidence gap

Universities are increasingly reorganising around AI, treating AI-based instruction as a proven solution for delivering education more efficiently. This shift reflects a broader belief that AI can reliably replace or reduce human-led teaching, despite growing uncertainty about its actual impact on learning.

Recent research challenges this assumption by re-examining the evidence used to justify AI-driven reforms. A comprehensive re-analysis of AI and learning studies reveals severe publication bias, with positive results published far more frequently than negative or null findings. Once corrected, reported learning gains from AI shrink substantially and may be negligible.

More critically, the research exposes deep inconsistency across studies. Outcomes vary so widely that the evidence cannot predict whether AI will help or harm learning in a given context, and no educational level, discipline, or AI application shows consistent benefits.

By contrast, human-mediated teaching remains a well-established foundation of learning. Decades of research demonstrate that understanding develops through interaction, adaptation, and shared meaning-making, leading the article to conclude that AI in education remains an open question, while human instruction remains the known constant.

Study questions reliability of AI medical guidance

AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. The findings show that users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.

Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use OpenAI’s GPT-4o, Meta’s Llama 3, Cohere’s Command R+, or a standard search engine to assess symptoms and determine next steps.

Chatbot users identified their condition about one-third of the time, with only 45 per cent selecting the correct medical response. Performance levels matched those relying solely on search engines, despite AI systems scoring highly on medical licensing benchmarks.

Experts attributed the gap to communication failures. Users often provided incomplete information or misinterpreted chatbot guidance.

Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.

When grief meets AI

AI is now being used to create ‘deathbots’, chatbots designed to mimic people after they die using their messages and voice recordings. The technology is part of a growing digital afterlife industry, with some people using it to maintain a sense of connection with loved ones who have passed away.

Researchers at Cardiff University studied how these systems recreate personalities using digital data such as texts, emails, and audio recordings. The findings described the experience as both fascinating and unsettling, raising questions about memory, identity, and emotional impact.

Tests showed current deathbots often fail to accurately reproduce voices or personalities due to technical limitations. Researchers warned that these systems rely on simplified versions of people, which may distort memories rather than preserve them authentically.

Experts believe the technology could improve, but remain uncertain whether it will become widely accepted. Concerns remain about emotional consequences and whether digital versions could alter how people remember those who have died.

Pakistan pledges major investment in AI by 2030

Pakistan plans to invest $1 billion in AI by 2030, Prime Minister Shehbaz Sharif said at the opening of Indus AI Week in Islamabad. The pledge aims to build a national AI ecosystem.

The government said AI education would expand to schools and universities, including those in remote regions. It also plans 1,000 fully funded PhD scholarships in AI to strengthen the country’s research capacity.

Sharif said Pakistan would train one million non-IT professionals in AI skills by 2030, identifying agriculture, mining and industry as priority sectors for AI-driven productivity gains.

Pakistan approved a National AI Policy in 2025, although implementation has moved slowly. Officials said Indus AI Week marks an early step towards broader adoption of AI across the country.

Singtel opens largest AI ready data centre in Singapore

Singtel’s data centre arm Nxera has opened its largest data centre to date at Tuas in Singapore. The facility strengthens the country’s role as a regional hub for AI infrastructure.

The Tuas site offers 58MW of AI-ready capacity and is described as the country’s highest power-density data centre. More than 90 per cent of the facility’s capacity was committed before the official launch.

Nxera said the facility is hyperconnected through direct access to international and domestic networks, while integration with a cable landing station delivers lower latency and improved reliability.

Singtel said the Tuas development supports rising demand for AI, cloud and high-performance computing. Nxera plans further expansion in Asia while reinforcing Singapore’s position in digital infrastructure.

Educators turn to AI despite platform fatigue

Educators in the US are increasingly using AI to address resource shortages, despite growing frustration with fragmented digital platforms. A new survey highlights rising dependence on AI tools across American schools and universities.

The study found many educators juggle numerous digital systems that fail to integrate smoothly. Respondents said constant switching between platforms adds to workload pressures and burnout.

AI use is focused on boosting productivity, with educators applying tools to research, writing and administrative tasks. Many also use AI to support student learning as budgets tighten.

Concerns remain around data security, ethics and system overload. Educators said better integration between AI and learning tools could ease strain and improve classroom outcomes.

New York weighs pause on data centre expansion

Lawmakers in New York have introduced a bill proposing a three-year pause on permits for new data centres. Supporters say rapid expansion linked to AI infrastructure risks straining the state’s energy systems.

Concerns focus on rising electricity demand and higher household bills as tech companies scale AI operations. Critics argue local communities bear the cost of supporting large-scale computing facilities.

The proposal has drawn backing from environmental groups and politicians who want time to set stricter rules. US senator Bernie Sanders has also called for a nationwide halt on new data centres.

Officials say the pause would allow stronger policies on grid access and fair cost sharing. The debate reflects a wider US tension between AI-driven economic growth and environmental limits.
