The EU's ambition to streamline telecom rules is facing fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.
A plan meant to simplify long-standing procedures now risks adding complexity, as officials examine its impact on oversight bodies.
Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.
Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.
The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.
Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of the projected $650 billion AI investment by strengthening its energy supply capacity.
AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.
Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed opportunities for industrial growth.
The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.
The Learnovate Centre, a global innovation hub for the future of work and learning at Trinity College Dublin, is spearheading a community of practice on responsible AI in learning. The group brings together educators, policymakers, institutional leaders and sector specialists to discuss safe, effective and compliant uses of AI in educational settings.
This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.
Among the community's early activities are virtual meetings designed to build consensus around AI norms in teaching, compliance strategies and knowledge exchange on real-world implementation.
Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.
Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.
It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.
A TechRadar report highlights the growing presence of AI-generated music on Spotify, often produced in large quantities and designed to exploit platform algorithms or royalty systems.
These tracks, sometimes described as ‘AI slop’, are appearing in playlists and recommendations, raising concerns about quality control and fairness for human musicians.
The article outlines signs that a track may be AI-generated, including generic or repetitive artwork, minimal or inconsistent artist profiles, and unusually high volumes of releases in a short time. Some tracks also feature vague or formulaic titles and metadata, making them difficult to trace to real creators.
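As a purely illustrative aid, the checklist above could be expressed as a simple scoring heuristic. The sketch below is hypothetical: the field names, thresholds and sample data are stand-ins invented for illustration, not part of Spotify's systems or any real API.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Hypothetical track metadata; none of these fields come from a real API."""
    title: str
    artist_bio_length: int        # characters in the artist profile
    releases_last_30_days: int    # how many tracks the artist released recently
    title_is_formulaic: bool      # e.g. matches templates like 'Chill Beats Vol. N'

def slop_score(track: Track) -> int:
    """Count how many of the report's red flags a track trips (0-3)."""
    flags = 0
    if track.artist_bio_length < 50:      # minimal or inconsistent artist profile
        flags += 1
    if track.releases_last_30_days > 20:  # unusually high release volume
        flags += 1
    if track.title_is_formulaic:          # vague or formulaic title/metadata
        flags += 1
    return flags

# A track tripping two or more flags might merit a closer look, or a report.
suspect = Track('Relaxing Piano Vol. 847', artist_bio_length=0,
                releases_last_30_days=120, title_is_formulaic=True)
print(slop_score(suspect))  # -> 3
```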
Readers are encouraged to use Spotify’s reporting tools to flag suspicious or low-quality AI content.
The issue is part of a broader governance challenge for streaming platforms, which must balance open access to generative tools with the need to maintain content quality, transparency and fair compensation for artists.
Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.
The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.
The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.
Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.
Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.
Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and speaking on community stages will be restricted to adults rather than shared with teens.
Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.
The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.
Tech companies competing in AI are increasingly expecting employees to work longer weeks to keep pace with rapid innovation. Some start-ups openly promote 70-hour schedules, presenting intense effort as necessary to launch products faster and stay ahead of rivals.
Investors and founders often believe that extended working hours improve development speed and increase the chances of securing funding. Fast growth and fierce global competition have made urgency a defining feature of many AI workplaces.
However, research shows productivity rises only up to a limit before fatigue reduces efficiency and focus. Experts warn that excessive workloads can lead to burnout and make it harder for companies to retain experienced professionals.
Health specialists link extended working weeks to higher risks of heart disease and stroke. Many experts argue that smarter management and efficient use of technology offer safer and more effective paths to lasting productivity.
AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. Findings show users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.
Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use OpenAI's GPT-4o, Meta's Llama 3, Cohere's Command R+, or a standard search engine to assess symptoms and determine next steps.
Chatbot users identified their condition about one-third of the time, with only 45 percent selecting the correct medical response. Performance levels matched those relying solely on search engines, despite AI systems scoring highly on medical licensing benchmarks.
Experts attributed the gap to communication failures. Users often provided incomplete information or misinterpreted chatbot guidance.
Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.
AI is now being used to create ‘deathbots’, chatbots that draw on a person’s messages and voice recordings to mimic them after death. The technology is part of a growing digital afterlife industry, with some people using it to maintain a sense of connection with loved ones who have passed away.
Researchers at Cardiff University studied how these systems recreate personalities using digital data such as texts, emails, and audio recordings. The findings described the experience as both fascinating and unsettling, raising questions about memory, identity, and emotional impact.
Tests showed current deathbots often fail to accurately reproduce voices or personalities due to technical limitations. Researchers warned that these systems rely on simplified versions of people, which may distort memories rather than preserve them authentically.
Experts believe the technology could improve, but remain uncertain whether it will become widely accepted. Concerns remain about emotional consequences and whether digital versions could alter how people remember those who have died.
South Korea’s second-largest cryptocurrency exchange, Bithumb, is attempting to recover more than $40bn in Bitcoin after a promotional payout error credited customers with Bitcoin rather than Korean won.
The mistake occurred on 6 February during a ‘random box’ event, when prize values were entered in Bitcoin rather than in Korean won. Intended rewards totalled 620,000 won for 695 users, yet 620,000 bitcoins were distributed.
Only 249 customers opened their boxes, but the credited sums exceeded the exchange’s holdings.
Most balances were reversed through internal ledger corrections. About 13bn won ($9m) remains unrecovered after some users sold or withdrew funds before accounts were frozen. Authorities said 86 customers liquidated roughly 1,788 bitcoins within 35 minutes.
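For scale, the figures reported here imply the following back-of-the-envelope arithmetic; the per-coin price is inferred from the reported totals, and the won/dollar rate from the 13bn won ($9m) figure just cited, as neither rate is stated explicitly in the source:

$$
\frac{\$40\text{bn}}{620{,}000\ \text{BTC}} \approx \$64{,}500 \text{ per bitcoin},
\qquad
\frac{620{,}000\ \text{won}}{\approx 1{,}444\ \text{won/USD}} \approx \$430 \text{ intended in total}.
$$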
Regulators have opened a full investigation, and lawmakers have scheduled an emergency hearing. Legal uncertainty remains over liability, while the exchange confirmed no hacking was involved and pledged stronger internal controls.
Pakistan plans to invest $1 billion in AI by 2030, Prime Minister Shehbaz Sharif said at the opening of Indus AI Week in Islamabad. The pledge aims to build a national AI ecosystem.
The government said AI education would expand to schools and universities, including in remote regions, and announced plans for 1,000 fully funded PhD scholarships in AI to strengthen research capacity.
Sharif said Pakistan would train one million non-IT professionals in AI skills by 2030, identifying agriculture, mining and industry as priority sectors for AI-driven productivity gains.
Pakistan approved a National AI Policy in 2025, although implementation has moved slowly. Officials said Indus AI Week marks an early step towards broader adoption of AI across the country.