EU challenges Meta over WhatsApp AI restrictions

The European Commission has warned Meta that it may have breached EU antitrust rules by restricting third-party AI assistants from operating on WhatsApp. A Statement of Objections outlines regulators’ preliminary view that the policy could distort competition in the AI assistant market.

The probe centres on updated WhatsApp Business terms announced in October 2025 and enforced from January 2026. Under the changes, rival general-purpose AI assistants were effectively barred from accessing the platform, leaving Meta AI as the only integrated assistant available to users.

Regulators argue that WhatsApp serves as a critical gateway for consumers to access AI services. Excluding competitors could reinforce Meta’s dominance in communication applications while limiting market entry and expansion opportunities for smaller AI developers.

Interim measures are now under consideration to prevent what authorities describe as potentially serious and irreversible competitive harm. Meta can respond before any interim measures are imposed, while the broader antitrust probe continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coal reserves could help Nigeria enter $650 billion AI economy

Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of a projected $650 billion in AI investment by strengthening its energy supply capacity.

AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.

Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed opportunities for industrial growth.

The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake rather than support informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI redefines criminal justice decision making

AI is increasingly being considered for use in criminal justice systems, raising significant governance and accountability questions. Experts warn that, despite growing adoption, there are currently no clear statutory rules governing the deployment of AI in criminal proceedings, underscoring the need for safeguards, transparency, and human accountability in high-stakes decisions.

Within this context, AI is being framed primarily as a support tool rather than a decision maker. Government advisers argue that AI could assist judges, police, and justice officials by structuring data, drafting reports, and supporting risk assessments, while final decisions on sentencing and release remain firmly in human hands.

However, concerns persist about the reliability of AI systems in legal settings. The risk of inaccuracies, or so-called hallucinations, in which systems generate incorrect or fabricated information, is particularly problematic when AI outputs could influence judicial outcomes or public safety.

The debate is closely linked to wider sentencing reforms aimed at reducing prison populations. Proposals include phasing out short custodial sentences, expanding alternatives such as community service and electronic monitoring, and increasing the relevance of AI-supported risk assessments.

At the same time, AI tools are already being used in parts of the justice system for predictive analytics, case management, and legal research, often with limited oversight. This gap between practice and regulation has intensified calls for clearer standards and disclosure requirements.

Proponents also highlight potential efficiency gains. AI could help ease administrative burdens on courts and police by automating routine tasks and analysing large volumes of data, freeing professionals to focus on judgment and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writing as thinking in the age of AI

In his article, Richard Gunderman argues that writing is not merely a way to present ideas but a core human activity through which people think, reflect and form meaning.

He contends that when AI systems generate text on behalf of users, they risk replacing this cognitive process with automated output, weakening the connection between thought and expression.

According to the piece, writing serves as a tool for reasoning, emotional processing and moral judgment. Offloading it to AI can diminish originality, flatten individual voice and encourage passive consumption of machine-produced ideas.

Gunderman warns that this shift could lead to intellectual dependency, where people rely on AI to structure arguments and articulate positions rather than developing those skills themselves.

The article also raises ethical concerns about authenticity and responsibility. If AI produces large portions of written work, it becomes unclear who is accountable for the ideas expressed. Gunderman suggests that overreliance on AI writing tools may undermine trust in communication and blur the line between human and machine authorship.

Overall, the piece calls for a balanced approach: AI may assist with editing or idea generation, but the act of writing itself should remain fundamentally human, as it is central to critical thinking, identity and social responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Learnovate launches community of practice on AI for learning

The Learnovate Centre, a global innovation hub focused on the future of work and learning at Trinity College Dublin, is spearheading a community of practice on responsible AI in learning. The group brings together educators, policymakers, institutional leaders and sector specialists to discuss safe, effective and compliant uses of AI in educational settings.

This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.

Among the community’s early activities are virtual meetings designed to build consensus around AI norms in teaching, compliance strategies and knowledge exchange on real-world implementation.

Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.

Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.

It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated ‘slop’ spreads on Spotify, raising platform integrity concerns

A TechRadar report highlights the growing presence of AI-generated music on Spotify, often produced in large quantities and designed to exploit platform algorithms or royalty systems.

These tracks, sometimes described as ‘AI slop’, are appearing in playlists and recommendations, raising concerns about quality control and fairness for human musicians.

The article outlines signs that a track may be AI-generated, including generic or repetitive artwork, minimal or inconsistent artist profiles, and unusually high volumes of releases in a short time. Some tracks also feature vague or formulaic titles and metadata, making them difficult to trace to real creators.
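To make these signals concrete, the sketch below shows how they might be combined into a rough heuristic score. It is a hypothetical illustration: the Track fields, thresholds and keyword list are assumptions made for the example, not a real Spotify API or a validated detector.

```python
# Hypothetical heuristic sketch of the warning signs listed above.
# Fields, thresholds and keywords are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Track:
    artist_bio_length: int      # characters in the artist profile
    releases_last_30_days: int  # tracks the artist published this month
    title: str

GENERIC_TITLE_WORDS = {"chill", "lofi", "vibes", "relax", "beats", "ambient"}

def suspicion_score(track: Track) -> int:
    """Count how many of the article's warning signs a track exhibits."""
    score = 0
    if track.artist_bio_length < 50:        # minimal or inconsistent profile
        score += 1
    if track.releases_last_30_days > 20:    # unusually high release volume
        score += 1
    if set(track.title.lower().split()) & GENERIC_TITLE_WORDS:
        score += 1                          # vague, formulaic title
    return score

# A track ticking all three boxes scores 3.
print(suspicion_score(Track(artist_bio_length=0,
                            releases_last_30_days=45,
                            title="Chill Lofi Beats Vol. 37")))
```

A score like this could only ever be a first filter; the article’s advice to report suspicious tracks reflects the fact that none of these signals is conclusive on its own.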

Readers are encouraged to use Spotify’s reporting tools to flag suspicious or low-quality AI content.

The issue is part of a broader governance challenge for streaming platforms, which must balance open access to generative tools with the need to maintain content quality, transparency and fair compensation for artists.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech firms push longer working hours to compete in AI race

Tech companies competing in AI are increasingly expecting employees to work longer weeks to keep pace with rapid innovation. Some start-ups openly promote 70-hour schedules, presenting intense effort as necessary to launch products faster and stay ahead of rivals.

Investors and founders often believe that extended working hours improve development speed and increase the chances of securing funding. Fast growth and fierce global competition have made urgency a defining feature of many AI workplaces.

However, research shows productivity rises only up to a limit before fatigue reduces efficiency and focus. Experts warn that excessive workloads can lead to burnout and make it harder for companies to retain experienced professionals.

Health specialists link extended working weeks to higher risks of heart disease and stroke. Many experts argue that smarter management and efficient use of technology offer safer and more effective paths to lasting productivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI in education reveals a critical evidence gap

Universities are increasingly reorganising around AI, treating AI-based instruction as a proven solution for delivering education more efficiently. This shift reflects a broader belief that AI can reliably replace or reduce human-led teaching, despite growing uncertainty about its actual impact on learning.

Recent research challenges this assumption by re-examining the evidence used to justify AI-driven reforms. A comprehensive re-analysis of AI and learning studies reveals severe publication bias, with positive results published far more frequently than negative or null findings. Once corrected, reported learning gains from AI shrink substantially and may be negligible.
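To see why publication bias inflates pooled estimates, consider the minimal simulation below. It uses entirely synthetic numbers and sketches the general mechanism, not a re-analysis of the research described here: studies with a near-zero true effect are generated, but only those that happen to reach a significant positive result are ‘published’.

```python
# Synthetic illustration of publication bias; all numbers are invented,
# not drawn from any real study of AI and learning.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.05   # assume a near-zero true learning gain (Cohen's d)
n_studies = 500
n_per_arm = 40       # participants per condition in each synthetic study

all_effects, published = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    d = treated.mean() - control.mean()   # standardised, since sd = 1
    se = np.sqrt(2.0 / n_per_arm)         # approximate standard error of d
    all_effects.append(d)
    if d / se > 1.96:                     # only significant positives appear
        published.append(d)

print(f"mean effect, all studies:    {np.mean(all_effects):+.3f}")
print(f"mean effect, published only: {np.mean(published):+.3f}")
```

Averaging only the published studies yields an estimate several times larger than the true effect, which is exactly the gap that bias corrections in a re-analysis try to close.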

More critically, the research exposes deep inconsistency across studies. Outcomes vary so widely that the evidence cannot predict whether AI will help or harm learning in a given context, and no educational level, discipline, or AI application shows consistent benefits.

By contrast, human-mediated teaching remains a well-established foundation of learning. Decades of research demonstrate that understanding develops through interaction, adaptation, and shared meaning-making, leading the article to conclude that AI in education remains an open question, while human instruction remains the known constant.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study questions reliability of AI medical guidance

AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. Findings show users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.

Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use one of OpenAI’s GPT-4o, Meta’s Llama 3, Cohere’s Command R+, or a standard search engine to assess symptoms and determine next steps.

Chatbot users identified their condition about one-third of the time, with only 45 percent selecting the correct medical response. Performance matched that of participants relying solely on search engines, despite AI systems scoring highly on medical licensing benchmarks.

Experts attributed the gap to communication failures. Users often provided incomplete information or misinterpreted chatbot guidance.

Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!