AI redefines criminal justice decision making

AI is increasingly being considered for use in criminal justice systems, raising significant governance and accountability questions. Experts warn that, despite growing adoption, there are currently no clear statutory rules governing the deployment of AI in criminal proceedings, underscoring the need for safeguards, transparency, and human accountability in high-stakes decisions.

Within this context, AI is being framed primarily as a support tool rather than a decision maker. Government advisers argue that AI could assist judges, police, and justice officials by structuring data, drafting reports, and supporting risk assessments, while final decisions on sentencing and release remain firmly in human hands.

However, concerns persist about the reliability of AI systems in legal settings. The risk of inaccuracies, or so-called hallucinations, in which systems generate incorrect or fabricated information, is particularly problematic when AI outputs could influence judicial outcomes or public safety.

The debate is closely linked to wider sentencing reforms aimed at reducing prison populations. Proposals include phasing out short custodial sentences, expanding alternatives such as community service and electronic monitoring, and giving AI-supported risk assessments a greater role.

At the same time, AI tools are already being used in parts of the justice system for predictive analytics, case management, and legal research, often with limited oversight. This gap between practice and regulation has intensified calls for clearer standards and disclosure requirements.

Proponents also highlight potential efficiency gains. AI could help ease administrative burdens on courts and police by automating routine tasks and analysing large volumes of data, freeing professionals to focus on judgment and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Learnovate launches community of practice on AI for learning

The Learnovate Centre, a global innovation hub focused on the future of work and learning at Trinity College Dublin, is spearheading a community of practice on responsible AI in learning, bringing together educators, policymakers, institutional leaders and sector specialists to discuss safe, effective and compliant uses of AI in educational settings.

This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.

One of the community’s early activities includes virtual meetings designed to build consensus around AI norms in teaching, compliance strategies and knowledge exchange on real-world implementation.

Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.

Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.

It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal their age-group assignment through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and speaking on community stages will be restricted to verified adults rather than shared with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech firms push longer working hours to compete in AI race

Tech companies competing in AI are increasingly expecting employees to work longer weeks to keep pace with rapid innovation. Some start-ups openly promote 70-hour schedules, presenting intense effort as necessary to launch products faster and stay ahead of rivals.

Investors and founders often believe that extended working hours improve development speed and increase the chances of securing funding. Fast growth and fierce global competition have made urgency a defining feature of many AI workplaces.

However, research shows productivity rises only up to a limit before fatigue reduces efficiency and focus. Experts warn that excessive workloads can lead to burnout and make it harder for companies to retain experienced professionals.

Health specialists link extended working weeks to higher risks of heart disease and stroke. Many experts argue that smarter management and efficient use of technology offer safer and more effective paths to lasting productivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI in education reveals a critical evidence gap

Universities are increasingly reorganising around AI, treating AI-based instruction as a proven solution for delivering education more efficiently. This shift reflects a broader belief that AI can reliably replace or reduce human-led teaching, despite growing uncertainty about its actual impact on learning.

Recent research challenges this assumption by re-examining the evidence used to justify AI-driven reforms. A comprehensive re-analysis of AI and learning studies reveals severe publication bias, with positive results published far more frequently than negative or null findings. Once corrected, reported learning gains from AI shrink substantially and may be negligible.
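
As a rough, purely hypothetical illustration of that mechanism (not drawn from the study's own data), the short Python sketch below simulates many small trials of an intervention with no true effect and then "publishes" only the positive results, showing how selective reporting alone can manufacture an apparent learning gain:

```python
import random
import statistics

# Toy simulation with assumed numbers: many small trials of an intervention
# whose true average effect is zero, where only trials that happen to show a
# positive result get "published".
random.seed(0)

true_effect = 0.0   # assumed true learning gain (standardised units)
n_trials = 500      # hypothetical number of studies
noise_sd = 0.3      # sampling noise per study

all_effects = [random.gauss(true_effect, noise_sd) for _ in range(n_trials)]
published = [e for e in all_effects if e > 0]  # positive-results-only filter

print(f"Mean effect across all studies:  {statistics.mean(all_effects):+.3f}")
print(f"Mean effect in 'published' pool: {statistics.mean(published):+.3f}")
# The published-only mean is clearly positive even though the true effect is
# zero, mirroring how publication bias can inflate reported AI learning gains.
```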

More critically, the research exposes deep inconsistency across studies. Outcomes vary so widely that the evidence cannot predict whether AI will help or harm learning in a given context, and no educational level, discipline, or AI application shows consistent benefits.

By contrast, human-mediated teaching remains a well-established foundation of learning. Decades of research demonstrate that understanding develops through interaction, adaptation, and shared meaning-making, leading the article to conclude that AI in education remains an open question, while human instruction remains the known constant.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study questions reliability of AI medical guidance

AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. Findings show users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.

Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use one of OpenAI’s GPT-4o, Meta’s Llama 3, Cohere’s Command R+, or a standard search engine to assess symptoms and determine next steps.

Chatbot users identified their condition about one-third of the time, with only 45 percent selecting the correct medical response. Performance matched that of participants relying solely on search engines, despite AI systems scoring highly on medical licensing benchmarks.

Experts attributed the gap to communication failures. Users often provided incomplete information or misinterpreted chatbot guidance.

Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When grief meets AI

AI is now being used to create ‘deathbots’, chatbots designed to mimic people after they die using their messages and voice recordings. The technology is part of a growing digital afterlife industry, with some people using it to maintain a sense of connection with loved ones who have passed away.

Researchers at Cardiff University studied how these systems recreate personalities using digital data such as texts, emails, and audio recordings. The findings described the experience as both fascinating and unsettling, raising questions about memory, identity, and emotional impact.

Tests showed current deathbots often fail to accurately reproduce voices or personalities due to technical limitations. Researchers warned that these systems rely on simplified versions of people, which may distort memories rather than preserve them authentically.

Experts believe the technology could improve, but remain uncertain whether it will become widely accepted. Concerns remain about emotional consequences and whether digital versions could alter how people remember those who have died.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan pledges major investment in AI by 2030

Pakistan plans to invest $1 billion in AI by 2030, Prime Minister Shehbaz Sharif said at the opening of Indus AI Week in Islamabad. The pledge aims to build a national AI ecosystem in Pakistan.

The government said AI education would expand to schools and universities, including those in remote regions, and announced plans for 1,000 fully funded PhD scholarships in AI to strengthen domestic research capacity.

Sharif also said Pakistan would train one million non-IT professionals in AI skills by 2030, identifying agriculture, mining and industry as priority sectors for AI-driven productivity gains.

Pakistan approved a National AI Policy in 2025, although implementation has moved slowly. Officials said Indus AI Week marks an early step towards broader adoption of AI across the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singtel opens largest AI ready data centre in Singapore

Singtel’s data centre arm Nxera has opened its largest data centre in Singapore at Tuas. The facility strengthens Singapore’s role as a regional hub for AI infrastructure.

The Tuas site offers 58MW of AI-ready capacity and is described as the country’s highest-power-density data centre. More than 90 per cent of the facility’s capacity was committed before the official launch.

Nxera said the facility is hyperconnected through direct access to international and domestic networks, and its integration with a cable landing station delivers lower latency and improved reliability.

Singtel said the Tuas development supports rising demand in Singapore for AI, cloud and high-performance computing. Nxera plans further expansion in Asia while reinforcing Singapore’s position in digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators turn to AI despite platform fatigue

Educators in the US are increasingly using AI to address resource shortages, despite growing frustration with fragmented digital platforms. A new survey highlights rising dependence on AI tools across American schools and universities.

The study found many educators juggle numerous digital systems that fail to integrate smoothly. Respondents said constant switching between platforms adds to workload pressures and burnout across the education sector.

AI use is focused on boosting productivity, with educators applying the tools to research, writing and administrative tasks. Many also use AI to support student learning as budgets tighten.

Concerns remain around data security, ethics and system overload. Educators said better integration between AI and learning tools could ease strain and improve classroom outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!