Kazakhstan adopts AI robotics for orthopaedic surgery

Kazakhstan has introduced an AI-enabled robotic system in Astana to improve the accuracy and efficiency of orthopaedic surgeries. The technology supports more precise surgical planning and execution.

The system was presented during an event highlighting growing cooperation between Kazakhstan and India in medical technologies. Officials from both countries emphasised knowledge exchange and joint progress in advanced healthcare solutions.

Health authorities say robotic assistance could help narrow the gap between the number of joint replacements performed and patient demand. Standardised procedures and improved precision are expected to raise treatment quality nationwide.

The initiative builds on recent medical advances, including Kazakhstan’s first robot-assisted heart surgery in Astana. Authorities view such technologies as part of broader efforts to modernise healthcare funding and expand access to high-tech treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ETSI standard defines cybersecurity rules for AI systems

ETSI has released ETSI EN 304 223, a new European Standard establishing baseline cybersecurity requirements for AI systems.

Approved by national standards bodies, the framework becomes the first globally applicable EN focused specifically on securing AI, extending its relevance beyond European markets.

The standard recognises that AI introduces security risks not found in traditional software. Threats such as data poisoning, indirect prompt injection and vulnerabilities linked to complex data management demand tailored defences instead of conventional approaches alone.

ETSI EN 304 223 combines established cybersecurity practices with targeted measures designed for the distinctive characteristics of AI models and systems.

Adopting a full lifecycle perspective, the ETSI framework defines thirteen principles across secure design, development, deployment, maintenance and end of life.

Alignment with internationally recognised AI lifecycle models supports interoperability and consistent implementation across existing regulatory and technical ecosystems.

ETSI EN 304 223 is intended for organisations across the AI supply chain, including vendors, integrators and operators, and covers systems based on deep neural networks, including generative AI.

Further guidance is expected through ETSI TR 104 159, which will focus on generative AI risks such as deepfakes, misinformation, confidentiality concerns and intellectual property protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RCB to use AI cameras at Chinnaswamy Stadium for crowd management

The Royal Challengers Bengaluru (RCB) franchise has announced plans to install AI-enabled camera systems at M. Chinnaswamy Stadium in Bengaluru ahead of the upcoming Indian Premier League (IPL) season.

The AI cameras are intended to support stadium security teams with real-time crowd monitoring, identifying high-density areas and aiding safer entry and exit flows.

The system will use computer vision and analytics to monitor spectators and alert authorities to potential bottlenecks or risks, helping security personnel intervene proactively. RCB officials say the technology is part of broader efforts to improve spectator experience and safety, particularly in large-crowd environments.

The move reflects the broader adoption of AI and video analytics tools in sports venues to enhance operational efficiency and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated song removed from Swedish rankings

Sweden has removed a chart-topping song from its official rankings after ruling it was mainly created using AI. The track had attracted millions of streams on Spotify within weeks.

Industry investigators found no public profile for the artist and later linked the song to executives at a music firm who had used AI tools. Producers insisted that the technology merely assisted a human-led creative process.

Music organisations say AI-generated tracks threaten existing industry rules and creator revenues. The decision intensifies debate over how to regulate AI in cultural markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI technology aims to detect emotional distress and depression sooner

A University of Auckland researcher is developing AI tools to identify early signs of depression in young men. The work focuses on using physiological and behavioural data to offer personalised, early-stage mental health support.

Led by bioengineering researcher Kunal Gupta, the research uses data from devices such as smartwatches to detect stress or low mood early. The approach aims to complement existing mental health services rather than replace professional care.

One project, Tōku Hoa, uses an AI-powered virtual companion that responds to biological signals and daily behaviour to encourage small, practical actions. The system is designed to help users recognise patterns in mood and stress over time.

With clinical and community testing planned, the research highlights the potential of adaptive AI systems to provide earlier, more personalised mental health support for young men who are often reluctant to seek help.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How autonomous vehicles shape physical AI trust

Physical AI is increasingly embedded in public and domestic environments, from self-driving vehicles to delivery robots and household automation. As intelligent machines begin to operate alongside people in shared spaces, trust, rather than technological novelty alone, emerges as the central condition for adoption.

Autonomous vehicles provide the clearest illustration of how trust must be earned through openness, accountability, and continuous engagement.

Self-driving systems address long-standing challenges such as road safety, congestion, and unequal access to mobility by relying on constant perception, rule-based behaviour, and fatigue-free operation.

Trials and early deployments suggest meaningful improvements in safety and efficiency, yet public confidence remains uneven. Social acceptance depends not only on performance outcomes but also on whether communities understand how these systems behave and why they make particular decisions.

Dialogue plays a critical role at two levels. Ongoing communication among policymakers, developers, emergency services, and civil society helps align technical deployment with social priorities such as safety, accessibility, and environmental impact.

At the same time, advances in explainable AI allow machines to communicate intent and reasoning directly to users, replacing opacity with interpretability and predictability.

The experience of autonomous vehicles suggests a broader framework for physical AI governance centred on demonstrable public value, transparent performance data, and systems capable of explaining behaviour in human terms.

As physical AI expands into infrastructure, healthcare, and domestic care, trust will depend on sustained dialogue and responsible design rather than the speed of deployment alone.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Law schools urged to embed practical AI training in legal education

With AI tools now widely available to legal professionals, educators and practitioners argue that law schools should integrate practical AI instruction into curricula rather than leave students to learn informally.

The article describes a semester-long experiment in an Entrepreneurship Clinic where students were trained on legal AI tools from platforms such as Bloomberg Law, Lexis and Westlaw, with exercises designed to show both advantages and limitations of these systems.

In structured exercises, students used different AI products to carry out tasks like drafting, research and client communication, revealing that tools vary widely in capabilities and reinforcing the importance of independent legal judgement.

Educators emphasise that AI should be taught as a complement to legal reasoning, not a substitute, and that understanding how and when to verify AI outputs is essential for responsible practice.

The article concludes that clarifying the distinction between AI as a tool and as a crutch will help prepare future lawyers to use technology ethically and competently in both transactional work and litigation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft urges systems approach to AI skills in Europe

AI is increasingly reshaping European workplaces, though large-scale job losses have not yet materialised. Studies by labour bodies show that tasks change faster than roles disappear.

Policymakers and employers face pressure to expand AI skills while addressing unequal access to them. Researchers warn that both the benefits and the risks are concentrated among already skilled workers and larger organisations.

Education systems across Europe are beginning to integrate AI literacy, including teacher training and classroom tools. Progress remains uneven between countries and regions.

Microsoft experts say workforce readiness will depend on evidence-based policy and sustained funding. Skills programmes alone may not offset broader economic and social disruption from AI adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsara turns operational data into real-world impact

Samsara has built a platform that helps companies with physical operations run more safely and efficiently. Founded in 2015 by MIT alumni John Bicket and Sanjit Biswas, the company connects workers, vehicles, and equipment through cloud-based analytics.

The platform combines sensors, AI cameras, GPS tracking, and real-time alerts to cut accidents, fuel use, and maintenance costs. Large companies across logistics, construction, manufacturing, and energy report cost savings and improved safety after adopting the system.

Samsara turns large volumes of operational data into actionable insights for frontline workers and managers. Tools like driver coaching, predictive maintenance, and route optimisation reduce risk at scale while recognising high-performing field workers.

The company is expanding its use of AI to manage weather risk, support sustainability, and enable the adoption of electric fleets. It positions data-driven decision-making as central to modernising critical infrastructure worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft launches Elevate for Educators programme

Elevate for Educators, launched by Microsoft, is a global programme designed to help teachers build the skills and confidence to use AI tools in the classroom. The initiative provides free access to training, credentials, and professional learning resources.

The programme connects educators to peer networks, self-paced courses, and AI-powered simulations. The aim is to support responsible AI adoption while improving teaching quality and classroom outcomes.

New educator credentials have been developed in partnership with ISTE and ASCD. Schools and education systems can also gain recognition for supporting professional development and demonstrating impact in classrooms.

AI-powered education tools within Microsoft 365 have been expanded to support lesson planning and personalised instruction. New features help teachers adapt materials to different learning needs and provide students with faster feedback.

College students will also receive free access to Microsoft 365 Premium and LinkedIn Premium Career for 12 months. The offer includes AI tools, productivity apps, and career resources to support future employment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!