A generative AI model helps athletes avoid injuries and recover faster

Researchers at the University of California, San Diego, have developed a generative AI model designed to prevent sports injuries and assist rehabilitation.

The system, named BIGE (Biomechanics-informed GenAI for Exercise Science), integrates data on human motion with biomechanical constraints such as muscle force limits to create realistic training guidance.

BIGE can generate video demonstrations of optimal movements that athletes can imitate to enhance performance or avoid injury. It can also produce adaptive motions suited for athletes recovering from injuries, offering a personalised approach to rehabilitation.

The model merges generative AI with accurate biomechanical modelling, overcoming the limitations of previous systems that produced anatomically unrealistic results or required heavy computational resources.

To train BIGE, researchers used motion-capture data of athletes performing squats, converting the recordings into 3D skeletal models with precise force calculations. The project’s next phase will expand to other types of movements and to individualised training models.

Beyond sports, researchers suggest the tool could predict fall risks among the elderly. Professor Andrew McCulloch described the technology as ‘the future of exercise science’, while co-author Professor Rose Yu said its methods could be widely applied across healthcare and fitness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Church of Greece launches AI tool LOGOS for believers

LOGOS, a digital tool developed by the Metropolis of Nea Ionia, Filadelfia, Iraklio and Halkidona alongside the University of the Aegean, has marked the Church of Greece’s entry into the age of AI.

The tool gathers information on questions of Christian faith and provides clear, practical answers, complementing rather than replacing human guidance.

Metropolitan Gabriel, who initiated the project, emphasised that LOGOS does not substitute priests but acts as a guide, bringing believers closer to the Church. He said the Church must engage the digital world, insisting that technology should serve humanity instead of the other way around.

The tool also supports younger users, allowing them to safely access accurate information on Orthodox teachings and to counter misleading or harmful content found online. While it cannot receive confessions, it offers prayers and guidance to help believers prepare spiritually.

The Church views LOGOS as part of a broader strategy to embrace digital tools responsibly, ensuring that faith remains accessible and meaningful in the modern technological landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

At UMN, AI meets ethics, history, and craft

AI is remaking daily life, but it can’t define what makes us human. The liberal arts help us probe ethics, meaning, and power as algorithms scale. At the University of Minnesota Twin Cities, that lens anchors curiosity with responsibility.

In the College of Liberal Arts, scholars are treating AI as both a tool and a textbook. They test its limits, trace its histories, and surface trade-offs around bias, authorship, and agency. Students learn to question design choices rather than just consume outputs.

Linguist Amanda Dalola, who directs the Language Center, experiments with AI as a language partner and reflective coach. Her aim isn’t replacement but augmentation: faster feedback, broader practice, richer cultural context. The point is discernment: when to use, when to refuse.

Statistician Galin Jones underscores the scaffolding beneath the hype. You cannot do AI without statistics, he tells students, so the School of Statistics emphasises inference, uncertainty, and validation. Graduates leave fluent in models, and in the limits of what models claim.

Composer Frederick Kennedy’s opera I am Alan Turing turns theory into performance. By staging Turing’s questions about machine thought and human identity, the work fuses history, sound design, and code. Across philosophy, music, and more, CLA frames AI as a human story first.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines Japan’s AI Blueprint for inclusive economic growth

A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.

Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.

AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories reduce inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.

OpenAI suggests that Japan’s focus on ethical and human-centred innovation could make it a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU states split over children’s social media rules

European leaders remain divided over how to restrict children’s use of social media platforms. While most governments agree stronger protections are needed, there is no consensus on enforcement or age limits.

Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.

France and Denmark back full bans for children under 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.

Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Skills platform to boost AI and digital learning

Google has launched Google Skills, a platform helping individuals and organisations build AI and digital expertise. The platform offers nearly 3,000 courses, labs, and credentials from Google Cloud, DeepMind, Grow with Google, and Google for Education in one central hub.

Learners can gain practical experience through hands-on labs, skill badges, certificates, and certifications. Google Skills covers a wide range of learning paths, from AI Essentials and large language model research to quick 10-minute AI Boost Bites.

Gamified features, such as progress streaks and achievements, encourage engagement, while Cloud customers can personalise training for their teams with leaderboards and advanced reporting.

Google Skills also connects learners to employment opportunities. A hiring consortium of over 150 companies, including Jack Henry, uses the platform to fast-track qualified candidates through skills-based hiring initiatives.

No-cost options are available for individuals, higher education institutions, government programmes, NGOs, and Google Cloud customers, helping to bridge the growing digital skills gap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kenya leads the way in AI skilling across Africa

Kenya’s AI National Skilling Initiative (AINSI) is offering valuable insights for African countries aiming to build digital capabilities. With AI projected to create 230 million digital jobs across Africa by 2030, coordinated investment in skills development is vital to unlock this potential.

Despite growing ambition, fragmented efforts and uneven progress continue to limit impact.

Government leadership plays a central role in building national AI capacity. Kenya’s Regional Centre of Competence for Digital and AI Skilling has trained thousands of public servants through structured bootcamps and online programmes.

Standardising credentials and aligning training with industry needs are crucial to ensure skilling efforts translate into meaningful employment.

Industry and the informal economy are key to scaling transformation. Partnerships with KEPSA and MESH are training entrepreneurs and SMEs in AI and cybersecurity while tackling affordability, connectivity, and data access challenges.

Education initiatives, from K–12 to universities and technical institutions, are embedding AI training into curricula to prepare future generations.

Civil society collaboration further broadens access, with community-based programmes reaching gig workers and underserved groups. Kenya’s approach shows how inclusive, cross-sector frameworks can scale digital skills and support Africa’s AI-driven growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference, hosted in Greece to celebrate Athens College’s centennial, discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants fund teacher AI training amid classroom chatbot push

Major technology companies are shifting strategic emphasis toward education by funding teacher training in artificial intelligence. Companies such as Microsoft, OpenAI and Anthropic have pledged millions of dollars to train educators and bring chatbots into classrooms.

Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.

At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’

However, the initiative raises critical questions. Educators expressed concerns about being replaced by AI, while unions emphasise that teachers must lead training content and maintain control over learning. Technology companies see this as a way to expand into education, but also face scrutiny over influence and the implications for teaching practice.

As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!