New AI model uses abdominal scans to assess fall risk

Scientists and clinicians have created an AI model that can analyse routine abdominal imaging, such as CT scans, to identify adults at increased risk of future falls.

By detecting subtle patterns in body composition and muscle quality that may be linked to frailty, the AI system shows promise in augmenting traditional clinical assessments of fall risk.

Falls are a leading cause of injury and disability among older adults, and predicting who is most at risk can be challenging with standard clinical measures alone.

Integrating AI-based analysis with existing imaging data could enable earlier interventions, targeted therapies and personalised care plans, potentially reducing hospitalisations and long-term complications.

Although further validation is needed before routine clinical adoption, this research highlights how AI applications in medical imaging can extend beyond primary diagnosis to support predictive and preventative healthcare strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood figures back anti-AI campaign

More than 800 creatives in the US have signed a statement backing an anti-AI campaign that accuses big technology companies of exploiting human work. High-profile figures from American film and television have endorsed the initiative, which argues that training AI on creative content without consent amounts to theft.

The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.

Actors and filmmakers in the US warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.

The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alaska student arrested after eating AI-generated art in protest

On 13 January 2026, Graham Granger, a film and performing arts major at the University of Alaska Fairbanks, was arrested and charged with criminal mischief after ripping AI-assisted artwork from a campus gallery wall and eating around 57 of the images. He described the act as a protest and performance piece against the use of AI in art.

The destroyed exhibit, titled Shadow Searching: ChatGPT psychosis, was created by another student, Nick Dwyer, using AI to explore his personal experiences with the technology.

Dwyer criticised Granger’s actions as damaging to the artist’s work and initially considered pressing charges himself, but later deferred to the state’s prosecution of the case.

Granger defended his act as both protest and performance art, arguing that reliance on AI undermines human creativity and that the process of making art matters as much as the finished product. He said he did not regret the incident and saw it as a way to spark conversation about the role of AI in creative fields.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan arrests suspect over AI deepfake pornography

Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.

Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.

The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.

European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WEF paper warns of widening AI investment gap

Policy-makers are being urged to take a more targeted approach to ‘sovereign AI’ spending, as a new paper released alongside the World Economic Forum meeting in Davos argues that no country can realistically build every part of the AI stack alone. Instead, the authors recommend treating AI sovereignty as ‘strategic interdependence’, combining selective domestic investment with trusted partnerships and alliances.

The paper, co-authored by the World Economic Forum and Bain & Co, highlights how heavily the United States and China dominate the global AI landscape. It estimates that the two countries capture around 65% of worldwide investment across the AI value chain, reflecting a full-stack model, from chips and cloud infrastructure to applications, that most other economies cannot match at the same scale.

For smaller and mid-sized economies, that imbalance can translate into a competitive disadvantage, because AI infrastructure, such as data centres and computing capacity, is increasingly viewed as the backbone of national AI capability. Still, the report argues that faster-moving countries can carve out a niche by focusing on a few priority areas, pooling regional capacity, or securing access through partnerships rather than trying to replicate the US-China approach.

The message was echoed in Davos by Nvidia chief executive Jensen Huang, who said every country should treat AI as essential infrastructure, comparable to electricity grids and transport networks. He argued that building AI data centres could drive demand for well-paid skilled trades, from electricians and plumbers to network engineers, framing the boom as a major job creator rather than a trigger for widespread job losses.

At the same time, the paper warns that physical constraints could slow expansion, including the availability of land, energy and water, as well as shortages of highly skilled workers. It also notes that local regulation can delay projects, although some industry groups argue that regulatory and cost pressures may push countries to innovate sooner in efficiency and greener data-centre design.

In the UK, industry body UKAI says high energy prices, limited grid capacity, complex planning rules and public scrutiny already create the same hurdles many other countries may soon face. It argues these constraints are helping drive improvements in efficiency, system design and coordination, seen as building blocks for more sustainable AI infrastructure.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tata’s $11 billion Innovation City plan gains global visibility at Davos

Tata Sons plans to invest $11 billion to build a large ‘Innovation City’ near the upcoming Navi Mumbai International Airport, according to Maharashtra Chief Minister Devendra Fadnavis, speaking at the World Economic Forum (WEF) in Davos. He said the project has drawn strong interest from international investors and will include major infrastructure upgrades alongside a data centre.

Fadnavis said the aim is to turn Mumbai and its wider region into a global, ‘plug-and-play’ innovation hub where companies can quickly set up and scale new technologies. He described the initiative as the first of its kind in India and said work is expected to begin within six to eight months.

The location next to the Adani Group–developed Navi Mumbai Airport is being positioned as an advantage, linking global connectivity with the high-tech industry. The project also reflects a broader global rush to expand data centres as companies roll out AI services, with firms such as Microsoft, Alphabet, and Amazon investing heavily in new capacity worldwide.

Maharashtra, which contributes more than 10% of India’s GDP and hosts the country’s financial capital, is also pushing a wider infrastructure drive, including a $30 billion plan to upgrade Mumbai. State leaders have framed these investments as part of an effort to boost growth and respond to economic pressures, including unemployment.

The Innovation City is expected to support India’s ambitions in AI and semiconductors, with national officials pointing to a public-private partnership approach rather than leaving development solely to big tech companies. Alongside this, the state is exploring energy innovation, including potential collaborations on small modular nuclear reactors, following recent legislative support for smaller-scale nuclear projects.

Taken together, the plan is being presented as a bid to attract global investment, accelerate high-tech development, and strengthen India’s role in emerging industrial and technology shifts centred on AI, advanced manufacturing, and digital infrastructure.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanoid robots and AI take centre stage as Musk joins Davos 2026

Elon Musk made his first appearance at the World Economic Forum in Davos despite years of public criticism of the gathering, arguing that AI and robotics represent the only realistic route to global abundance.

Speaking alongside BlackRock chief executive Larry Fink, Musk framed robotics as a civilisational shift rather than a niche innovation, claiming widespread automation will raise living standards and reshape economic growth.

Musk predicted a future where robots outnumber humans, with humanoid systems embedded across industry, healthcare and domestic life.

He highlighted elder care as a key use case in ageing societies facing labour shortages, suggesting that robotics could compensate for demographic decline rather than relying solely on migration or extended working lives.

Tesla’s Optimus humanoid robots are already performing simple factory tasks, with more complex functions expected within a year.

Musk indicated public sales could begin by 2027 once reliability thresholds are met. He also argued autonomous driving is largely resolved, pointing to expanding robotaxi deployments in the US and imminent regulatory decisions in Europe and China.

The global market for humanoid robotics remains relatively small, but analysts expect rapid expansion as AI capabilities improve and costs fall.

Musk at Davos 2026 presented robotics as an engine for economic acceleration, suggesting ubiquitous automation could unlock productivity gains on a scale comparable to past industrial revolutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI expands healthcare access in Africa

Healthcare in Africa is set to benefit from AI through a new initiative by the Gates Foundation and OpenAI. Horizon1000 aims to expand AI-powered support across 1,000 primary care clinics in Rwanda by 2028.

Severe shortages of health workers in Sub-Saharan Africa have limited access to quality care, with the region facing a shortfall of nearly six million professionals. AI tools will assist doctors and nurses by handling administrative tasks and providing clinical guidance.

Rwanda has launched an AI Health Intelligence Centre to utilise limited resources better and improve patient outcomes. The initiative will deploy AI in communities and homes, ensuring support reaches beyond clinic walls.

Experts believe AI represents a major medical breakthrough, comparable to vaccines and antibiotics. By helping health workers focus on patient care, the technology could reduce preventable deaths and transform health systems across low- and middle-income countries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Advanced Linux malware framework VoidLink likely built with AI

Security researchers from Check Point have uncovered VoidLink, an advanced, modular Linux malware framework developed predominantly with AI assistance, likely by a single individual rather than a well-resourced threat group.

VoidLink’s development process, exposed due to the developer’s operational security (OPSEC) failures, indicates that AI models were used not just for parts of the code but to orchestrate the entire project plan, documentation and implementation.

According to analysts, the malware framework reached a functional state in under a week with more than 88,000 lines of code, compressing what would traditionally take weeks or months into days.

While no confirmed in-the-wild attacks have yet been reported, researchers caution that the advent of AI-assisted malware represents a significant cybersecurity shift, lowering the barrier to creating sophisticated threats and potentially enabling widespread future misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!