AI tool uses walking patterns to detect early signs of dementia

Fujitsu and Acer Medical are trialling an AI-powered tool to help identify early signs of dementia and Parkinson’s disease by analysing patients’ walking patterns. The system, called aiGait and powered by Fujitsu’s Uvance skeleton recognition technology, converts routine movements into health data.

Initial tests are taking place at a daycare centre linked to Taipei Veterans Hospital, using tablets and smartphones to record basic patient movements. The AI compares this footage with known movement patterns associated with neurodegenerative conditions, helping caregivers detect subtle abnormalities.

The tool is designed to support early intervention, with abnormal results prompting follow-up by healthcare professionals. Acer Medical plans to expand the service to elderly care centres across Taiwan by the end of the year.

Fujitsu’s AI was originally developed for gymnastics scoring and adapted to analyse real-world gait data with high accuracy using everyday mobile devices. Both companies hope to extend the technology’s use to paediatrics, sports science, and rehabilitation in future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Thinking Machines Lab raises $2bn to build safer AI

Thinking Machines Lab, an AI startup founded earlier this year by former OpenAI executive Mira Murati, has raised $2 billion in new funding. The round, which values the company at $12 billion, was led by Andreessen Horowitz and backed by Nvidia, Cisco, AMD, and others.

The company aims to develop safer and more reliable AI systems by focusing on how people naturally interact with the world, including speech and vision. Its first product, due in the coming months, will offer open-source components designed to support researchers and startups.

At launch, nearly two-thirds of the team had previously worked at OpenAI, underscoring the company’s ambition to lead in the field of frontier AI. Murati said the startup plans to make its science publicly available to support understanding and transparency.

The investment comes amid a surge in AI-related funding, which accounted for over 64% of all US startup deal value in the first half of 2025. Growing interest in generative and multimodal AI continues to attract major capital despite wider concerns over tech sector spending.

AI Appreciation Day highlights progress and growing concerns

AI is marking another milestone as experts worldwide reflect on its rapid rise during AI Appreciation Day. From reshaping business workflows to transforming customer experiences, AI’s presence is expanding — but so are concerns over its long-term implications.

Industry leaders point to AI’s growing role across sectors. Patrick Harrington from MetaRouter highlights that control over first-party data is now seen as more important than simply processing large datasets.

Vall Herard of Saifr adds that successful AI implementations depend on combining curated data with human oversight rather than relying purely on machine-driven systems.

Meanwhile, Paula Felstead from HBX Group believes AI could significantly enhance travel experiences, though scaling it across entire organisations remains a challenge.

Voice AI is changing industries that depend on customer interaction, according to Natalie Rutgers from Deepgram. Instead of complex interfaces, voice technology is improving communication in restaurants, hospitals, and banks.

At the same time, experts like Ivan Novikov from Wallarm stress the importance of securing AI systems and the APIs connecting them, as these form the backbone of modern AI services.

While some celebrate AI’s advances, others raise caution. SentinelOne’s Ezzeldin Hussein envisions AI becoming a trusted partner through responsible development rather than unchecked growth.

Naomi Buckwalter from Contrast Security warns that AI-generated code can introduce security gaps if treated as a full replacement for human engineering, while Geoff Burke from Object First notes that AI-powered cyberattacks are becoming inevitable for businesses unable to keep pace with evolving threats.

Meta unveils 5GW AI data centre plans

Meta has unveiled plans to build a 5GW data centre in Louisiana, part of a significant expansion of its AI infrastructure. CEO Mark Zuckerberg said the Hyperion complex will cover an area nearly the size of Manhattan, with the first 1.5GW phase expected online in 2026.

The company is also constructing a 1GW cluster named Prometheus in Ohio, US, which combines Meta-owned infrastructure with leased systems. Both projects will use a mix of renewable and natural gas power, underlining Meta’s strategy to ramp up compute capacity rapidly.

Zuckerberg stated Meta would invest hundreds of billions of dollars into superintelligence development, supported by elite talent recruited from major rivals. He added that the new data centres would offer the highest compute-per-researcher in the industry.

Amidst growing demand, Meta recently sought $29 billion in financing and secured 1GW of renewable power. Yet the expansion has raised environmental concerns, with one data centre in Georgia reportedly consuming 10% of a county’s water supply.

OpenAI economist shares four key skills for kids in AI era

As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to help them adapt and thrive.

Instead of relying solely on technology, he believes critical thinking, adaptability, emotional intelligence, and financial numeracy will remain essential.

Chatterji highlighted these skills during an episode of the OpenAI podcast, saying critical thinking helps children spot problems rather than follow instructions. Given constant changes in AI, climate, and geopolitics, he stressed adaptability as another priority.

Rather than expecting children to master coding alone, Chatterji argues that emotional intelligence will make humans valuable partners alongside AI.

The fourth skill he emphasises is financial numeracy, including understanding maths without calculators and maintaining writing skills even with dictation software available. Instead of predicting specific future job titles, Chatterji believes focusing on these abilities equips children for any outcome.

His approach reflects a broader trend among tech leaders, with others like Alexis Ohanian and Sam Altman also promoting AI literacy while valuing traditional skills such as reading, writing, and arithmetic.

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness, and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely than men to follow the rules. Where AI was explicitly permitted, usage jumped, with over 80% of both women and men reporting use of the tools.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample drawn from a range of educational and professional backgrounds. Participants were 45 years old on average, and over half identified as women.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

Claude integrates Canva to power AI-first workflows

Claude AI has introduced integration with Canva, enabling users to generate and manage design content using simple text prompts. The new feature allows paid users to create presentations, edit visuals, and explore templates directly within Claude’s chat interface.

Alongside Canva, Claude now supports additional connectors such as Notion and Stripe, as well as desktop apps like Figma and Prisma, expanding its ability to fetch and process data contextually. These integrations are powered by the open-source Model Context Protocol (MCP).

Canva’s head of ecosystem highlighted that users can now generate, summarise, and publish designs in one continuous workflow within Claude. The move represents another step toward AI-first productivity, removing the need for manual app-switching during the creative process.

Claude is the first AI assistant to enable Canva workflows through MCP, following recent partnerships with tools like Figma. A new integrations directory has also launched, helping users discover compatible apps for both web and desktop experiences.

Oracle commits billions to expand AI infrastructure in Europe

Oracle has confirmed a $3 billion investment in its AI and cloud infrastructure across Germany and the Netherlands over the next five years. The move aims to boost its capacity in Europe as demand for advanced computing services continues to rise.

The company plans to invest $2 billion in Germany and $1 billion in the Netherlands, joining other major tech firms ramping up data centre infrastructure. Oracle’s strategy reflects broader market trends, with companies like Meta and Amazon committing large sums to meet AI-driven cloud needs.

The firm expects capital expenditure to exceed $25 billion in fiscal 2026, primarily focused on expanding data centre capabilities for AI. Analysts say Oracle’s AI and cloud services are increasingly competitive with traditional software, fuelling its strong performance this year.

Oracle shares have climbed nearly 38% since January, with a recent regulatory filing revealing a future deal worth over $30 billion in annual revenue beginning in 2028. The company sees its growing infrastructure as key to accelerating revenue and profit.

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Center (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public access to government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries — such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’ — appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.
