Engineers at the University of Pennsylvania have found that foams, long assumed to behave like static glass, remain in constant internal motion while preserving their outward form.
Computer simulations revealed that bubbles in wet foams continue shifting through many configurations instead of settling into fixed positions.
Researchers observed that this behaviour closely mirrors the mathematics behind deep learning, where AI systems repeatedly adjust internal parameters during training. Instead of converging on a single optimal state, both foams and AI models operate within broad solution spaces that allow flexibility and resilience.
The study challenges earlier theories that treated foam bubbles as particles trapped in low-energy states. A revised mathematical approach shows that continuous reorganisation offers stability at a larger scale, rather than undermining structural integrity.
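The parallel with deep learning can be made concrete with a toy calculation (invented here for illustration, not taken from the study): noisy gradient descent on a loss with a whole valley of equally good minima keeps drifting between parameter configurations while the loss, the analogue of the foam's outward form, stays essentially unchanged.

```python
import numpy as np

# Toy illustration (not the study's model): a loss with a line of minima,
# f(x, y) = (x + y - 1)^2.  Every point on x + y = 1 is equally "good",
# much like the many equivalent bubble configurations in a wet foam.
def loss(p):
    return (p[0] + p[1] - 1.0) ** 2

def grad(p):
    g = 2.0 * (p[0] + p[1] - 1.0)
    return np.array([g, g])

rng = np.random.default_rng(0)
p = np.array([3.0, -1.0])          # start far from the valley
lr, noise = 0.1, 0.05

for step in range(2001):
    p = p - lr * grad(p) + noise * rng.normal(size=2)
    if step % 500 == 0:
        print(f"step {step:4d}  params={p.round(2)}  loss={loss(p):.4f}")

# The parameters keep wandering along the valley while the loss stays near
# zero: the internal configuration changes, the outward "form" does not.
```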
The findings suggest that learning-like dynamics may represent a broader organising principle across materials science, biology and computation.
Researchers believe the insight could inform the design of adaptive materials and improve understanding of dynamic biological structures such as cellular scaffolding.
AI is no longer confined to chatbots and content tools. In the food and beverage sector, companies are using advanced AI systems to forecast consumer trends, speed up product development, and identify new ingredients for future products.
Mars, the multinational behind brands such as Dolmio, Pedigree, and Mars bars, is using AI to support its health and sustainability goals. Darren Logan, vice president of research at the Mars Advanced Research Institute, said the company is exploring plant compounds and alternative proteins.
Fermentation is also expanding Mars’ ingredient research by generating new chemical compounds through interactions between plants and microbes. Logan said combining plants with microbes increases chemical diversity, producing substances that would not otherwise exist.
The chocolate manufacturer partnered with UC Davis spin-out PIPA and its AI research platform LEAP to support this work. The system builds knowledge graphs from scientific literature, databases, and the company’s proprietary data to map connections between ingredients, microbes, and human health.
Logan said the platform helps reduce the time and cost of experimentation by guiding researchers towards more promising test options. Human oversight remains central to every AI-assisted decision.
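As a rough illustration of the knowledge-graph idea (a hypothetical miniature, not PIPA's LEAP platform or Mars data; every entity and relation below is invented), a graph library such as networkx can link ingredients, microbes and health effects and then surface candidate connections worth testing:

```python
import networkx as nx

# Hypothetical miniature knowledge graph; entities and relations are
# invented for illustration and do not come from LEAP or Mars.
G = nx.DiGraph()
G.add_edge("pea protein", "Lactobacillus plantarum", relation="fermented_by")
G.add_edge("Lactobacillus plantarum", "compound X", relation="produces")
G.add_edge("compound X", "gut health", relation="associated_with")
G.add_edge("oat fibre", "gut health", relation="associated_with")

# Walk ingredient -> microbe -> compound -> health-effect paths to surface
# combinations that might be worth testing in the lab.
for path in nx.all_simple_paths(G, "pea protein", "gut health"):
    print(" -> ".join(path))
```

In practice, the value of such a graph comes from scale and curation; ranking which paths to test first is where the AI guidance described above would come in.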
Scientists have developed a radar-based sensor that detects irregular heart rhythms without physical contact. The system uses radio waves and AI to identify atrial fibrillation, enabling earlier detection.
The technology was tested on more than 6,200 patients during routine heart checks. Results showed accuracy comparable to standard electrocardiogram tests, demonstrating its potential for clinical use.
Trials during sleep revealed that the system could detect hidden heart rhythm issues even when patients were at rest. Many episodes of atrial fibrillation go unnoticed at night, so this could improve early intervention.
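The published system's pipeline is not detailed here, but the underlying signal idea can be sketched: atrial fibrillation shows up as irregular beat-to-beat intervals, so a simple, hypothetical screening rule flags a recording whose interval variability exceeds a threshold. A real radar-plus-AI classifier would be far more sophisticated.

```python
import numpy as np

def flag_irregular_rhythm(beat_intervals_s, cv_threshold=0.10):
    """Hypothetical screening rule, not the researchers' model: flag a
    recording if the coefficient of variation of beat-to-beat intervals
    (derived from any heartbeat sensor, e.g. radar) exceeds a threshold."""
    intervals = np.asarray(beat_intervals_s, dtype=float)
    cv = intervals.std() / intervals.mean()
    return cv > cv_threshold, cv

# Synthetic examples: a steady ~75 bpm rhythm vs. an erratic one.
rng = np.random.default_rng(1)
regular = 0.80 + 0.01 * rng.normal(size=60)
irregular = rng.uniform(0.45, 1.10, size=60)

for name, beats in [("regular", regular), ("irregular", irregular)]:
    flagged, cv = flag_irregular_rhythm(beats)
    print(f"{name}: cv={cv:.3f} flagged={flagged}")
```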
Further studies will examine how the system performs in everyday life. Researchers hope these tests will show whether the technology can be used reliably outside clinics to monitor heart health.
Concern is growing in Guernsey, a British Crown Dependency, over the number of people seeking help after becoming victims of AI-generated intimate deepfakes. Support services on the island report a steady increase in cases.
Existing law criminalises sharing intimate images without consent, but AI-generated creations remain legal. Proposed reforms aim to close this gap and strengthen victim protection.
Police and support charities warn that deepfakes cause severe emotional harm and are challenging to prosecute. Cross-border platforms and anonymous perpetrators complicate enforcement and reporting.
The robotics company Agibot has launched a series of Asia-Pacific strategic initiatives for 2026 with a high-profile event in Malaysia, signalling its push to expand embodied AI and robotics across the region.
The launch, held at i-City in Selangor, was attended by executives, Malaysian government officials, partners, and customers. It also marked the opening of the first AI and Robotics Experience Centre in Malaysia.
The centre was developed in partnership with I-Bhd and officiated by Science, Technology and Innovation Minister Chang Lih Kang. Agibot said the facility will showcase real-world applications of humanoid robotics.
Founder and CEO of Agibot, Deng Taihua, said the company produced its 5,000th humanoid robot in 2025, strengthening its position as it begins regional expansion in 2026.
The firm plans to deploy its systems across property, hospitality, tourism, and urban services, while its partnership with I-Bhd will focus on wellness, longevity, and residential robotics.
Rising use of AI is transforming cyberattacks in the UAE, enabling deepfakes, automated phishing and rapid data theft. Expanding digital services increase exposure for businesses and residents.
Criminals deploy autonomous AI tools to scan networks, exploit weaknesses and steal information faster than human attackers could. Shorter detection windows raise the risk of breaches, disruption and financial loss.
High-value sectors such as government, finance and healthcare face sustained targeting amid skills shortages. Protection relies on cautious users, stronger governance and secure-by-design systems across smart infrastructure.
UK lawmaker Jess Asato said an AI-altered image depicting her in a bikini circulated online. The incident follows wider reports of sexualised deepfake abuse targeting women on social media.
Platforms hosted thousands of comments, including further manipulated images, heightening distress. Victims describe the content as realistic, dehumanising and violating personal consent.
UK government ministers have pledged to ban nudification tools and criminalise non-consensual intimate images. Technology firms face pressure to remove content, suspend accounts, and follow Ofcom guidance to maintain a safe online environment.
A Grok-powered AI support tool has been added to Starlink’s website, expanding automated help for broadband users. The chatbot builds on a similar service already available through the company’s mobile app.
Users can access the chatbot via the checkout support page, receiving a link by email. Responses are limited to Starlink services and usually appear within several seconds.
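Starlink has not published how the scope restriction is enforced, but a common pattern, shown here as a hypothetical sketch rather than the company's actual implementation, is to pair a system prompt that pins the assistant to one product domain with a lightweight pre-filter that declines off-topic questions:

```python
# Hypothetical sketch of a domain-restricted support bot; this is a common
# pattern, not Starlink's published implementation.
SYSTEM_PROMPT = (
    "You are a support assistant. Answer only questions about Starlink "
    "hardware, billing, and connectivity. Politely decline anything else."
)

ON_TOPIC_KEYWORDS = {"starlink", "dish", "router", "outage", "billing",
                     "subscription", "latency", "obstruction"}

def route_question(question: str) -> str:
    """Cheap pre-filter before any model call: decline clearly off-topic
    questions without spending tokens on them."""
    words = set(question.lower().split())
    if words & ON_TOPIC_KEYWORDS:
        # In a real deployment this would call the LLM with SYSTEM_PROMPT
        # plus the user question; here we just show the routing decision.
        return "route to model with SYSTEM_PROMPT"
    return "Sorry, I can only help with Starlink service questions."

print(route_question("My Starlink dish shows an obstruction warning"))
print(route_question("What's a good pasta recipe?"))
```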
The system is designed to streamline support for millions of users worldwide, including rural UK customers. Public opinion remains divided over the growing reliance on AI instead of human support staff.
European Commission Executive Vice President Teresa Ribera has stated that the EU has a constitutional obligation under its treaties to uphold its digital rulebook, including the Digital Markets Act (DMA).
Speaking at a competition law conference, Ribera framed enforcement as a duty to protect fair competition and market balance across the bloc.
Her comments arrive amid growing criticism from US technology companies and political pressure from Washington, where enforcement of EU digital rules has been portrayed as discriminatory towards American firms.
Several designated gatekeepers have argued that the DMA restricts innovation and challenges existing business models.
Ribera acknowledged the right of companies to challenge enforcement through the courts, while emphasising that designation decisions are based on lengthy and open consultation processes. The Commission, she said, remains committed to applying the law effectively rather than retreating under external pressure.
Apple and Meta have already announced plans to appeal fines imposed in 2025 for alleged breaches of DMA obligations, reinforcing expectations that legal disputes around EU digital regulation will continue in parallel with enforcement efforts.
The US Department of Defence plans to integrate Elon Musk’s AI tool Grok into Pentagon networks later in January, according to Defence Secretary Pete Hegseth.
The system is expected to operate across both classified and unclassified military environments as part of a broader push to expand AI capabilities.
Hegseth also outlined an AI acceleration strategy designed to increase experimentation, reduce administrative barriers and prioritise investment across defence technology.
The approach also aims to improve access to data across federated IT systems, aligning with official views that military AI performance relies on data availability and interoperability.
The move follows earlier decisions by the Pentagon to adopt Google’s Gemini for an internal AI platform and to award large contracts to Anthropic, OpenAI, Google and xAI for agentic AI development.
Officials describe these efforts as part of a long-term strategy to strengthen US military competitiveness in AI.
Grok’s integration comes amid ongoing controversy, including criticism over generated imagery and previous incidents involving extremist and offensive content. Several governments and regulators have already taken action against the tool, adding scrutiny to its expanded role within defence systems.