AI advances turn sweat into a new health signal

Researchers in Australia are examining how sweat could support new forms of health monitoring. A recent study highlights its diagnostic potential when combined with machine learning, noting the appeal of simple, non-invasive collection for people already using wearables.

Early hydration patches show how sweat data is entering the sports and fitness space. Advances in microfluidics and flexible electronics have enabled thin, real-time sweat-sampling patches. Researchers at the University of Technology Sydney (UTS) say AI can extract useful patterns from sweat biomarkers and deliver personalised insights for everyday tracking.

Experts say sweat remains underused despite carrying biological signals relevant to preventive care. UTS scientists point to gains from reading multiple biomarkers and sending data wirelessly for assessment. Improvements in pattern recognition now support more accurate interpretation.

Development work in Sydney, Australia, includes microfluidic devices that detect trace levels of glucose and cortisol. Most systems remain prototypes, yet commercial interest is increasing as companies explore non-invasive alternatives to blood-based testing.
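
As a purely illustrative sketch of the pattern-recognition step described above (not taken from the UTS research), a classifier could map multi-biomarker patch readings, such as the glucose and cortisol levels these devices target, to a simple status label. All feature names, ranges and labels below are hypothetical.

```python
# Hypothetical sketch: a classifier maps multi-biomarker sweat readings to a
# simple status label. Values and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulated patch readings: [sodium (mM), glucose (uM), cortisol (ng/mL), lactate (mM)]
X = rng.normal(loc=[60, 150, 12, 14], scale=[15, 40, 4, 5], size=(200, 4))
# Toy labels: 1 = "flag for follow-up", 0 = "within expected range"
y = ((X[:, 0] > 70) & (X[:, 2] > 14)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new reading streamed wirelessly from the wearable patch
new_reading = np.array([[78.0, 160.0, 16.0, 15.0]])
print(model.predict(new_reading), model.predict_proba(new_reading))
```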

The research team expects broader adoption as sensor accuracy improves. They anticipate wearables that monitor stress markers and help identify chronic conditions earlier, framing skin-based sensing combined with AI as a route to wider access to continuous health insights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches scholarship to develop future AI leaders

The UAE unveiled a scholarship programme to nurture future leaders in AI at MBZUAI. The initiative, guided by Sheikh Tahnoon bin Zayed, targets outstanding undergraduates beginning in the 2025 academic year.

Approximately 350 students will be supported over six years following a rigorous selection process. Applicants will be assessed for mathematical strength, leadership potential and entrepreneurial drive in line with national technological ambitions.

Scholars will gain financial backing alongside opportunities to represent the UAE internationally and develop innovative ventures. Senior officials said the programme strengthens the nation’s aim to build a world-class cohort of AI specialists.

MBZUAI highlighted its interdisciplinary approach that blends technical study with ethics, leadership and business education. Students will have access to advanced facilities, industry placements, and mentorships designed to prepare them for global technology roles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Pope urges guidance for youth in an AI-shaped world

Pope Leo XIV urged global institutions to guide younger generations as they navigate the expanding influence of AI. He warned that rapid access to information cannot replace the deeper search for meaning and purpose.

Previously, the Pope had warned students not to rely solely on AI for educational support. He encouraged educators and leaders to help young people develop discernment and confidence when encountering digital systems.

Additionally, he called for coordinated action across politics, business, academia and faith communities to steer technological progress toward the common good. He argued that AI development should not be treated as an inevitable pathway shaped by narrow interests.

He noted that AI reshapes human relationships and cognition, raising concerns about its effects on freedom, creativity and contemplation. He insisted that safeguarding human dignity is essential to managing AI’s wide-ranging consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

GenAI gains ground as manufacturers overhaul shop-floor workflows

AI adoption in manufacturing is accelerating as generative tools are reshaping frontline roles. Many firms see connected worker platforms as a response to labour shortages and a draw for younger recruits. GenAI is emerging as a support layer that boosts productivity without displacing staff.

Operators face mixed training needs, language gaps and stricter safety demands. GenAI supports tailored instructions and smoother knowledge transfer, cutting documentation effort.

Retrieval is becoming more critical as factories digitise. Frontline teams need fast access to clear guidance across text, image and video formats. AI-enabled search interprets intent, reducing delays caused by navigating large content libraries.
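
As a rough illustration of that retrieval step (not any specific vendor's platform), the sketch below ranks a handful of shop-floor documents against a worker's free-text query. Production systems would typically rely on learned embeddings and multimodal text/image/video indexes; plain TF-IDF here just shows the ranking idea.

```python
# Toy retrieval sketch: rank shop-floor documents against a worker's query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Lockout-tagout procedure for conveyor belt maintenance",
    "Troubleshooting guide: CNC spindle overheating alarms",
    "Daily forklift pre-shift safety inspection checklist",
]
query = "the machine is overheating, what should I check?"

vectorizer = TfidfVectorizer(stop_words="english").fit(docs + [query])
scores = cosine_similarity(
    vectorizer.transform([query]), vectorizer.transform(docs)
)[0]
best = scores.argmax()
print(f"Top match (score {scores[best]:.2f}): {docs[best]}")
```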

Video-based guidance is rising in prominence as short-form media becomes a preferred way for younger workers to learn. AI can convert lengthy procedures into concise visual steps, while multilingual transcription expands accessibility for diverse teams across global operations.

The growing use of AI tools marks a shift toward more adaptive factory operations. Manufacturers view connected worker platforms as vital to competitiveness, with AI integration offering gains in engagement, safety and performance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ireland and Australia deepen cooperation on online safety

Ireland’s online safety regulator has agreed a new partnership with Australia’s eSafety Commissioner to strengthen global approaches to digital harm. The Memorandum of Understanding (MoU) reinforces shared ambitions to improve online protection for children and adults.

The Irish and Australian regulators plan to exchange data, expertise and methodological insights to advance safer digital platforms. Officials describe the arrangement as a way to enhance oversight of systems used to minimise harmful content and promote responsible design.

Leaders from both organisations emphasised the need for accountability across the tech sector. Their comments highlighted efforts to ensure that platforms embed user protection into their product architecture, rather than relying solely on reactive enforcement.

The MoU also opens avenues for collaborative policy development and joint work on education programs. Officials expect a deeper alignment around age assurance technologies and emerging regulatory challenges as online risks continue to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK government confirms crypto as protected personal property

A significant shift in property law has occurred in the United Kingdom, as digital assets are gaining formal recognition as personal property.

The Property (Digital Assets etc) Act 2025 has received Royal Assent, giving owners of cryptocurrency and non-fungible tokens clearer legal rights and stronger protection. Greater certainty over ownership aims to reduce disputes and strengthen trust in the sector.

The government aims to boost the country’s position as a global centre for legal innovation, rather than merely reacting to technological change. The new framework reassures fintech companies that England, Wales and Northern Ireland can support modern commercial activity.

As part of a wider growth plan, the change is expected to stimulate further investment in a legal services industry worth more than £40 billion annually.

Traditional law recognised only tangible items and legal rights, yet digital assets required distinct treatment.

The Act creates a new category, allowing certain digital assets to be treated like other property, including being inherited or recovered during bankruptcy. With cryptocurrency fraud on the rise, owners now have a more straightforward path to remedy when digital assets are stolen.

Legal certainty also simplifies commercial activity for firms handling crypto transactions. The move aligns digital assets with established forms of property rather than leaving them in an undefined space, which encourages adoption and reduces the likelihood of costly disagreements.

The government expects the new clarity to attract more businesses to the UK and reinforce the country’s role in shaping future digital regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSA warns AI poses new risks for operational technology

The US National Security Agency (NSA), together with international partners including Australia’s ACSC, has issued guidance on the secure integration of AI into operational technology (OT).

The Principles for the Secure Integration of AI in OT warn that while AI can optimise critical infrastructure, it also introduces new risks for safety-critical environments. Although aimed at OT administrators, the guidance also highlights issues relevant to IT networks.

AI is increasingly deployed in sectors such as energy, water treatment, healthcare, and manufacturing to automate processes and enhance efficiency.

The NSA’s guidance, however, flags several potential threats, including adversarial prompt injection, data poisoning, AI drift, and reduced explainability, all of which can compromise safety and compliance.

Over-reliance on AI may also lead to human de-skilling, cognitive overload, and distraction, while AI hallucinations raise concerns about reliability in safety-critical settings.

Experts emphasise that AI cannot currently be trusted to make independent safety decisions in OT networks, where the margin for error is far smaller than in standard IT systems.
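
A minimal sketch of keeping humans in that loop, assuming a hypothetical temperature setpoint and safety range not drawn from the guidance: the AI suggestion is bounds-checked and then held for explicit operator approval rather than applied automatically.

```python
# Minimal illustration (not from the NSA guidance): an AI-suggested setpoint
# change is bounds-checked against hypothetical engineering limits and then
# held for explicit operator approval instead of being applied automatically.
SAFE_RANGE_C = (40.0, 80.0)  # hypothetical safe temperature band

def apply_setpoint(value: float) -> None:
    # Stand-in for the real control-system write
    print(f"Setpoint applied: {value:.1f} C")

def review_ai_suggestion(current: float, suggested: float) -> None:
    low, high = SAFE_RANGE_C
    if not low <= suggested <= high:
        print(f"Rejected: {suggested:.1f} C is outside the safe range {SAFE_RANGE_C}")
        return
    answer = input(f"AI suggests {current:.1f} -> {suggested:.1f} C. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        apply_setpoint(suggested)
    else:
        print("Suggestion logged, not applied.")

review_ai_suggestion(current=62.0, suggested=71.5)
```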

Sam Maesschalck, an OT engineer, noted that introducing AI without first addressing pre-existing infrastructure issues, such as insufficient data feeds or incomplete asset inventories, could undermine both security and operational efficiency.

The guidance aims to help organisations evaluate AI risks, clarify accountability, and prepare for potential misbehaviour, underlining the importance of careful planning before deploying AI in operationally critical environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google drives health innovation through new EU AI initiative

At the European Health Summit in Brussels, Google presented new research suggesting that AI could help Europe overcome rising healthcare pressures.

The report, prepared by Implement Consulting Group for Google, argues that scientific productivity is improving again, rather than continuing a long period of stagnation. Early results already show shorter waiting times in emergency departments, offering practitioners more space to focus on patient needs.

Momentum at the Summit increased as Google announced new support for AI adoption in frontline care.

Five million dollars from Google.org will fund Bayes Impact to launch an EU-wide initiative known as ‘Impulse Healthcare’. The programme will allow nurses, doctors and administrators to design and test their own AI tools through an open-source platform.

By placing development in the hands of practitioners, the project aims to expand ideas that help staff reclaim valuable time during periods of growing demand.

Successful tools developed at a local level will be scaled across the EU, providing a path to more efficient workflows and enhanced patient care.

Google views these efforts as part of a broader push to rebuild capacity in Europe’s health systems.

AI-assisted solutions may reduce administrative burdens, support strained workforces and guide decisions through faster, data-driven insights, strengthening everyday clinical practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ESMA could gain direct supervision over crypto firms

The European Commission has proposed giving the European Securities and Markets Authority (ESMA) expanded powers to oversee crypto and broader financial markets, aiming to close the regulatory gap with the United States.

The plan would give ESMA direct supervision of crypto service providers, trading venues, and central counterparties, while boosting its role in asset management coordination. Approval from the European Parliament and the Council is still required.

Calls for stronger oversight have grown following concerns over lenient national regimes, including Malta’s crypto licensing system. France, Austria, and Italy have called for ESMA to directly oversee major crypto firms, with France threatening to block cross-border licence passporting.

Revisions to the Markets in Crypto-Assets Regulation (MiCA) are also under discussion, with proposals for stricter rules on offshore crypto activities, improved cybersecurity oversight, and tighter regulations for token offerings.

Experts warn that centralising ESMA supervision may slow innovation, especially for smaller crypto and fintech startups reliant on national regulators. ESMA would need significant resources for the expanded mandate, which could slow decision-making across the EU.

The proposal aims to boost EU capital market competitiveness and increase wealth for citizens. The market capitalisation of EU stock exchanges currently amounts to just 73% of the bloc’s GDP, compared with around 270% in the US, highlighting the need for a more integrated regulatory framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.
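
To make the diminishing-returns point concrete, published scaling-law studies (not Kurbalija’s article) describe model loss falling only as a small power of training compute; a rough illustrative form is shown below.

```latex
% Illustrative compute scaling law; the exponent is indicative of published
% estimates (on the order of 0.05), not a figure from Kurbalija's analysis.
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \ll 1,
\quad\text{so that } \frac{L(10C)}{L(C)} = 10^{-\alpha} \approx 0.89
\text{ when } \alpha = 0.05 .
```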

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops similar to those deployed during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!