Samsara turns operational data into real-world impact

Samsara has built a platform that helps companies with physical operations run more safely and efficiently. Founded in 2015 by MIT alumni John Bicket and Sanjit Biswas, the company connects workers, vehicles, and equipment through cloud-based analytics.

The platform combines sensors, AI cameras, GPS tracking, and real-time alerts to cut accidents, fuel use, and maintenance costs. Large companies across logistics, construction, manufacturing, and energy report cost savings and improved safety after adopting the system.

Samsara turns large volumes of operational data into actionable insights for frontline workers and managers. Tools like driver coaching, predictive maintenance, and route optimisation reduce risk at scale while recognising high-performing field workers.

The company is expanding its use of AI to manage weather risk, support sustainability, and enable the adoption of electric fleets, positioning data-driven decision-making as central to modernising critical infrastructure worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft launches Elevate for Educators programme

Elevate for Educators, launched by Microsoft, is a global programme designed to help teachers build the skills and confidence to use AI tools in the classroom. The initiative provides free access to training, credentials, and professional learning resources.

The programme connects educators to peer networks, self-paced courses, and AI-powered simulations. The aim is to support responsible AI adoption while improving teaching quality and classroom outcomes.

New educator credentials have been developed in partnership with ISTE and ASCD. Schools and education systems can also gain recognition for supporting professional development and demonstrating impact in classrooms.

AI-powered education tools within Microsoft 365 have been expanded to support lesson planning and personalised instruction. New features help teachers adapt materials to different learning needs and provide students with faster feedback.

College students will also receive free access to Microsoft 365 Premium and LinkedIn Premium Career for 12 months. The offer includes AI tools, productivity apps, and career resources to support future employment.

Technology is reshaping smoke alarm safety

Smoke alarms remain critical in preventing fatal house fires, according to fire safety officials. Real-life incidents show how early warnings can allow families to escape rapidly spreading blazes.

Modern fire risks are evolving, with lithium-ion batteries and e-bikes creating fast and unpredictable fires. These incidents can release toxic gases and escalate before flames are clearly visible.

Traditional smoke alarm technology continues to perform reliably despite changes in household risks. At the same time, intelligent and AI-based systems are being developed to detect danger sooner.

Reducing false alarms has become a priority, as nuisance alerts often lead people to turn off devices. Fire experts stress that a maintained, certified smoke alarm is far safer than no smoke alarm at all.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed, and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

South Korea establishes legal framework for tokenised securities

South Korea has approved legislation establishing a legal framework for issuing and trading tokenised securities. The amendments recognise blockchain-based securities as legally valid, with the rules taking effect in January 2027.

Eligible issuers can create tokenised debt and equity products using blockchain infrastructure, while brokerages and licensed intermediaries will facilitate trading.

Regulators aim to combine the efficiency of distributed ledgers with investor protections and expand the use of smart contracts, enabling previously restricted investments in real estate, art, or agriculture to reach a broader audience.
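The fractional-ownership mechanism these rules enable can be sketched in a toy example (all numbers below are hypothetical and not drawn from any Korean platform): an issuer divides an asset's value into many tokens, which lowers the minimum investment and lets small investors hold precise fractional stakes.

```python
# Toy illustration of fractional tokenisation; all figures are hypothetical.
asset_value = 1_000_000      # total value of an asset, e.g. a property
token_supply = 10_000        # tokens the issuer mints against the asset

# Each token represents 1/10,000 of the asset, so the minimum stake
# drops from the full asset price to the price of a single token.
price_per_token = asset_value / token_supply

def ownership_share(tokens_held: int) -> float:
    """Fraction of the underlying asset a token holder owns."""
    return tokens_held / token_supply

# An investor holding 50 tokens owns 0.5% of the asset.
investor_share = ownership_share(50)
```

In practice, smart contracts would also encode transfer restrictions and distributions, which is where the licensed intermediaries mentioned above come in.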

Implementation will be led by the Financial Services Commission, in collaboration with the Financial Supervisory Service, the Korea Securities Depository, and industry participants.

Consultation bodies will develop infrastructure such as ledger-based account management systems, while local firms, including Mirae Asset Securities and Hana Financial Group, are preparing platforms for the new rules.

Analysts project tokenised assets could reach $2 trillion globally by 2028, with South Korea’s market at $249 billion.

The legislation also complements South Korea’s efforts to regulate blockchain and curb cryptocurrency-related financial crime.

AI users spend 40% of saved time fixing errors

A recent study from Workday reveals that 40% of the time saved by AI in the workplace is spent correcting errors, highlighting a growing productivity paradox. Frequent AI users are bearing the brunt, often double- or triple-checking outputs to ensure accuracy.

Despite widespread adoption (87% of employees report using AI at least a few times per week, and 85% save one to seven hours weekly), much of that time is redirected to fixing low-quality results rather than achieving net productivity gains.
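The study's headline figures imply a simple back-of-the-envelope calculation. The sketch below uses the reported averages and assumes the 40% correction share applies uniformly to the hours saved (an assumption, not a claim from the study):

```python
# Back-of-the-envelope arithmetic from the study's reported figures.
# Assumption: 40% of saved time is spent correcting AI output.
correction_share = 0.40

def net_gain(hours_saved: float) -> float:
    """Weekly hours actually freed up after error correction."""
    return hours_saved * (1 - correction_share)

low = net_gain(1.0)   # ~0.6 hours/week at the low end of the range
high = net_gain(7.0)  # ~4.2 hours/week at the high end
```

On these assumptions, an employee who nominally saves seven hours a week keeps only around four of them, which is the "productivity paradox" the study describes.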

The findings suggest that AI can increase workloads rather than streamline operations if not implemented carefully.

Experts argue that AI should enhance human work rather than replace it. Employees need tools that handle complex tasks reliably, allowing teams to focus on creativity, judgment, and strategic decision-making.

Upskilling staff to manage AI effectively is critical to realising sustainable productivity benefits.

The study also highlights the risk of organisations prioritising speed over quality. Many AI tools shift responsibility for trust and accuracy onto employees, creating hidden costs and risks for decision-making.

EU allocates $356 million for AI and digital technologies

The European Commission has announced €307.3 million ($356 million) in new funding to advance digital technologies across the EU. The initiative aims to strengthen Europe’s innovation, competitiveness, and strategic digital autonomy.

A total of €221.8 million will support projects in AI, robotics, quantum technologies, photonics, and virtual worlds. One focus is the development of trustworthy AI services and innovative data solutions to enhance EU digital leadership.

More than €40 million has been allocated to the Open Internet Stack Initiative, which aims to advance end-user applications and core stack technologies, boosting European digital sovereignty. A second call of €85.5 million will target open strategic autonomy in emerging digital technologies and raw materials.

The funding is open to businesses, academic institutions, public administrations, and other entities from EU member states and partner countries. Priority areas include next-generation AI agents, industrial and service robotics, and new materials with enhanced sensing capabilities.

Regulators press on with Grok investigations in Britain and Canada

Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.

xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.

Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.

UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.

Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.

Japan and ASEAN agree to boost AI collaboration

Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to collaborate on developing new AI models and preparing related legislation. The cooperation was formalised in a joint statement at a digital ministers’ meeting in Hanoi on Thursday.

Proposed by Minister Hayashi, the initiative aims to boost regional AI capabilities amid US and Chinese competition. Japan emphasised its ongoing commitment to supporting ASEAN’s technological development.

The partnership follows last October’s Japan-ASEAN summit, where Prime Minister Takaichi called for joint research in semiconductors and AI. The agreement aims to foster closer innovation ties and regional collaboration in strategic technology sectors.

The collaboration will engage public and private stakeholders to promote research, knowledge exchange, and capacity-building across ASEAN. Officials expect the partnership to speed AI adoption while maintaining regional regulations and ethical standards.

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (2024/1689) and the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The two publications review how these regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.
