Samsara turns operational data into real-world impact

Samsara has built a platform that helps companies with physical operations run more safely and efficiently. Founded in 2015 by MIT alumni John Bicket and Sanjit Biswas, the company connects workers, vehicles, and equipment through cloud-based analytics.

The platform combines sensors, AI cameras, GPS tracking, and real-time alerts to cut accidents, fuel use, and maintenance costs. Large companies across logistics, construction, manufacturing, and energy report cost savings and improved safety after adopting the system.

Samsara turns large volumes of operational data into actionable insights for frontline workers and managers. Tools like driver coaching, predictive maintenance, and route optimisation reduce risk at scale while recognising high-performing field workers.

Samsara is expanding its use of AI to manage weather risk, support sustainability, and enable the adoption of electric fleets. It positions data-driven decision-making as central to modernising critical infrastructure worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Indian companies remain committed to AI spending

Almost all Indian companies plan to sustain AI spending even without near-term financial returns. A BCG survey shows 97 percent will keep investing, higher than the 94 percent global rate.

Corporate AI budgets in India are expected to rise to about 1.7 percent of revenue in 2026. Leaders see AI as a long-term strategic priority rather than a short-term cost.

Around 88 percent of Indian executives express confidence in AI generating positive business outcomes. That is above the global average of 82 percent, reflecting strong optimism among local decision-makers.

Despite enthusiasm, fewer Indian CEOs personally lead AI strategy than their global peers, and workforce AI skills lag international benchmarks. Analysts say talent and leadership alignment remain key as spending grows.

Matthew McConaughey moves decisively to protect AI likeness rights

Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.

Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.

McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.

The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.

Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.

EU allocates $356 million for AI and digital technologies

The European Commission has announced €307.3 million ($356 million) in new funding to advance digital technologies across the EU. The initiative aims to strengthen Europe’s innovation, competitiveness, and strategic digital autonomy.

A total of €221.8 million will support projects in AI, robotics, quantum technologies, photonics, and virtual worlds. One focus is the development of trustworthy AI services and innovative data solutions to enhance EU digital leadership.

More than €40 million has been allocated to the Open Internet Stack Initiative, which aims to advance end-user applications and core stack technologies, boosting European digital sovereignty. A second call of €85.5 million will target open strategic autonomy in emerging digital technologies and raw materials.

The funding is open to businesses, academic institutions, public administrations, and other entities from EU member states and partner countries. Priority areas include next-generation AI agents, industrial and service robotics, and new materials with enhanced sensing capabilities.

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real world risks, particularly when AI and children are concerned.

Regulators press on with Grok investigations in Britain and Canada

Britain and Canada are continuing regulatory probes into xAI’s Grok chatbot, signalling that official scrutiny will persist despite the company’s announcement of new safeguards. Authorities say concerns remain over the system’s ability to generate explicit and non-consensual images.

xAI said it had updated Grok to block edits that place real people in revealing clothing and restricted image generation in jurisdictions where such content is illegal. The company did not specify which regions are affected by the new limits.

Reuters testing found Grok was still capable of producing sexualised images, including in Britain. Social media platform X and xAI did not respond to questions about how effective the changes have been.

UK regulator Ofcom said its investigation remains ongoing, despite welcoming xAI’s announcement. A privacy watchdog in Canada also confirmed it is expanding an existing probe into both X and xAI.

Pressure is growing internationally, with countries including France, India, and the Philippines raising concerns. British Technology Secretary Liz Kendall said the Online Safety Act gives the government tools to hold platforms accountable for harmful content.

Japan and ASEAN agree to boost AI collaboration

Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to collaborate on developing new AI models and preparing related legislation. The cooperation was formalised in a joint statement at a digital ministers’ meeting in Hanoi on Thursday.

Proposed by Minister Hayashi, the initiative aims to boost regional AI capabilities amid US and Chinese competition. Japan emphasised its ongoing commitment to supporting ASEAN’s technological development.

The partnership follows last October’s Japan-ASEAN summit, where Prime Minister Takaichi called for joint research in semiconductors and AI. The agreement aims to foster closer innovation ties and regional collaboration in strategic technology sectors.

The collaboration will engage public and private stakeholders to promote research, knowledge exchange, and capacity-building across ASEAN. Officials expect the partnership to speed AI adoption while maintaining regional regulations and ethical standards.

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (Regulation 2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on a system designed for business messaging, rather than one intended to serve as an open distribution platform for AI services.

SRB GDPR case withdrawn from EU court

A high-profile EU court case on pseudonymised data has ended without a final ruling. The dispute involved the Single Resolution Board and the European Data Protection Supervisor.

The case focused on whether pseudonymised opinions qualify as personal data under the GDPR. Judges were also asked to assess reidentification risks and notification duties.

After intervention by the Court of Justice of the European Union, the matter returned to the General Court. Both parties later withdrew the case, leaving no binding judgement.

Legal experts say the CJEU’s guidance continues to shape enforcement practice. Regulators are expected to reflect those principles in updated EU pseudonymisation guidelines.