Google revisits smart glasses market with AI-powered models

Google has announced plans to re-enter the smart-glasses market in 2026 with new AI-powered wearables, a decade after discontinuing its ill-fated Google Glass.

The company will introduce two models: one without a screen, offering AI assistance through voice and sensor interaction, and another with an integrated display. Both will run Google’s Gemini AI system.

The move comes as the sector experiences rapid growth. Meta has sold more than two million pairs of its Ray-Ban-branded AI glasses, helping drive a 250% year-on-year surge in smart-glasses sales in early 2025.

Analysts say Google must avoid repeating the missteps of Google Glass, which suffered from privacy concerns, awkward design, and limited functionality before being withdrawn in 2015.

Google’s renewed effort benefits from advances in AI and more mature consumer expectations, but challenges remain. Privacy, data protection, and real-world usability issues, core concerns during Google Glass’s first iteration, are expected to resurface as AI wearables become more capable and pervasive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global network strengthens AI measurement and evaluation

Leaders around the world have committed to strengthening the scientific measurement and evaluation of AI following a recent meeting in San Diego.

Representatives from major economies agreed to intensify collaboration under the newly renamed International Network for Advanced AI Measurement, Evaluation and Science.

The UK has assumed the role of Network Coordinator, guiding efforts to create rigorous, globally recognised methods for assessing advanced AI systems.

The network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US, and promotes shared understanding and consistent evaluation practices.

Since its formation in November 2024, the Network has fostered knowledge exchange to align countries on best practices for AI measurement and evaluation. Boosting public trust in AI remains central to the effort, as trust is seen as the key to unlocking innovation, new jobs, and opportunities for businesses to expand.

The recent San Diego discussions coincided with NeurIPS, allowing government, academic and industry stakeholders to collaborate more deeply.

UK AI Minister Kanishka Narayan highlighted the importance of trust as a foundation for progress, while Adam Beaumont, Interim Director of the AI Security Institute, stressed the need for global approaches to testing advanced AI.

The Network aims to provide practical and rigorous evaluation tools to ensure the safe development and deployment of AI worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches training courses for workers and teachers

OpenAI has unveiled two training courses designed to prepare workers and educators for careers shaped by AI. The new AI Foundations course is delivered directly inside ChatGPT, enabling learners to practise tasks, receive guidance, and earn a credential that signals job-ready skills.

Employers, including Walmart, John Deere, Lowe’s, BCG and Accenture, are among the early adopters. Public-sector partners in the US are also joining pilots, while universities such as Arizona State and the California State system are testing certification pathways for students.

A second course, ChatGPT Foundations for Teachers, is available on Coursera and is designed for K-12 educators. It introduces core concepts, classroom applications and administrative uses, reflecting growing teacher reliance on AI tools.

OpenAI states that demand for AI skills is increasing rapidly, with workers trained in the field earning significantly higher salaries. The company frames the initiative as a key step toward its upcoming jobs platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online data exposure heightens threats to healthcare workers

Healthcare workers are facing escalating levels of workplace violence, with more than three-quarters reporting verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprint of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rising UK screen time sparks concerns for wellbeing

UK internet use has risen sharply, with adults spending over four and a half hours a day online in 2025, according to Ofcom’s latest Online Nation report.

Public sentiment has cooled, as fewer people now believe the internet is good for society, despite most still judging its benefits to outweigh the risks.

Children report complex online experiences, with many enjoying their digital time while also acknowledging adverse effects such as the so-called ‘brain rot’ linked to endless scrolling.

Significant portions of young people’s screen time occur late at night on major platforms, raising concerns about well-being.

New rules requiring age checks for UK pornography sites prompted a surge in VPN use as people attempted to bypass restrictions, although numbers have since declined.

Young users increasingly turn to online tools such as ASMR for relaxation, yet many also encounter toxic self-improvement content and body shaming.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches Agentic AI Foundation with industry partners

US AI company OpenAI has co-founded the Agentic AI Foundation (AAIF) under the Linux Foundation alongside Anthropic, Block, Google, Microsoft, AWS, Bloomberg, and Cloudflare.

The foundation aims to provide neutral stewardship for open, interoperable agentic AI infrastructure as systems move from experimental prototypes into real-world applications.

The initiative includes the donation of OpenAI’s AGENTS.md, a lightweight Markdown file designed to provide agents with project-specific instructions and context.
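
For illustration, a minimal AGENTS.md might look like the sketch below; the project description, commands, and conventions are hypothetical rather than drawn from any real repository.

```markdown
# AGENTS.md

## Project overview
A REST API for order tracking, written in Python with FastAPI.

## Setup and testing
- Install dependencies with `pip install -r requirements.txt`.
- Run `pytest` and make sure the suite passes before proposing a change.

## Conventions
- Follow PEP 8 and format code with `black`.
- Never commit secrets; configuration lives in environment variables.
```

Because the file is plain Markdown placed at the repository root, any coding agent that understands the convention can pick up these instructions without extra tooling.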

Since its release in August 2025, AGENTS.md has been adopted by more than 60,000 open-source projects, ensuring consistent behaviour across diverse repositories and frameworks. Contributions from Anthropic and Block will include the Model Context Protocol and the goose project, respectively.
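
To give a sense of what such building blocks look like in practice, the sketch below exposes a single tool through an MCP server using the FastMCP helper from the official Python SDK; the server name, the check_stock tool, and its stubbed response are hypothetical.

```python
# Minimal MCP tool server sketch using the official Python SDK (package: mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")  # hypothetical server name


@mcp.tool()
def check_stock(sku: str) -> str:
    """Report the stock level for a product SKU (stubbed for illustration)."""
    return f"SKU {sku}: 42 units in stock"


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Any MCP-compatible client can then discover and call check_stock without bespoke integration code, which is the kind of portability the foundation is meant to safeguard.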

By establishing AAIF, the co-founders intend to prevent ecosystem fragmentation and foster safe, portable, and interoperable agentic AI systems.

The foundation provides a shared platform for development, governance, and extension of open standards, with oversight by the Linux Foundation to guarantee neutral, long-term stewardship.

OpenAI emphasises that the foundation will support developers, enterprises, and the wider open-source community, inviting contributors to help shape agentic AI standards.

The AAIF reflects a collaborative effort to advance agentic AI transparently and in the public interest while promoting innovation across tools and platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce pushes unified data model for safer AI agents

Salesforce and Informatica are promoting a shared data framework designed to give AI agents a deeper understanding of business context. Salesforce states that many projects fail due to context gaps, which leave agents unable to interpret enterprise data accurately.

Informatica adds master data management and a broad catalogue that defines core business entities across systems. Data lineage tools track how information moves through an organisation, helping agents judge reliability and freshness.

Salesforce’s Data 360 merges these metadata layers and signals into a unified context interface without copying enterprise datasets. The company claims the approach gives its Agentforce agents a more comprehensive view of customers, processes, and policies, thereby supporting safer automation.

Wyndham and Yamaha representatives, quoted by Salesforce, say the combined stack helps reduce data inconsistency and accelerate decision-making. Both organisations report improved access to governed and harmonised records that support larger AI strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US rollout brings AI face tagging to Amazon Ring

Amazon has begun rolling out a new facial recognition feature for its Ring doorbells, allowing devices to identify frequent visitors and send personalised alerts instead of generic motion notifications.

The feature, called Familiar Faces, enables users to create a catalogue of up to 50 individuals, such as family members, friends, neighbours or delivery drivers, by labelling faces directly within the Ring app.

Amazon says the rollout is now under way in the United States, where Ring owners can opt in to the feature, which is disabled by default and designed to reduce unwanted or repetitive alerts.

The company claims facial data is encrypted, not shared externally and not used to train AI models, while unnamed faces are automatically deleted after 30 days, giving users ongoing control over stored information.

Privacy advocates and lawmakers remain concerned, however, citing Ring’s past security failures and law enforcement partnerships as evidence that convenience-driven surveillance tools can introduce long-term risks to personal privacy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!