Google revisits smart glasses market with AI-powered models

Google has announced plans to re-enter the smart-glasses market in 2026 with new AI-powered wearables, a decade after discontinuing its ill-fated Google Glass.

The company will introduce two models: one without a screen that provides AI assistance through voice and sensor interaction, and another with an integrated display. Both models will be powered by Google’s Gemini AI.

The move comes as the sector experiences rapid growth. Meta has sold more than two million pairs of its Ray-Ban Meta AI glasses, helping drive a 250% year-on-year surge in smart-glasses sales in early 2025.

Analysts say Google must avoid repeating the missteps of Google Glass, which suffered from privacy concerns, awkward design, and limited functionality before being withdrawn in 2015.

Google’s renewed effort benefits from advances in AI and more mature consumer expectations, but challenges remain. Privacy, data protection and real-world usability, which were core concerns during Google Glass’s first iteration, are expected to resurface as AI wearables become more capable and pervasive.

Global network strengthens AI measurement and evaluation

Leaders around the world have committed to strengthening the scientific measurement and evaluation of AI following a recent meeting in San Diego.

Representatives from major economies agreed to intensify collaboration under the newly renamed International Network for Advanced AI Measurement, Evaluation and Science.

The UK has assumed the role of Network Coordinator, guiding efforts to create rigorous, globally recognised methods for assessing advanced AI systems.

The network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US, and promotes shared understanding and consistent evaluation practices.

Since its formation in November 2024, the Network has fostered knowledge exchange to align countries on best practices for AI measurement and evaluation. Boosting public trust in AI remains central, as greater trust is expected to unlock innovation, new jobs and opportunities for businesses and innovators to expand.

The recent San Diego discussions coincided with NeurIPS, allowing government, academic and industry stakeholders to collaborate more deeply.

AI Minister Kanishka Narayan highlighted the importance of trust as a foundation for progress, while Adam Beaumont, Interim Director of the AI Security Institute, stressed the need for global approaches to testing advanced AI.

The Network aims to provide practical and rigorous evaluation tools to ensure the safe development and deployment of AI worldwide.

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

OpenAI launches training courses for workers and teachers

OpenAI has unveiled two training courses designed to prepare workers and educators for careers shaped by AI. The new AI Foundations course is delivered directly inside ChatGPT, enabling learners to practise tasks, receive guidance, and earn a credential that signals job-ready skills.

Employers, including Walmart, John Deere, Lowe’s, BCG and Accenture, are among the early adopters. Public-sector partners in the US are also joining pilots, while universities such as Arizona State and the California State system are testing certification pathways for students.

A second course, ChatGPT Foundations for Teachers, is available on Coursera and is designed for K-12 educators. It introduces core concepts, classroom applications and administrative uses, reflecting growing teacher reliance on AI tools.

OpenAI states that demand for AI skills is increasing rapidly, with workers trained in the field earning significantly higher salaries. The company frames the initiative as a key step toward its upcoming jobs platform.

US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW common-access card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.

Workplace study highlights Gemini’s impact on creativity

Google’s new research on the impact of Gemini AI in Workspace reveals that the technology is reshaping how teams collaborate, with surveyed workers reporting weekly time savings and increasing confidence in AI-supported tasks.

The findings, based on input from more than 1,200 leaders and employees across six countries, suggest generative AI is becoming integral to routine workflows.

Many users report that Gemini helps them accomplish more in less time, generate ideas faster, and redirect their attention from repetitive tasks to higher-value work.

The report highlights wider organisational benefits. Leaders see AI as a driver of innovation, but a gap remains between executive ambitions and employee readiness. Google says structured training and phased rollouts are key to building trust and improving adoption.

New and updated Workspace features aim to address these needs. Recent Gemini releases offer improved task automation, enhanced email drafting, and advanced storytelling tools, while no-code agent builders support more complex workflow design without specialist skills.

The research points to a broader transformation in digital productivity. Companies using Gemini report fewer hours spent on administrative work, higher engagement, and stronger collaboration as AI becomes a functional layer that supports rather than replaces human judgement.

OpenAI launches Agentic AI Foundation with industry partners

The US AI company OpenAI has co-founded the Agentic AI Foundation (AAIF) under the Linux Foundation alongside Anthropic, Block, Google, Microsoft, AWS, Bloomberg, and Cloudflare.

The foundation aims to provide neutral stewardship for open, interoperable agentic AI infrastructure as these systems move from experimental prototypes into real-world applications.

The initiative includes the donation of OpenAI’s AGENTS.md, a lightweight Markdown file designed to provide agents with project-specific instructions and context.
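
To illustrate, a minimal AGENTS.md might look like the sketch below; the project, commands and conventions shown are hypothetical, included only to indicate the kind of project-specific context such a file can carry.

```markdown
# AGENTS.md

Guidance for coding agents working in this repository
(illustrative sketch; the project and commands below are hypothetical).

## Setup and build
- Install dependencies with `npm install`.
- Build with `npm run build`; do not edit files under `dist/` by hand.

## Testing
- Run `npm test` before proposing any change.
- Add or update unit tests alongside the code you modify.

## Conventions
- Use 2-space indentation and named exports.
- Never commit secrets; configuration is read from environment variables.
```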

Since its release in August 2025, AGENTS.md has been adopted by more than 60,000 open-source projects, ensuring consistent behaviour across diverse repositories and frameworks. Contributions from Anthropic and Block will include the Model Context Protocol and the goose project, respectively.

By establishing AAIF, the co-founders intend to prevent ecosystem fragmentation and foster safe, portable, and interoperable agentic AI systems.

The foundation provides a shared platform for development, governance, and extension of open standards, with oversight by the Linux Foundation to guarantee neutral, long-term stewardship.

OpenAI emphasises that the foundation will support developers, enterprises, and the wider open-source community, inviting contributors to help shape agentic AI standards.

The AAIF reflects a collaborative effort to advance agentic AI transparently and in the public interest while promoting innovation across tools and platforms.

Snowflake launches AI platform for Japan enterprises

Japan’s businesses are set to gain new AI capabilities with the arrival of Snowflake Intelligence, a platform designed to let employees ask complex data questions using natural language.

The tool integrates structured and unstructured data into a single environment, enabling faster and more transparent decision-making.

Early adoption worldwide has seen more than 15,000 AI agents deployed in recent months, reflecting growing demand for enterprise AI. Snowflake Intelligence builds on this momentum by offering rapid text-to-SQL responses, advanced agent management and strong governance controls.

Japanese enterprises are expected to benefit from streamlined workflows, increased productivity, and improved competitiveness as AI agents uncover patterns across various sectors, including finance and manufacturing.

Snowflake aims to showcase the platform’s full capabilities during its upcoming BUILD event in December while promoting broader adoption of data-driven innovation.

Salesforce pushes unified data model for safer AI agents

Salesforce and Informatica are promoting a shared data framework designed to give AI agents a deeper understanding of business context. Salesforce says many agent projects fail because of context gaps that leave agents unable to interpret enterprise data accurately.

Informatica adds master data management and a broad catalogue that defines core business entities across systems. Data lineage tools track how information moves through an organisation, helping agents judge reliability and freshness.

Data 360 merges these metadata layers and signals into a unified context interface without copying enterprise datasets. Salesforce claims that the approach provides Agentforce with a more comprehensive view of customers, processes, and policies, thereby supporting safer automation.

Wyndham and Yamaha representatives, quoted by Salesforce, say the combined stack helps reduce data inconsistency and accelerate decision-making. Both organisations report improved access to governed and harmonised records that support larger AI strategies.

Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.
