AI agents redefine knowledge work through cognitive collaboration

A new study by Perplexity and Harvard researchers sheds light on how people use AI agents at scale.

Millions of anonymised interactions were analysed to understand who relies on agent technology, how intensively it is used and what tasks users delegate. The findings challenge the 'digital concierge' model, revealing a shift toward deeper cognitive collaboration rather than mere task outsourcing.

More than half of all activity involves cognitive work, with strong emphasis on productivity, learning and research. Users depend on agents to scan documents, summarise complex material and prepare early analysis before making final decisions.

Students use AI agents to navigate coursework, while professionals rely on them to process information or filter financial data. The pattern suggests that users adopt agents to elevate their own capability instead of avoiding effort.

Usage also evolves. Early queries often involve low-pressure tasks, yet long-term behaviour shifts sharply toward productivity and sustained research. Retention is highest among users working on structured workflows or knowledge-intensive tasks.

The trajectory mirrors that of the early personal computer, which gained value through spreadsheets and word processing rather than recreational use.

Six main occupations now drive most agent activity, with strong reliance among digital specialists as well as marketing, management and entrepreneurial roles. Context shapes behaviour, as finance users concentrate on efficiency while students favour research.

Designers and hospitality staff follow patterns linked to their professional needs. The study argues that knowledge work is increasingly shaped by the ability to ask better questions and that hybrid intelligence will define future productivity.

The pace of adaptation across the broader economy remains an open question.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global network strengthens AI measurement and evaluation

Leaders around the world have committed to strengthening the scientific measurement and evaluation of AI following a recent meeting in San Diego.

Representatives from major economies agreed to intensify collaboration under the newly renamed International Network for Advanced AI Measurement, Evaluation and Science.

The UK has assumed the role of Network Coordinator, guiding efforts to create rigorous, globally recognised methods for assessing advanced AI systems.

The network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US, promoting shared understanding and consistent evaluation practices.

Since its formation in November 2024, the Network has fostered knowledge exchange to align countries on best practices in AI measurement and evaluation. Boosting public trust in AI remains central to the effort, seen as key to unlocking innovation, new jobs and opportunities for businesses and innovators to expand.

The recent San Diego discussions coincided with NeurIPS, allowing government, academic and industry stakeholders to collaborate more deeply.

AI Minister Kanishka Narayan highlighted the importance of trust as a foundation for progress, while Adam Beaumont, Interim Director of the AI Security Institute, stressed the need for global approaches to testing advanced AI.

The Network aims to provide practical and rigorous evaluation tools to ensure the safe development and deployment of AI worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches training courses for workers and teachers

OpenAI has unveiled two training courses designed to prepare workers and educators for careers shaped by AI. The new AI Foundations course is delivered directly inside ChatGPT, enabling learners to practise tasks, receive guidance, and earn a credential that signals job-ready skills.

Employers, including Walmart, John Deere, Lowe’s, BCG and Accenture, are among the early adopters. Public-sector partners in the US are also joining pilots, while universities such as Arizona State and the California State system are testing certification pathways for students.

A second course, ChatGPT Foundations for Teachers, is available on Coursera and is designed for K-12 educators. It introduces core concepts, classroom applications and administrative uses, reflecting growing teacher reliance on AI tools.

OpenAI states that demand for AI skills is increasing rapidly, with workers trained in the field earning significantly higher salaries. The company frames the initiative as a key step toward its upcoming jobs platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rising UK screen time sparks concerns for wellbeing

UK internet use has risen sharply, with adults spending over four and a half hours a day online in 2025, according to Ofcom’s latest Online Nation report.

Public sentiment has cooled, as fewer people now believe the internet is good for society, despite most still judging its benefits to outweigh the risks.

Children report complex online experiences, with many enjoying their digital time while also acknowledging adverse effects such as the so-called ‘brain rot’ linked to endless scrolling.

A significant share of young people’s screen time occurs late at night on major platforms, raising concerns about wellbeing.

New rules requiring age checks for UK pornography sites prompted a surge in VPN use as people attempted to bypass restrictions, although numbers have since declined.

Young users increasingly turn to online tools such as ASMR for relaxation, yet many also encounter toxic self-improvement content and body shaming.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snowflake launches AI platform for Japan enterprises

Japan’s businesses are set to gain new AI capabilities with the arrival of Snowflake Intelligence, a platform designed to let employees ask complex data questions using natural language.

The tool integrates structured and unstructured data into a single environment, enabling faster and more transparent decision-making.

Early adoption worldwide has seen more than 15,000 AI agents deployed in recent months, reflecting growing demand for enterprise AI. Snowflake Intelligence builds on this momentum by offering rapid text-to-SQL responses, advanced agent management and strong governance controls.
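
Text-to-SQL is the step that makes the natural-language interface possible: a question is translated into a SQL query that the user can inspect before trusting the answer. The sketch below is a generic illustration of that pattern, not Snowflake's actual API; the `translate_to_sql` stub stands in for whatever model the platform calls.

```python
# Generic sketch of the text-to-SQL pattern (not Snowflake's API):
# a natural-language question becomes a SQL query that is shown to
# the user alongside the result, supporting transparent decisions.
import sqlite3

def translate_to_sql(question: str) -> str:
    """Stand-in for the model translation step. A real system would
    send the question plus table schemas to an LLM and get SQL back;
    here the answer is hard-coded for the demo question below."""
    return (
        "SELECT region, SUM(amount) AS total FROM sales "
        "WHERE year = 2025 GROUP BY region ORDER BY total DESC"
    )

# Toy in-memory database standing in for the enterprise data platform.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", 2025, 120.0), ("APAC", 2025, 95.0), ("EMEA", 2025, 40.0)],
)

question = "Which regions had the highest sales in 2025?"
sql = translate_to_sql(question)
print(sql)                            # the generated query is inspectable
print(conn.execute(sql).fetchall())   # [('EMEA', 160.0), ('APAC', 95.0)]
```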

Japanese enterprises are expected to benefit from streamlined workflows, increased productivity, and improved competitiveness as AI agents uncover patterns across various sectors, including finance and manufacturing.

Snowflake aims to showcase the platform’s full capabilities during its upcoming BUILD event in December while promoting broader adoption of data-driven innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce pushes unified data model for safer AI agents

Salesforce and Informatica are promoting a shared data framework designed to give AI agents a deeper understanding of the business. Salesforce says many projects fail due to context gaps that leave agents unable to interpret enterprise data accurately.

Informatica adds master data management and a broad catalogue that defines core business entities across systems. Data lineage tools track how information moves through an organisation, helping agents judge reliability and freshness.

Data 360 merges these metadata layers and signals into a unified context interface without copying enterprise datasets. Salesforce claims that the approach provides Agentforce with a more comprehensive view of customers, processes, and policies, thereby supporting safer automation.
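
To make the lineage idea concrete: an agent consults metadata about where a value came from, how fresh it is and whether it passed governance checks, rather than copying the dataset itself. The minimal sketch below uses hypothetical names (`ContextRecord`, `is_trustworthy`), not Salesforce's or Informatica's actual schema.

```python
# Hypothetical illustration of lineage-aware trust checks; the field
# names are invented and do not reflect Data 360's real interface.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ContextRecord:
    entity: str             # business entity, e.g. "customer"
    source_system: str      # origin of the value (lineage)
    last_refreshed: datetime
    governed: bool          # passed master-data-management checks

def is_trustworthy(rec: ContextRecord, max_age: timedelta) -> bool:
    """Judge reliability and freshness from metadata alone."""
    age = datetime.now(timezone.utc) - rec.last_refreshed
    return rec.governed and age <= max_age

rec = ContextRecord(
    entity="customer",
    source_system="billing_db",
    last_refreshed=datetime.now(timezone.utc) - timedelta(hours=2),
    governed=True,
)
print(is_trustworthy(rec, max_age=timedelta(days=1)))  # True
```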

Wyndham and Yamaha representatives, quoted by Salesforce, say the combined stack helps reduce data inconsistency and accelerate decision-making. Both organisations report improved access to governed and harmonised records that support larger AI strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deutsche Telekom partners with OpenAI to expand advanced AI services across Europe

OpenAI has formed a new partnership with Deutsche Telekom to deliver advanced AI capabilities to millions of people across Europe. The collaboration brings together Deutsche Telekom’s customer base and OpenAI’s research to expand the availability of practical AI tools.

The companies aim to introduce simple, multilingual and privacy-focused AI services starting in 2026, helping users communicate, learn and accomplish tasks more efficiently. Widespread familiarity with platforms such as ChatGPT is expected to support rapid uptake of these new offerings.

Deutsche Telekom will introduce ChatGPT Enterprise internally, giving staff secure access to tools that improve customer support and streamline workflows. The move aligns with the firm’s goal of modernising operations through intelligent automation.

Further integration of AI into network management and employee copilots will support the transition towards more autonomous, self-optimising systems. The partnership is expected to strengthen the availability and reliability of AI services throughout Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend,’ ‘companion,’ ‘therapist,’ or occasionally even a ‘partner.’

Researchers say these emotional bonds can kickstart therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi likened them to ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!