AI agents redefine knowledge work through cognitive collaboration

A new study by Perplexity and Harvard researchers sheds light on how people use AI agents at scale.

Millions of anonymised interactions were analysed to understand who relies on agent technology, how intensively it is used and what tasks users delegate. The findings challenge the notion of AI agents as a digital concierge and instead point to deeper cognitive collaboration, rather than the mere outsourcing of tasks.

More than half of all activity involves cognitive work, with strong emphasis on productivity, learning and research. Users depend on agents to scan documents, summarise complex material and prepare early analysis before making final decisions.

Students use AI agents to navigate coursework, while professionals rely on them to process information or filter financial data. The pattern suggests that users adopt agents to elevate their own capability instead of avoiding effort.

Usage also evolves. Early queries often involve low-pressure tasks, yet long-term behaviour moves sharply toward productivity and sustained research. Retention rates are highest among users working on structured workflows or knowledge-intensive tasks.

The trajectory mirrors that of the early personal computer, which gained value through spreadsheets and word processing rather than recreational use.

Six main occupations now drive most agent activity, with strong reliance among digital specialists as well as marketing, management and entrepreneurial roles. Context shapes behaviour, as finance users concentrate on efficiency while students favour research.

Designers and hospitality staff follow patterns linked to their professional needs. The study argues that knowledge work is increasingly shaped by the ability to ask better questions and that hybrid intelligence will define future productivity.

The pace of adaptation across the broader economy remains an open question.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global network strengthens AI measurement and evaluation

Leaders around the world have committed to strengthening the scientific measurement and evaluation of AI following a recent meeting in San Diego.

Representatives from major economies agreed to intensify collaboration under the newly renamed International Network for Advanced AI Measurement, Evaluation and Science.

The UK has assumed the role of Network Coordinator, guiding efforts to create rigorous, globally recognised methods for assessing advanced AI systems.

The Network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US, promoting shared understanding and consistent evaluation practices.

Since its formation in November 2024, the Network has fostered knowledge exchange to align countries on best practices in AI measurement and evaluation. Boosting public trust in AI remains central, seen as key to unlocking innovation, new jobs and opportunities for businesses and innovators to expand.

The recent San Diego discussions coincided with NeurIPS, allowing government, academic and industry stakeholders to collaborate more deeply.

AI Minister Kanishka Narayan highlighted the importance of trust as a foundation for progress, while Adam Beaumont, Interim Director of the AI Security Institute, stressed the need for global approaches to testing advanced AI.

The Network aims to provide practical and rigorous evaluation tools to ensure the safe development and deployment of AI worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online data exposure heightens threats to healthcare workers

Healthcare workers are facing escalating levels of workplace violence, with more than three-quarters reporting verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprint of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW Common Access Card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches Agentic AI Foundation with industry partners

The US AI company, OpenAI, has co-founded the Agentic AI Foundation (AAIF) under the Linux Foundation alongside Anthropic, Block, Google, Microsoft, AWS, Bloomberg, and Cloudflare.

The foundation aims to provide neutral stewardship for open, interoperable agentic AI infrastructure as systems move from experimental prototypes into real-world applications.

The initiative includes the donation of OpenAI’s AGENTS.md, a lightweight Markdown file designed to provide agents with project-specific instructions and context.
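
For illustration, a minimal sketch of what an AGENTS.md file might contain is shown below. The headings, commands and rules are hypothetical examples, not content from OpenAI's donation; the format deliberately leaves the structure of the file up to each project.

```markdown
# AGENTS.md — hypothetical example for a web project

## Setup
- Install dependencies with `npm install` (assumed package manager for this example).
- Start the development server with `npm run dev`.

## Testing
- Run `npm test` before proposing changes; all tests must pass.

## Conventions
- Use TypeScript for new modules and keep functions small and focused.
- Do not modify files under `vendor/`; they are third-party code.
```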

Since its release in August 2025, AGENTS.md has been adopted by more than 60,000 open-source projects, ensuring consistent behaviour across diverse repositories and frameworks. Contributions from Anthropic and Block will include the Model Context Protocol and the goose project, respectively.

By establishing AAIF, the co-founders intend to prevent ecosystem fragmentation and foster safe, portable, and interoperable agentic AI systems.

The foundation provides a shared platform for development, governance, and extension of open standards, with oversight by the Linux Foundation to guarantee neutral, long-term stewardship.

OpenAI emphasises that the foundation will support developers, enterprises, and the wider open-source community, inviting contributors to help shape agentic AI standards.

The AAIF reflects a collaborative effort to advance agentic AI transparently and in the public interest while promoting innovation across tools and platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EIB survey shows EU firms lead in investment, innovation and green transition

European firms continue to invest actively despite a volatile global environment, demonstrating resilience, innovation, and commitment to sustainability, according to the European Investment Bank (EIB) Group’s 2025 Investment Survey.

Across the EU, companies are expanding capacity, adopting advanced digital technologies, and pursuing green investment to strengthen competitiveness.

Spanish firms, for example, are optimistic about their sector, prioritising capacity growth, using generative AI, and investing in energy efficiency and climate risk insurance.

Digital transformation is accelerating across the continent. Austrian and Finnish firms stand out for their extensive adoption of generative AI and multiple advanced digital tools, while Belgian companies excel in integrating digital technologies alongside green initiatives.

Czech firms devote a larger share of investment to capacity expansion and innovation, with high engagement in international trade and strategic use of digital solutions. These trends are highlighted in country-level EIB reports and reflect broader European patterns.

The green transition remains central to corporate strategies. Many firms actively reduce emissions, improve energy efficiency, and view sustainability as a business opportunity rather than a regulatory burden.

In Belgium, investments in energy efficiency and waste reduction are among the highest in the EU, while nearly all Finnish companies report taking measures to reduce greenhouse gases.

Across Europe, firms increasingly combine environmental action with innovation to maintain competitiveness and resilience.

Challenges persist, including skills shortages, uncertainty, high energy costs, and regulatory complexity. Despite these obstacles, European businesses continue to innovate, expand, and embrace international trade.

EIB surveys show that firms are leveraging technology and green investments not only to navigate economic uncertainty but also to position themselves for long-term growth and strategic advantage in a changing global landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft commits $17.5 billion to AI in India

The US tech giant, Microsoft, has announced its largest investment in Asia, committing US$17.5 billion to India over four years to expand cloud and AI infrastructure, workforce skilling, and operations nationwide.

The announcement follows a US$3 billion investment earlier in 2025 and aims to support India’s ambition to become a global AI leader.

The investment focuses on three pillars: hyperscale infrastructure, sovereign-ready solutions, and workforce development. A new hyperscale data centre in Hyderabad, set to go live by mid-2026, will become Microsoft’s largest in India.

Expansion of existing data centres in Chennai, Hyderabad and Pune will improve resilience and low-latency performance for enterprises, startups, and public sector organisations.

Microsoft will integrate AI into national platforms, including e-Shram and the National Career Service, benefiting over 310 million informal workers. AI-enabled features include multilingual access, predictive analytics, automated résumé creation, and personalised pathways toward formal employment.

Skilling initiatives will be doubled to reach 20 million Indians by 2030, building an AI-ready workforce that can shape the country’s digital future.

Sovereign Public and Private Cloud solutions will provide secure, compliant environments for Indian organisations, supporting both connected and disconnected operations.

Microsoft 365 Copilot will process data entirely within India by the end of 2025, enhancing governance, compliance, and performance across regulated sectors. These initiatives aim to position India as a global AI hub powered by scale, skilling, and digital sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia enforces under-16 social media ban as new rules take effect

Australia has finally introduced the world’s first nationwide prohibition on social media use for under-16s, forcing platforms to delete millions of accounts and prevent new registrations.

Instagram, TikTok, Facebook, YouTube, Snapchat, Reddit, Twitch, Kick and Threads are removing accounts held by younger users. At the same time, Bluesky has agreed to apply the same standard despite not being compelled to do so. The only major platform yet to confirm compliance is X.

The measure follows weeks of age-assurance checks, which have not been flawless, with cases of younger teenagers passing facial-verification tests designed to keep them offline.

Families are facing sharply different realities. Some teenagers feel cut off from friends who managed to bypass age checks, while others suddenly gain a structure that helps reduce unhealthy screen habits.

A small but vocal group of parents admit they are teaching their children how to use VPNs and alternative methods instead of accepting the ban, arguing that teenagers risk social isolation when friends remain active.

Supporters of the legislation counter that Australia imposes clear age limits in other areas of public life for reasons of well-being and community standards, and the same logic should shape online environments.

Regulators are preparing to monitor the transition closely.

The eSafety Commissioner will demand detailed reports from every platform covered by the law, including the volume of accounts removed, evidence of efforts to stop circumvention and assessments of whether reporting and appeals systems are functioning as intended.

Companies that fail to take reasonable steps may face significant fines. A government-backed academic advisory group will study impacts on behaviour, well-being, learning and unintended shifts towards more dangerous corners of the internet.

Global attention is growing as several countries weigh similar approaches. Denmark, Norway and Malaysia have already indicated they may replicate Australia’s framework, and the EU has endorsed the principle in a recent resolution.

Interest from abroad signals a broader debate about how societies should balance safety and autonomy for young people in digital spaces, instead of relying solely on platforms to set their own rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

G7 ministers meet in Montreal to boost industrial cooperation

Canada has opened the G7 Industry, Digital and Technology Ministers’ Meeting in Montreal, bringing together ministers, industry leaders, and international delegates to address shared industrial and technological challenges.

The meeting is being led by Industry Minister Melanie Joly and AI and Digital Innovation Minister Evan Solomon, with discussions centred on strengthening supply chains, accelerating innovation, and boosting industrial competitiveness across advanced economies.

Talks will focus on building resilient economies, expanding trusted digital infrastructure, and supporting growth while aligning industrial policy with economic security and national security priorities shared among G7 members.

The agenda builds on outcomes from the recent G7 leaders’ summit in Kananaskis, Canada, including commitments on quantum technologies, critical minerals cooperation, and a shared statement on AI and prosperity.

Canadian officials said closer coordination among trusted partners is essential amid global uncertainty and rapid technological change, positioning innovation-driven industry as a long-term foundation for economic growth, productivity, and shared prosperity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google faces renewed EU scrutiny over AI competition

The European Commission has opened a formal antitrust investigation into whether AI features embedded in online search are being used to unfairly squeeze competitors in newly emerging digital markets shaped by generative AI.

The probe targets Alphabet-owned Google, focusing on allegations that the company imposes restrictive conditions on publishers and content creators while giving its own AI-driven services preferential placement over rival technologies and alternative search offerings.

Regulators are examining products such as AI Overviews and AI Mode, assessing how publisher content is reused within AI-generated summaries and whether media organisations are compensated in a clear, fair, and transparent manner.

EU competition chief Teresa Ribera said the European Commission’s action reflects a broader effort to protect online media and preserve competitive balance as artificial intelligence increasingly shapes how information is produced, discovered, and monetised.

The case adds to years of scrutiny by the European Commission over Google’s search and advertising businesses, even as the company proposes changes to its ad tech operations and continues to challenge earlier antitrust rulings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!