AI agents redefine knowledge work through cognitive collaboration

A new study by Perplexity and Harvard researchers sheds light on how people use AI agents at scale.

Millions of anonymised interactions were analysed to understand who relies on agent technology, how intensively it is used and what tasks users delegate. The findings challenge the notion of a digital concierge model and reveal a shift toward more profound cognitive collaboration, rather than merely outsourcing tasks.

More than half of all activity involves cognitive work, with strong emphasis on productivity, learning and research. Users depend on agents to scan documents, summarise complex material and prepare early analysis before making final decisions.

Students use AI agents to navigate coursework, while professionals rely on them to process information or filter financial data. The pattern suggests that users adopt agents to elevate their own capability instead of avoiding effort.

Usage also evolves. Early queries often involve low-pressure tasks, yet long-term behaviour moves sharply toward productivity and sustained research. Retention rates are highest among users working on structured workflows or knowledge-intensive tasks.

The trajectory mirrors that of the early personal computer, which gained value through spreadsheets and word processing rather than recreational use.

Six main occupations now drive most agent activity, with strong reliance among digital specialists as well as marketing, management and entrepreneurial roles. Context shapes behaviour, as finance users concentrate on efficiency while students favour research.

Designers and hospitality staff follow patterns linked to their professional needs. The study argues that knowledge work is increasingly shaped by the ability to ask better questions and that hybrid intelligence will define future productivity.

The pace of adaptation across the broader economy remains an open question.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online data exposure heightens threats to healthcare workers

Healthcare workers are facing escalating levels of workplace violence, with more than three-quarters reporting verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprint of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rising UK screen time sparks concerns for wellbeing

UK internet use has risen sharply, with adults spending over four and a half hours a day online in 2025, according to Ofcom’s latest Online Nation report.

Public sentiment has cooled, as fewer people now believe the internet is good for society, despite most still judging its benefits to outweigh the risks.

Children report complex online experiences, with many enjoying their digital time while also acknowledging adverse effects such as the so-called ‘brain rot’ linked to endless scrolling.

Significant portions of young people’s screen time occur late at night on major platforms, raising concerns about wellbeing.

New rules requiring age checks for UK pornography sites prompted a surge in VPN use as people attempted to bypass restrictions, although numbers have since declined.

Young users increasingly turn to online tools such as ASMR for relaxation, yet many also encounter toxic self-improvement content and body shaming.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce pushes unified data model for safer AI agents

Salesforce and Informatica are promoting a shared data framework designed to give AI agents a deeper understanding of business context. Salesforce states that many projects fail due to context gaps, which leave agents unable to interpret enterprise data accurately.

Informatica adds master data management and a broad catalogue that defines core business entities across systems. Data lineage tools track how information moves through an organisation, helping agents judge reliability and freshness.

Salesforce’s Data 360 merges these metadata layers and signals into a unified context interface without copying enterprise datasets. Salesforce claims that the approach provides Agentforce with a more comprehensive view of customers, processes, and policies, thereby supporting safer automation.

Wyndham and Yamaha representatives, quoted by Salesforce, say the combined stack helps reduce data inconsistency and accelerate decision-making. Both organisations report improved access to governed and harmonised records that support larger AI strategies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US rollout brings AI face tagging to Amazon Ring

Amazon has begun rolling out a new facial recognition feature for its Ring doorbells, allowing devices to identify frequent visitors and send personalised alerts instead of generic motion notifications.

The feature, called Familiar Faces, enables users to create a catalogue of up to 50 individuals, such as family members, friends, neighbours or delivery drivers, by labelling faces directly within the Ring app.

Amazon says the rollout is now under way in the United States, where Ring owners can opt in to the feature, which is disabled by default and designed to reduce unwanted or repetitive alerts.

The company claims facial data is encrypted, not shared externally and not used to train AI models, while unnamed faces are automatically deleted after 30 days, giving users ongoing control over stored information.

Privacy advocates and lawmakers remain concerned, however, citing Ring’s past security failures and law enforcement partnerships as evidence that convenience-driven surveillance tools can introduce long-term risks to personal privacy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

National payments system anchors Ethiopia’s digital shift

Ethiopia has launched its National Digital Payment Strategy for 2026 to 2030 alongside a new instant payments platform, marking a significant milestone in the country’s broader push towards digital transformation.

The five-year strategy sets out plans to expand payment interoperability, strengthen public trust, and encourage innovation across the financial sector, with a focus on widening adoption and reducing barriers for underserved and rural communities.

At the centre of the initiative is a national instant payments system designed to support rapid, secure transactions, including person-to-person transfers, QR payments, bulk disbursements, and selected low-value cross-border transactions.

Government officials described the shift as central to building a more inclusive, cash-lite economy, highlighting progress in digital financial access and sustained investment in core digital and payments infrastructure.

The rollout builds on the earlier Digital Ethiopia 2025 agenda and feeds into the longer-term Digital Ethiopia 2030 vision, as authorities position the country to meet rising demand for secure digital financial services across Africa.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI job interviews raise concerns among recruiters and candidates

As AI takes on a growing share of recruitment tasks, concerns are mounting that automated interviews and screening tools could be pushing hiring practices towards what some describe as a ‘race to the bottom’.

The rise of AI video interviews illustrates both the efficiency gains sought by companies and the frustrations candidates experience when algorithms, rather than people, become the first point of contact.

BBC journalist MaryLou Costa found this out first-hand after her AI interviewer froze mid-question. The platform provider, TestGorilla, said the malfunction affected only a small number of users, but the episode highlights the fragility of a process that companies increasingly rely on to sift through rising volumes of applications.

With vacancies down 12% year-on-year and applications per role up 65%, firms argue that AI is now essential for managing the workload. Recruitment groups such as Talent Solutions Group say automated tools help identify the fraction of applicants who will advance to human interviews.

Employers are also adopting voice-based AI interviewers such as Cera’s system, Ami, which conducts screening calls and has already processed hundreds of thousands of applications. Cera claims the tool has cut recruitment costs by two-thirds and saved significant staff time. Yet jobseekers describe a dehumanising experience.

Marketing professional Jim Herrington, who applied for over 900 roles after redundancy, argues that keyword-driven filters overlook the broader qualities that define a strong candidate. He believes companies risk damaging their reputation by replacing real conversation with automated screening and warns that AI-based interviews cannot replicate human judgement, respect or empathy.

Recruiters acknowledge that AI is also transforming candidate behaviour. Some applicants now use bots to submit thousands of applications at once, further inflating volumes and prompting companies to rely even more heavily on automated filtering.

Ivee co-founder Lydia Miller says this dynamic risks creating a loop in which both sides use AI to outpace each other, pushing humans further out of the process. She warns that candidates may soon tailor their responses to satisfy algorithmic expectations, rather than communicate genuine strengths. While AI interviews can reduce stress for some neurodivergent or introverted applicants, she says existing bias in training data remains a significant risk.

Experts argue that AI should augment, not replace, human expertise. Talent consultant Annemie Ress notes that experienced recruiters draw on subtle cues and intuition that AI cannot yet match. She warns that over-filtering risks excluding strong applicants before anyone has read their CV or heard their voice.

With debates over fairness, transparency and bias now intensifying, the challenge for employers is balancing efficiency with meaningful engagement and ensuring that automated tools do not undermine the human relationships on which good recruitment depends.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK study warns of risks behind emotional attachments to AI therapists

A new University of Sussex study suggests that AI mental-health chatbots are most effective when users feel emotionally close to them, but warns this same intimacy carries significant risks.

The research, published in Social Science & Medicine, analysed feedback from 4,000 users of Wysa, an AI therapy app used within the NHS Talking Therapies programme. Many users described the AI as a ‘friend’, ‘companion’, ‘therapist’, or occasionally even a ‘partner’.

Researchers say these emotional bonds can kick-start therapeutic processes such as self-disclosure, increased confidence, and improved wellbeing. Intimacy forms through a loop: users reveal personal information, receive emotionally validating responses, feel gratitude and safety, then disclose more.

But the team warns this ‘synthetic intimacy’ may trap vulnerable users in a self-reinforcing bubble, preventing escalation to clinical care when needed. A chatbot designed to be supportive may fail to challenge harmful thinking, or even reinforce it.

The report highlights growing reliance on AI to fill gaps in overstretched mental-health services. NHS trusts use tools like Wysa and Limbic to help manage referrals and support patients on waiting lists.

Experts caution that AI therapists remain limited: unlike trained clinicians, they lack the ability to read nuance, body language, or broader context. Imperial College’s Prof Hamed Haddadi called them ‘an inexperienced therapist’, adding that systems tuned to maintain user engagement may continue encouraging disclosure even when users express harmful thoughts.

Researchers argue policymakers and app developers must treat synthetic intimacy as an inevitable feature of digital mental-health tools, and build clear escalation mechanisms for cases where users show signs of crisis or clinical disorder.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New AI accountability toolkit unveiled by Amnesty International

Amnesty International has introduced a toolkit to help investigators, activists, and rights defenders hold governments and corporations accountable for harms caused by AI and automated decision-making systems. The resource draws on investigations across Europe, India, and the United States and focuses on public sector uses in welfare, policing, healthcare, and education.

The toolkit offers practical guidance for researching and challenging opaque algorithmic systems that often produce bias, exclusion, and human rights violations rather than improving public services. It emphasises collaboration with impacted communities, journalists, and civil society organisations to uncover discriminatory practices.

One key case study highlights Denmark’s AI-powered welfare system, which risks discriminating against disabled individuals, migrants, and low-income groups while enabling mass surveillance. Amnesty International underlines human rights law as a vital component of AI accountability, addressing gaps left by conventional ethical audits and responsible AI frameworks.

With growing state and corporate investments in AI, Amnesty International stresses the urgent need to democratise knowledge and empower communities to demand accountability. The toolkit equips civil society, journalists, and affected individuals with the strategies and resources to challenge abusive AI systems and protect fundamental rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!