Digital records gain official status in Uzbekistan

Uzbekistan has granted full legal validity to online personal data stored on the my.gov.uz Unified Interactive Public Services Portal, placing it on equal footing with traditional documents.

The measure, in force from 1 November, supports the country’s digital transformation by simplifying how citizens interact with state bodies.

Personal information can now be accessed, shared and managed entirely through the portal instead of relying on printed certificates.

State institutions are no longer permitted to request paper versions of records that are already available online, which is expected to reduce queues and alleviate the administrative burden faced by the public.

Officials in Uzbekistan anticipate that centralising personal data on one platform will save time and resources for both citizens and government agencies. The reform aims to streamline public services, remove redundant steps and improve overall efficiency across state procedures.

Government bodies have encouraged citizens to use the portal’s functions more actively and follow official channels for updates on new features and improvements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce unveils eVerse for dependable enterprise AI

US cloud software company Salesforce and its AI Research division have unveiled eVerse, a new environment designed to train voice and text agents through synthetic data generation, stress testing and reinforcement learning.

The platform aims to resolve a growing reliability problem known as jagged intelligence, in which systems excel at complex reasoning yet falter during simple interactions.

The company views eVerse as a key requirement for creating an Agentic Enterprise, where human staff and digital agents work together smoothly and dependably.

eVerse supports continuous improvement by generating large volumes of simulated interactions, measuring performance and adjusting behaviour over time, rather than waiting for real-world failures.

The platform played a significant role in the development of Agentforce Voice, giving AI agents the capacity to cope with unpredictable calls involving noise, varied accents and weak connections.

Thousands of simulated conversations enabled teams to identify problems early and deliver stronger performance.

The technology is also being tested with UCSF Health, where clinical experts are working with Salesforce to refine agents that support billing services. Only a portion of healthcare queries can typically be handled automatically, as much of the knowledge remains undocumented.

eVerse enhances coverage by enabling agents to adapt to complex cases through reinforcement learning, thereby improving performance across both routine and sophisticated tasks.

Salesforce describes eVerse as a milestone in a broader effort to achieve Enterprise General Intelligence. The goal is a form of AI designed for dependable business use, instead of the more creative outputs that dominate consumer systems.

It also argues that trust and consistency will shape the next stage of enterprise adoption and that real-world complexity must be mirrored during development to guarantee reliable deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta pushes deeper into robotics with key hardware move

Meta is expanding its robotics ambitions by appointing Li-Chen Miller, previously head of its smart glasses portfolio, as the first product manager for Reality Labs’ robotics division. Her transfer marks a significant shift in Meta’s hardware priorities following the launch of its latest augmented reality devices.

The company is reportedly developing a humanoid assistant known internally as Metabot within the same organisation that oversees its AR and VR platforms. Former Cruise executive Marc Whitten leads the robotics group, supported by veteran engineer Ning Li and renowned MIT roboticist Sangbae Kim.

Miller’s move emphasises Meta’s aim to merge its AI expertise with physical robotics. The new team collaborates with the firm’s Superintelligence Lab, which is building a ‘world model’ capable of powering dextrous motion and real-time reasoning.

Analysts see the strategy as Meta’s attempt to future-proof its ecosystem and diversify Reality Labs, which continues to post heavy losses. The company’s growing investment in humanoid design could bring home-use robots closer to reality, blending social AI with the firm’s long-term vision for the metaverse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China targets deepfake livestreams of public figures

Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.

Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include removing over 8,700 items and dealing with more than 11,000 impersonation accounts.

Measures build on wider campaigns against AI misuse, including rules targeting deep synthesis and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.

Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New guidelines by Apple curb how apps send user data to external AI systems

Apple has updated its App Review Guidelines to require developers to disclose and obtain permission before sharing personal data with third-party AI systems. The company says the change enhances user control as AI features become more prevalent across apps.

The revision arrives ahead of Apple’s planned 2026 release of an AI-enhanced Siri, expected to take actions across apps and rely partly on Google’s Gemini technology. Apple is also moving to ensure external developers do not pass personal data to AI providers without explicit consent.

Rule 5.1.2(i) already limited the sharing of personal information without permission. The update adds explicit language naming third-party AI as a category requiring disclosure, reflecting growing scrutiny of how apps use machine learning and generative models.

The shift could affect developers who use external AI systems for features such as personalisation or content generation. Enforcement details remain unclear, as the term ‘AI’ encompasses a broad range of technologies beyond large language models.

Apple released several other guideline updates alongside the AI change, including support for its new Mini Apps Programme and amendments involving creator tools, loan products, and regulated services such as crypto exchanges.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn introduces AI-powered people search for faster networking

LinkedIn has launched an AI-powered people search feature, allowing users to find relevant professionals using plain language instead of traditional keywords and filters. The new tool surfaces experts based on experience and skills rather than exact job titles or company names.

The feature uses advanced AI and LinkedIn’s professional data to match users with the right people at the right time. It transforms connections into actionable opportunities, helping members discover mentors, collaborators, or industry specialists more efficiently.

Previously, searches required highly specific information, making it difficult to identify the right professional. The new conversational approach simplifies the process, making LinkedIn a more intuitive and powerful platform for networking, career planning, and business growth.

AI-powered people search is currently available to Premium subscribers in the US, with global expansion planned in the coming months to help professionals connect, collaborate, and find opportunities more quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Firefox expands AI features with full user choice

Mozilla has outlined its vision for integrating AI into Firefox in a way that protects user choice instead of limiting it. The company argues that AI should be built like the open web, allowing people and developers to use tools on their own terms rather than being pushed into a single ecosystem.

Recent features such as the AI sidebar chatbot and Shake to Summarise on iOS reflect that approach.

The next step is an ‘AI Window’, a controlled space inside Firefox that lets users chat with an AI assistant while browsing. The feature is entirely optional, offers full control, and can be switched off at any time. Mozilla has opened a waitlist so users can test the feature early and help shape its development.

Mozilla believes browsers must adapt as AI becomes a more common interface to the web. The company argues that remaining independent allows it to prioritise transparency, accountability and user agency instead of the closed models promoted by competitors.

The goal is an assistant that enhances browsing and guides users outward to the wider internet rather than trapping them in isolated conversations.

Community involvement remains central to Mozilla’s work. The organisation is encouraging developers and users to contribute ideas and support open-source projects as it works to ensure Firefox stays fast, secure and private while embracing helpful forms of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Purdue and Google collaborate to advance AI research and education

Purdue University and Google are expanding their partnership to integrate AI into education and research, preparing the next generation of leaders while advancing technological innovation.

The collaboration was highlighted at the AI Frontiers summit in Indianapolis on 13 November. The event brought together university, industry, and government leaders to explore AI’s impact across sectors such as health care, manufacturing, agriculture, and national security.

Leaders from both organisations emphasised the importance of placing AI tools in the hands of students, faculty, and staff. Purdue plans to introduce an AI competency requirement for incoming students in fall 2026, pending Board approval, to ensure all graduates gain practical experience with AI tools.

The partnership also builds on projects such as analysing data to improve road safety.

Purdue’s Institute for Physical Artificial Intelligence (IPAI), the nation’s first institute dedicated to AI in the physical world, plays a central role in the collaboration. The initiative focuses on physical AI, quantum science, semiconductors, and computing to equip students for AI-driven industries.

Google and Purdue emphasised responsible innovation and workforce development as critical goals of the partnership.

Speakers including representatives of Waymo and Google Public Sector, alongside US Senator Todd Young, discussed how AI technologies such as autonomous drones and smart medical devices are transforming key sectors.

The partnership demonstrates the potential of public-private collaboration to accelerate AI research and prepare students for the future of work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with numbers now exceeding human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic uncovers a major AI-led cyberattack

US AI firm Anthropic has revealed details of the first known cyber espionage operation largely executed by an autonomous AI system.

Suspicious activity detected in September 2025 led to an investigation that uncovered an attack framework, which used Claude Code as an automated agent to infiltrate about thirty high-value organisations across technology, finance, chemicals and government.

The attackers relied on recent advances in model intelligence, agency and tool access.

By breaking tasks into small prompts and presenting Claude as a defensive security assistant instead of an offensive tool, they bypassed safeguards and pushed the model to analyse systems, identify weaknesses, write exploit code and harvest credentials.

The AI completed most of the work with only occasional human direction, operating at a scale and speed that human hackers would struggle to match.

Anthropic responded by banning accounts, informing affected entities and working with authorities as evidence was gathered. The company argues that the case shows how easily sophisticated operations can now be carried out by less-resourced actors who use agentic AI instead of traditional human teams.

Errors such as hallucinated credentials remain a limitation, yet the attack marks a clear escalation in capability and ambition.

The firm maintains that the same model abilities exploited by the attackers are needed for cyber defence. Greater automation in threat detection, vulnerability analysis and incident response is seen as vital.

Safeguards, stronger monitoring and wider information sharing are presented as essential steps for an environment where adversaries are increasingly empowered by autonomous AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!