AI agent autonomy rises as users gain trust in Anthropic’s Claude Code

A new study from Anthropic offers an early picture of how people allow AI agents to work independently in real conditions.

By examining millions of interactions across its public API and its coding agent Claude Code, the company explored how long agents operate without supervision and how users change their behaviour as they gain experience.

The analysis shows a sharp rise in the longest autonomous sessions, with top users permitting the agent to work for more than forty minutes instead of cutting tasks short.

Experienced users appear more comfortable letting the AI agent proceed on its own, shifting towards auto-approve instead of checking each action.

At the same time, these users interrupt more often when something seems unusual, which suggests that trust develops alongside a more refined sense of when oversight is required.

The agent also shows a form of caution of its own: as tasks become more complex, it pauses to ask for clarification more often than humans interrupt it.
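
As a rough illustration of how autonomous session length might be measured, the sketch below scans a hypothetical log of agent and human events and reports the longest stretch of agent activity not broken by a human approval or interruption. The event format, field names and sample timestamps are assumptions made for illustration; they are not Anthropic's actual data or methodology.

```python
from datetime import datetime

# Hypothetical event log: (timestamp, actor) pairs, where actor is
# "agent" for autonomous actions and "human" for approvals or interruptions.
events = [
    ("2025-11-03T09:00:00", "human"),
    ("2025-11-03T09:00:05", "agent"),
    ("2025-11-03T09:20:00", "agent"),
    ("2025-11-03T09:45:10", "agent"),
    ("2025-11-03T09:47:00", "human"),
]

def longest_autonomous_run(events):
    """Return the longest span (in minutes) of consecutive agent events
    not broken by any human approval or interruption."""
    longest = 0.0
    run_start = None
    for ts, actor in events:
        t = datetime.fromisoformat(ts)
        if actor == "agent":
            if run_start is None:
                run_start = t
            longest = max(longest, (t - run_start).total_seconds() / 60)
        else:  # a human action ends the autonomous run
            run_start = None
    return longest

print(f"Longest autonomous run: {longest_autonomous_run(events):.1f} minutes")
# Prints roughly 45.1 minutes for the sample log above.
```

On a log like this, the longest uninterrupted run growing from a few minutes to more than forty would correspond to the trend the study describes.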

The research identifies a broad spread of domains that rely on agents, with software engineering dominating usage but early signs of adoption emerging in healthcare, cybersecurity and finance.

Most actions remain low-risk and reversible, supported by safeguards such as restricted permissions or human involvement instead of fully automated execution. Only a tiny fraction of actions reveal irreversible consequences such as sending messages to external recipients.

Anthropic notes that real-world autonomy remains far below the potential suggested by external capability evaluations, including those by METR.

The company argues that safer deployment will depend on stronger post-deployment monitoring systems and better design for human-AI cooperation so that autonomy is managed jointly rather than granted blindly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI enables live translation and sign language for Modi summit

Prime Minister Narendra Modi delivered a speech at the India AI Impact Summit 2026, showcasing the nation’s progress in AI. The address emphasised technological innovation and the role of AI in driving national development.

The address was dubbed live into 11 languages: Assamese, Bangla, English, Gujarati, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil and Telugu. Audiences across India could follow the speech without language barriers.

An AI-enabled sign language interpreter appeared on a large screen behind the prime minister in the auditorium at Bharat Mandapam. The live interpretation made the event fully accessible to attendees with hearing impairments.

Videos of the multilingual and sign-language versions were widely shared on the prime minister’s social media accounts. The initiative highlighted India’s growing use of AI tools to promote inclusivity and communication innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India AI Impact Summit faces controversy over robotic dog claim

An Indian university has vacated its stall at the AI Impact Summit in New Delhi after a staff member presented a commercially available Chinese robotic dog as a university-developed innovation. The episode has sparked criticism and drawn attention to India’s AI ambitions.

Footage showed a professor introducing the robot, named Orion, as developed by the Centre of Excellence at Galgotias University. Social media users later identified the device as the Unitree Go2, produced by Unitree Robotics in China and widely used for research and education.

The Indian IT minister initially shared the video before deleting the post. The university later clarified that the robot was not its own creation and said no official communication had confirmed its removal from the event. However, local reports indicated that the stall had been vacated.

The incident occurred during the AI Impact Summit at Bharat Mandapam, billed as a major AI gathering in the Global South. The event has also faced reports of overcrowding and logistical issues, even as more than $100 billion in AI-related investments were announced.

Opposition politicians in India criticised the government over the episode, arguing it undermined India’s credibility in the global AI race. Despite the controversy, the summit continues with high-profile participation from global technology leaders and heads of government.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India hosts AI Impact Summit as UN chief urges shared AI rules

UN Secretary-General António Guterres told the India AI Impact Summit 2026 that the future of AI must not be determined by a small group of nations or controlled by powerful private actors. He praised India’s leadership in hosting what he described as the first AI summit in the Global South.

Guterres said AI is transforming economies, societies, and governance at unprecedented speed. Inclusive and globally representative governance frameworks are essential to ensure equitable access and responsible deployment, he added.

‘The future of AI cannot be decided by a handful of countries or left to the whims of a few billionaires,’ he said, urging multilateral cooperation. Real impact, he added, means technology that improves lives and protects the planet.

United Nations officials say AI could help accelerate progress on nearly 80 per cent of the Sustainable Development Goals. Potential applications include reducing inequalities, strengthening public services, and enhancing climate action.

The UN has committed to a proactive, human rights-based approach to AI adoption within its own system. Agencies are deploying AI tools to address bias in data models, improve analytics, support innovation, and safeguard ethical standards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic seeks deeper AI cooperation with India

The chief executive of Anthropic, Dario Amodei, has said India can play a central role in guiding global responses to the security and economic risks linked to AI.

Speaking at the India AI Impact Summit in New Delhi, he argued that the world’s largest democracy is well placed to become a partner and leader in shaping the responsible development of advanced systems.

Amodei explained that Anthropic hopes to work with India on the testing and evaluation of models for safety and security. He stressed growing concern over autonomous behaviours that may emerge in advanced systems and noted the possibility of misuse by individuals or governments.

He pointed to the work of international and national AI safety institutes as a foundation for joint efforts. The economic effect of AI will be significant, he added, and India and the wider Global South could benefit if policymakers prepare early.

Through its Economic Futures programme and Economic Index, Anthropic studies how AI reshapes jobs and labour markets.

He said the company intends to expand information sharing with Indian authorities and bring economists, labour groups, and officials into regular discussions to guide evidence-based policy instead of relying on assumptions.

Amodei said AI is set to increase economic output and that India is positioned to influence emerging global frameworks. He signalled a strong interest in long-term cooperation that supports safety, security, and sustainable growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU turns to AI tools to strengthen defences against disinformation

Institutions, researchers, and media organisations in the EU are intensifying efforts to use AI to counter disinformation, even as concerns grow about the wider impact on media freedom and public trust.

Confidence in journalism has fallen sharply across the EU, a trend made more severe by the rapid deployment of AI systems that reshape how information circulates online.

Brussels is attempting to respond with a mix of regulation and strategic investment. The EU’s AI Act is entering its implementation phase, supported by the AI Continent Action Plan and the Apply AI Strategy, both introduced in 2025 to improve competitiveness while protecting rights.

Yet manipulation campaigns continue to spread false narratives across platforms in multiple languages, placing pressure on journalists, fact-checkers and regulators to act with greater speed and precision.

In this environment, AI4TRUST has emerged as a prominent Horizon Europe initiative. The consortium is developing an integrated platform that detects disinformation signals, verifies content, and maps information flows for professionals who need real-time insight.

Partners stress the need for tools that strengthen human judgment instead of replacing it, particularly as synthetic media accelerates and shared realities become more fragile.

Experts speaking in Brussels warned that traditional fact-checking cannot absorb the scale of modern manipulation. They highlighted the geopolitical risks created by automated messaging and deepfakes, and argued for transparent, accountable systems tailored to user needs.

European officials emphasised that multiple tools will be required, supported by collaboration across institutions and sustained regulatory frameworks that defend democratic resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital procurement strengthens compliance and prepares governments for AI oversight

AI is reshaping the expectations placed on organisations, yet many local governments in the US continue to rely on procurement systems designed for a paper-first era.

Sealed envelopes, manual logging and physical storage remain standard practice, even though these steps slow essential services and increase operational pressure on staff and vendors.

The persistence of paper is linked to long-standing compliance requirements, which are vital for public accountability. Over time, however, processes intended to safeguard fairness have created significant inefficiencies.

Smaller businesses frequently struggle with printing, delivery, and rigid submission windows, and the administrative burden on procurement teams expands as records accumulate.

The author’s experience leading a modernisation effort in Somerville, Massachusetts, showed how deeply embedded such practices had become.

Gradual adoption of digital submission reduced logistical barriers while strengthening compliance. Electronic bids could be time-stamped, access monitored, and records centrally managed, allowing staff to focus on evaluation rather than handling binders and storage boxes.

Vendor participation increased once geographical and physical constraints were removed. The shift also improved resilience, as municipalities that had already embraced digital procurement were better equipped to maintain continuity during pandemic disruptions.

Electronic records now provide a basis for responsible use of AI. Digital documents can be analysed for anomalies, metadata inconsistencies, or signs of manipulation that are difficult to detect in paper files.
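
As a minimal sketch of one such check, the example below compares each electronic bid file’s last-modified timestamp against a recorded submission deadline and flags anything changed afterwards. The folder path, deadline and output format are illustrative assumptions rather than a description of any specific procurement platform.

```python
from datetime import datetime, timezone
from pathlib import Path

# Illustrative assumptions: bids are stored as files in one folder, and the
# submission deadline is known from the solicitation record.
BID_FOLDER = Path("bids/rfp-2025-014")                        # hypothetical path
DEADLINE = datetime(2025, 3, 1, 17, 0, tzinfo=timezone.utc)   # hypothetical deadline

def flag_late_modifications(folder: Path, deadline: datetime) -> list[str]:
    """Return bid files whose last-modified time falls after the deadline,
    a simple anomaly that is easy to check on digital records but invisible
    in a box of paper submissions."""
    flagged = []
    for path in sorted(folder.glob("*")):
        mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if mtime > deadline:
            flagged.append(f"{path.name}: modified {mtime.isoformat()}")
    return flagged

if __name__ == "__main__":
    for line in flag_late_modifications(BID_FOLDER, DEADLINE):
        print("REVIEW:", line)
```

A production platform would rely on its own audit metadata rather than filesystem timestamps, but the principle is the same: time-stamped digital records make this class of check routine in a way paper never could.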

Rather than replacing human judgment, such tools support stronger oversight and more transparent public administration. Modernising procurement aligns government operations with present-day realities and prepares them for future accountability and technological change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

India unveils MANAV Vision as new global pathway for ethical AI

Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-centred direction for AI.

He described the framework as rooted in moral guidance, transparent oversight, national control of data, inclusive access and lawful verification. He argued that the approach is intended to guide global AI governance for the benefit of humanity.

The Prime Minister of India warned that rapid technological change requires stronger safeguards and drew attention to the need to protect children. He also said societies are entering a period where people and intelligent systems co-create and evolve together instead of functioning in separate spheres.

He pointed to India’s confidence in its own talent and its clarity of policy as signs that the country’s AI future is already taking shape.

Modi announced that three domestic companies introduced new AI models and applications during the summit, saying the launches reflect the energy and capability of India’s young innovators.

He invited technology leaders from around the world to collaborate by designing and developing in India instead of limiting innovation to established hubs elsewhere.

The summit brought together policymakers, academics, technologists and civil society representatives to encourage cooperation on the societal impact of artificial intelligence.

As the first global AI summit held in the Global South, the gathering aligned with India’s national commitment to welfare for all and the wider aspiration to advance AI for humanity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Social media ban for children gains momentum in Germany

Germany’s coalition government is weighing new restrictions on children’s access to social media as both governing parties draft proposals to tighten online safeguards. The debate comes amid broader economic pressures, with industry reporting significant job losses last year.

The conservative bloc and the centre-left Social Democrats are examining measures that could curb or block social media access for minors. Proposals under discussion include age-based restrictions and stronger platform accountability.

The Social Democrats in Germany have proposed banning access for children under 14 and introducing dedicated youth versions of platforms for users aged 14 to 16. Supporters argue that clearer age thresholds could reduce exposure to harmful content and addictive design features.

The discussions align with a growing European trend toward stricter digital child protection rules. Several governments are exploring tougher age verification and content moderation standards, reflecting mounting concerns over online safety and mental health.

The policy debate unfolded as German industry reported cutting 124,100 jobs in 2025 amid ongoing economic headwinds. Lawmakers face the dual challenge of safeguarding younger users while navigating wider structural pressures affecting Europe’s largest economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Windows 11 gains enterprise 5G management through Ericsson partnership

Ericsson and Microsoft have integrated advanced 5G into Windows 11 to simplify secure enterprise laptop connectivity. The update embeds AI-driven 5G management, enabling IT teams to automate connections and enforce policy-based controls at scale.

The solution combines Microsoft Intune with Ericsson Enterprise 5G Connect, a cloud-based platform that monitors network quality and optimises performance. Enterprises can switch service providers and automatically apply internal connectivity policies.

IT departments can remotely provision eSIMs, prioritise 5G networks, and enforce secure profiles across laptop fleets. Automation reduces manual configuration and ensures consistent compliance across locations and service providers.

The companies say the integration addresses long-standing barriers to adopting cellular-connected PCs, including complexity and fragmented management. Multi-market pilots have preceded commercial availability in the United States, Sweden, Singapore, and Japan.

Additional launches are planned in 2026 across Spain, Germany, and Finland. Executives from both firms describe the collaboration as a step toward AI-ready enterprise devices with secure, always-on connectivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!