Anthropic launches Bengaluru office to drive responsible AI in India

AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as its base.

The move follows OpenAI’s recent expansion into New Delhi, underlining India’s growing importance as a hub for AI development and adoption.

CEO Dario Amodei said India’s combination of vast technical talent and the government’s commitment to equitable AI progress makes it an ideal location.

The Bengaluru office will focus on developing AI solutions tailored to India’s needs in the education, healthcare, and agriculture sectors.

Amodei is visiting India to strengthen ties with enterprises, nonprofits, and startups and promote responsible AI use that is aligned with India’s digital growth strategy.

Anthropic plans further expansion in the Indo-Pacific region later this year, following its Tokyo launch.

Chief Commercial Officer Paul Smith noted the rising demand among Indian companies for trustworthy, scalable AI systems. Anthropic’s Claude models are already accessible in India through its API, Amazon Bedrock, and Google Cloud Vertex AI.
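
For developers reaching Claude through the first-party API, a minimal request looks roughly like the sketch below, which uses the Anthropic Python SDK; the model identifier and prompt are illustrative assumptions, and access via Amazon Bedrock or Vertex AI goes through those platforms’ own clients instead.

```python
# Minimal sketch: calling Claude through the Anthropic API (Python SDK).
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model name below is an illustrative placeholder, not a recommendation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; substitute a current one
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarise this bug report in two sentences: ..."}
    ],
)

# The reply arrives as a list of content blocks; text blocks carry the answer.
print(response.content[0].text)
```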

The company serves more than 300,000 businesses worldwide, with nearly 80 percent of usage outside the US.

India has become the second-largest market for Claude, with developers using it for tasks such as mobile UI design and web app debugging.

Anthropic is also enhancing Claude’s multilingual capabilities in major Indic languages, including Hindi, Bengali, and Tamil, to support education and public-sector projects.


OpenAI unveils AgentKit for faster AI agent creation

OpenAI has launched AgentKit, a new suite of developer tools designed to simplify the creation, deployment, and optimisation of AI-powered agents. The platform unifies workflows that previously required multiple systems, offering a faster and more visual way to build intelligent applications.

AgentKit includes Agent Builder, Connector Registry, ChatKit, and advanced evaluation tools. Developers can now design multi-agent workflows on a visual canvas, manage data connections across workspaces, and integrate chat-based agents directly into apps and websites.
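
AgentKit’s Agent Builder is a visual canvas rather than a code-first tool, so there is no single canonical snippet to reproduce here; as a rough sketch of the kind of multi-agent handoff such a workflow encodes, the example below uses OpenAI’s separate Agents SDK for Python, with the agent names, instructions, and sample query all invented for illustration.

```python
# Illustrative sketch only: a small multi-agent workflow with OpenAI's Agents SDK.
# This is not AgentKit's visual Agent Builder; agent names, instructions, and the
# routing logic are assumptions made up for this example.
# Assumes the `openai-agents` package is installed and OPENAI_API_KEY is set.
from agents import Agent, Runner

# A specialist agent that handles billing questions.
billing_agent = Agent(
    name="Billing agent",
    instructions="Answer questions about invoices and refunds concisely.",
)

# A triage agent that hands conversations off to the specialist when relevant.
triage_agent = Agent(
    name="Triage agent",
    instructions="Route billing questions to the billing agent; answer everything else yourself.",
    handoffs=[billing_agent],
)

# Run the workflow synchronously on a single user message.
result = Runner.run_sync(triage_agent, "Why was I charged twice last month?")
print(result.final_output)
```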

Early users such as Ramp and LY Corporation built working agents in just a few hours, cutting development cycles by up to 70%. Companies including Canva and HubSpot have used ChatKit to embed conversational support agents, transforming customer experience and developer engagement.

New evaluation features and reinforcement fine-tuning allow users to test, grade, and improve agents’ reasoning abilities. AgentKit is now available to developers and enterprises through OpenAI’s API and ChatGPT Enterprise, with a wider rollout expected later this year.


New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and the Asia-Pacific region report rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported costs of at least six figures for a single hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet despite these risks, 30% of businesses do not test their recovery plans annually.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.


Scammers use AI to fake British boutiques

Fraudsters are using AI-generated images and back stories to pose as British family businesses, luring shoppers into buying cheap goods from Asia. Websites claiming to be long-standing local boutiques have been linked to warehouses in China and Hong Kong.

Among them is C’est La Vie, which presented itself as a Birmingham jeweller run by a couple called Eileen and Patrick. The supposed owners appeared in highly convincing AI-generated photos, while customers later discovered their purchases were shipped from China.

Victims described feeling cheated after receiving poor-quality jewellery and clothes that bore no resemblance to the advertised items. More than 500 complaints on Trustpilot accuse such companies of exploiting fabricated stories to appear authentic.

Consumer experts at Which? warn that AI tools now enable scammers to create fake brands at an unprecedented scale. The Advertising Standards Authority (ASA) has called on social media platforms to act, as many victims were targeted through Facebook ads.


Employees embrace AI but face major training and trust gaps

SnapLogic has published new research highlighting how AI adoption is reshaping daily work across industries while exposing gaps in trust, training, and leadership strategy.

The study finds that 78% of employees already use AI in their roles, with half using autonomous AI agents. Workers interact with AI almost daily and save over three hours per week. However, 94% say they face barriers to practical use, with concerns over data privacy and security topping the list.

Based on a survey of 3,000 US, UK, and German employees, the research finds widespread but uneven AI support. Training is a significant gap, with only 63% receiving company-led education. Many rely on trial and error, and managers are more likely to be trained than non-managers.

Generational and hierarchical differences are also evident. Seventy percent of managers express strong confidence in AI, compared with 43% of non-managers. Half believe they will one day be managed by AI agents rather than by people, and many expect to answer to AI themselves.

SnapLogic’s CTO, Jeremiah Stone, says the agile enterprise is about easing workloads and sparking creativity, not replacing people. The findings underscore the need for companies to align strategy, training, and trust to fully realise AI’s potential in the workplace.


Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open-source projects, including some with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing, and differential testing to trace the root causes of vulnerabilities.
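
Google has not released CodeMender’s internals, but differential testing itself is a simple idea: run the original and the patched code over a stream of generated inputs and flag any behavioural divergence. The toy sketch below, with both functions invented purely for illustration, shows the pattern.

```python
# Toy illustration of differential testing, not CodeMender itself:
# fuzz random inputs and compare an original function against a patched candidate,
# reporting any input where their behaviour diverges (a possible regression).
import random

def original(values):
    # Hypothetical buggy routine: raises ValueError on an empty list.
    return max(values) - min(values)

def patched(values):
    # Candidate fix: guard the empty case instead of raising.
    if not values:
        return 0
    return max(values) - min(values)

def run(fn, values):
    """Capture either the return value or the exception type as the observable behaviour."""
    try:
        return ("ok", fn(values))
    except Exception as exc:
        return ("error", type(exc).__name__)

def differential_test(trials=1000, seed=0):
    rng = random.Random(seed)
    divergences = []
    for _ in range(trials):
        case = [rng.randint(-100, 100) for _ in range(rng.randint(0, 5))]
        before, after = run(original, case), run(patched, case)
        if before != after:
            divergences.append((case, before, after))
    return divergences

for case, before, after in differential_test()[:5]:
    print(f"input={case!r}: original={before}, patched={after}")
```

In this toy case the only divergences occur on empty inputs, which is exactly the behaviour the patch is meant to change; run at scale against a real test corpus, the same comparison gives a patching agent evidence that nothing else has regressed.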

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.


Brazil advances first national cybersecurity law

Brazil is preparing to pass its first national cybersecurity law, aiming to centralise oversight and strengthen protection for citizens and companies. The Cybersecurity Legal Framework would establish a new National Cybersecurity Authority to coordinate defence efforts across government and industry.

The legislation comes after a series of high-profile cyberattacks disrupted hospitals and exposed millions of personal records, highlighting gaps in Brazil’s digital defences. The authority would create nationwide standards, replacing fragmented rules currently managed by individual ministries and agencies.

Under the bill, public procurement will require compliance with official security standards, and suppliers will share responsibility for incidents. Companies meeting the rules could be listed as trusted providers, potentially boosting competitiveness in both public and private sectors.

The framework also includes incentives: financing through the National Public Security Fund and priority for locally developed technologies. While the bill still awaits approval in Congress, its adoption would make Brazil one of Latin America’s first countries with a comprehensive cybersecurity law.


Anthropic’s Claude to power Deloitte’s new enterprise AI expansion

Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.

The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.

Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.

The collaboration will focus on developing AI-driven tools for compliance, automation, and data analysis, especially in highly regulated industries such as finance and healthcare.

The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.


ChatGPT reaches 800 million weekly users as OpenAI’s value hits $500 billion

OpenAI CEO Sam Altman has announced that ChatGPT now reaches 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises and governments.

The figure marks another milestone for the company, which reported 700 million weekly users in August and 500 million at the end of March.

Altman shared the news during OpenAI’s Dev Day keynote, noting that four million developers are now building with OpenAI tools. He said ChatGPT processes more than six billion tokens per minute through its API, signalling how deeply integrated it has become across digital ecosystems.

The event also introduced new tools for building apps directly within ChatGPT and creating more advanced agentic systems. Altman said these will support a new generation of interactive and personalised applications.

OpenAI, still legally a nonprofit, was recently valued at $500 billion following a private stock sale worth $6.6 billion.

Its growing portfolio now includes the Sora video-generation tool, a new social platform, and a commerce partnership with Stripe, consolidating its status as the world’s most valuable private company.


India’s competition watchdog urges AI self-audits to prevent market distortions

The Competition Commission of India (CCI) has urged companies to self-audit their AI systems to prevent anti-competitive practices and ensure responsible autonomy.

The call came as part of the CCI’s market study on AI, which emphasised the risks of opacity and algorithmic collusion while highlighting AI’s potential to enhance innovation and productivity.

The study warned that dominant firms could exploit their control over data, infrastructure, and proprietary models to reinforce market power, creating barriers to entry. It also noted that opaque AI systems in user sectors may lead to tacit algorithmic coordination in pricing and strategy, undermining fair competition.

India’s regulatory approach, the CCI said, aims to balance technological progress with accountability through a co-regulatory framework that promotes both competition and innovation.

Additionally, the Commission plans to strengthen its technical capacity, establish a digital markets think tank and host a conference on AI and regulatory challenges.

The report recommended a six-step self-audit framework for enterprises, requiring evaluation of AI systems against competition risks, senior management oversight, and clear accountability in high-risk deployments.

It also highlighted AI’s pro-competitive effects, particularly for MSMEs, which benefit from improved efficiency and greater access to digital markets.
