Anthropic launches Bengaluru office to drive responsible AI in India

AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as its base.

The move follows OpenAI’s recent expansion into New Delhi, underlining India’s growing importance as a hub for AI development and adoption.

CEO Dario Amodei said India’s combination of vast technical talent and the government’s commitment to equitable AI progress makes it an ideal location.

The Bengaluru office will focus on developing AI solutions tailored to India’s needs in the education, healthcare, and agriculture sectors.

Amodei is visiting India to strengthen ties with enterprises, nonprofits, and startups, and to promote responsible AI use aligned with India’s digital growth strategy.

Following its Tokyo launch, Anthropic plans further expansion in the Indo-Pacific region later in the year.

Chief Commercial Officer Paul Smith noted the rising demand among Indian companies for trustworthy, scalable AI systems. Anthropic’s Claude models are already accessible in India through its API, Amazon Bedrock, and Google Cloud Vertex AI.
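For developers, that access amounts to a short API call. Below is a minimal sketch using the anthropic Python SDK; the model name and prompt are illustrative assumptions, and the client expects an ANTHROPIC_API_KEY environment variable.

```python
# Minimal sketch: querying a Claude model through Anthropic's API.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment
# variable; the model name below is illustrative.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=256,
    messages=[{"role": "user", "content": "Explain this stack trace in two sentences."}],
)

print(message.content[0].text)  # first content block holds the text reply
```

The same models are reachable through Amazon Bedrock and Google Cloud Vertex AI via equivalent calls, with authentication handled by those platforms instead.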

The company serves more than 300,000 businesses worldwide, with nearly 80 percent of usage outside the US.

India has become the second-largest market for Claude, with developers using it for tasks such as mobile UI design and web app debugging.

Anthropic is also enhancing Claude’s multilingual capabilities in major Indic languages, including Hindi, Bengali, and Tamil, to support education and public sector projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT introduces new generation of interactive apps

A new generation of interactive apps is arriving in ChatGPT, allowing users to engage with tools like Canva, Spotify, and Booking.com directly through conversation. The apps appear naturally during chats, enabling users to create, learn, and explore within the same interface.

Developers can now build their own ChatGPT apps using the newly launched Apps SDK, released in preview as an open standard based on the Model Context Protocol. The SDK includes documentation, examples, and testing tools, with app submissions and monetisation to follow later this year.
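Since the Apps SDK is an open standard built on the Model Context Protocol, a minimal MCP server gives a feel for the foundation. The sketch below uses the official mcp Python SDK; the playlist tool is a hypothetical illustration, not part of OpenAI’s Apps SDK itself.

```python
# Minimal sketch of a Model Context Protocol (MCP) server using the
# official `mcp` Python SDK. The Apps SDK builds on this protocol;
# the tool below is a hypothetical illustration.
from mcp.server.fastmcp import FastMCP

server = FastMCP("playlist-demo")

@server.tool()
def suggest_playlist(mood: str, length: int = 10) -> str:
    """Return a hypothetical playlist description for a given mood."""
    return f"A {length}-track playlist for a {mood} mood."

if __name__ == "__main__":
    server.run()  # serves the tool over stdio by default
```

A host application can then discover and invoke the tool through the protocol, which is broadly how conversational apps expose their functionality to a chat interface.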

Over 800 million ChatGPT users can now access these apps on the Free, Go, Plus, and Pro plans, although they are not yet available in the EU. Early partners include Booking.com, Coursera, Canva, Figma, Expedia, Spotify, and Zillow, with more to follow later in the year.

Apps respond to natural language and integrate interactive features such as maps, playlists, and slides directly in chat. ChatGPT can even suggest relevant apps during conversations: for instance, showing Zillow listings when discussing home purchases or prompting Spotify for a party playlist.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk exploring how human-centred and humanistic language in AI policy is widespread, but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or the protection of civil liberties, but often do so within deregulatory frameworks or with only voluntary oversight.

For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk reducing humans to mere rubber stamps.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are formally placed at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including some with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
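As a rough illustration of one of those techniques, differential testing runs the original and patched code on the same generated inputs and flags any behavioural divergence. The Python sketch below is a conceptual toy under those assumptions, not CodeMender’s actual pipeline.

```python
# Conceptual toy illustrating differential testing: compare original and
# patched versions of a function on the same random inputs and report a
# counterexample if their behaviour ever diverges. Not Google's pipeline.
import random

def run(fn, value):
    """Capture a function's result, or the exception type it raises."""
    try:
        return ("ok", fn(value))
    except Exception as exc:
        return ("raised", type(exc).__name__)

def differential_test(original, patched, trials=1000):
    """Return an input where behaviour diverges, or None if none found."""
    for _ in range(trials):
        value = random.randint(-100, 100)  # toy input generator
        if run(original, value) != run(patched, value):
            return value
    return None

# Usage: a hypothetical patch that accidentally changes behaviour is caught.
original = lambda x: 100 // (x + 1)
patched = lambda x: 100 // (x + 2)  # faulty "fix"
print(differential_test(original, patched))  # prints a diverging input
```

In a real pipeline the input generator would be a fuzzer, and the check would cover crashes, sanitiser reports, and test-suite results rather than return values alone.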

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Policy hackathon shapes OpenAI proposals ahead of EU AI strategy

OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.

The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.

Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.

Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.

The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s competition watchdog urges AI self-audits to prevent market distortions

The Competition Commission of India (CCI) has urged companies to self-audit their AI systems to prevent anti-competitive practices and ensure responsible autonomy.

The call came as part of the CCI’s market study on AI, which emphasised the risks of opacity and algorithmic collusion while highlighting AI’s potential to enhance innovation and productivity.

The study warned that dominant firms could exploit their control over data, infrastructure, and proprietary models to reinforce market power, creating barriers to entry. It also noted that opaque AI systems in user sectors may lead to tacit algorithmic coordination in pricing and strategy, undermining fair competition.

India’s regulatory approach, the CCI said, aims to balance technological progress with accountability through a co-regulatory framework that promotes both competition and innovation.

Additionally, the Commission plans to strengthen its technical capacity, establish a digital markets think tank and host a conference on AI and regulatory challenges.

The report recommended a six-step self-audit framework for enterprises, requiring evaluation of AI systems against competition risks, senior management oversight, and clear accountability in high-risk deployments.

It also highlighted AI’s pro-competitive effects, particularly for MSMEs, which benefit from improved efficiency and greater access to digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy passes Europe’s first national AI law

Italy has become the first EU country to pass a national AI law, introducing detailed rules to govern the development and use of AI technologies across key sectors such as health, work, and justice.

The law, approved by the Senate on 17 September and in force since 10 October, designates the national authorities responsible for oversight, including the Agency for Digital Italy and the National Cybersecurity Agency. Both bodies will supervise compliance, security, and the responsible use of AI systems.

In healthcare, the law simplifies data-sharing for scientific research by allowing the secondary use of anonymised or pseudonymised patient data. New rules also ensure transparency and consent when AI is used by minors under 14.
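As a rough illustration of what pseudonymisation means in practice (the law does not prescribe a particular method), a keyed hash can replace a direct identifier so research records remain linkable without exposing identity. The sketch below is a simplified, assumption-laden example, not a compliance recipe.

```python
# Simplified illustration of pseudonymisation: replace a direct patient
# identifier with a keyed hash. Records stay linkable across datasets,
# while only the key holder could re-identify them. Illustrative only,
# not a statement of what the Italian law requires.
import hashlib
import hmac

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identifier."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()

# Same input and key always yield the same pseudonym, enabling linkage.
record = {"patient": pseudonymise("patient-12345", b"hospital-held-key"),
          "diagnosis": "example"}
print(record["patient"][:16])  # truncated for display
```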

The law introduces criminal penalties for those who use AI-generated images or videos to cause harm or deception. The Italian approach combines regulation with innovation, seeking to protect citizens while promoting responsible growth in AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI backs policy push for Europe’s AI uptake

OpenAI and Allied for Startups have released Hacktivate AI, a set of 20 ideas to speed up AI adoption across Europe ahead of the Commission’s Apply AI Strategy.

The report emerged from a Brussels policy hackathon with 65 participants from EU bodies, governments, enterprises and startups, proposing measures such as an Individual AI Learning Account, an AI Champions Network for SMEs, a European GovAI Hub and ‘relentless harmonisation’.

OpenAI highlights strong European demand and uneven workplace uptake, citing sector gaps and the need for targeted support, while pointing to initiatives like OpenAI Academy to widen skills.

Broader policy momentum is building, with the EU preparing an Apply AI Strategy to boost homegrown tools and cut dependencies, reinforcing the push for practical deployment across public services and industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but these trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older cohorts, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Exposure, automation, and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest that the share of workers most affected by AI has remained stable, including among the unemployed.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!