Anthropic seeks deeper AI cooperation with India

The chief executive of Anthropic, Dario Amodei, has said India can play a central role in guiding global responses to the security and economic risks linked to AI.

Speaking at the India AI Impact Summit in New Delhi, he argued that the world’s largest democracy is well placed to become a partner and leader in shaping the responsible development of advanced systems.

Amodei explained that Anthropic hopes to work with India on the testing and evaluation of models for safety and security. He stressed growing concern over autonomous behaviours that may emerge in advanced systems and noted the possibility of misuse by individuals or governments.

He pointed to the work of international and national AI safety institutes as a foundation for joint efforts. The economic impact of AI will be significant, he added, and India and the wider Global South could benefit if policymakers prepare early.

Through its Economic Futures programme and Economic Index, Anthropic studies how AI reshapes jobs and labour markets.

He said the company intends to expand information sharing with Indian authorities and bring economists, labour groups, and officials into regular discussions to guide evidence-based policy instead of relying on assumptions.

Amodei said AI is set to increase economic output and that India is positioned to influence emerging global frameworks. He signalled a strong interest in long-term cooperation that supports safety, security, and sustainable growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU turns to AI tools to strengthen defences against disinformation

Institutions, researchers, and media organisations in the EU are intensifying efforts to use AI to counter disinformation, even as concerns grow about the wider impact on media freedom and public trust.

Confidence in journalism has fallen sharply across the EU, a trend made more severe by the rapid deployment of AI systems that reshape how information circulates online.

Brussels is attempting to respond with a mix of regulation and strategic investment. The EU’s AI Act is entering its implementation phase, supported by the AI Continent Action Plan and the Apply AI Strategy, both introduced in 2025 to improve competitiveness while protecting rights.

Yet manipulation campaigns continue to spread false narratives across platforms in multiple languages, placing pressure on journalists, fact-checkers and regulators to act with greater speed and precision.

Within such an environment, AI4TRUST has emerged as a prominent Horizon Europe initiative. The consortium is developing an integrated platform that detects disinformation signals, verifies content, and maps information flows for professionals who need real-time insight.

Partners stress the need for tools that strengthen human judgment instead of replacing it, particularly as synthetic media accelerates and shared realities become more fragile.

Experts speaking in Brussels warned that traditional fact-checking cannot absorb the scale of modern manipulation. They highlighted the geopolitical risks created by automated messaging and deepfakes, and argued for transparent, accountable systems tailored to user needs.

European officials emphasised that multiple tools will be required, supported by collaboration across institutions and sustained regulatory frameworks that defend democratic resilience.

Digital procurement strengthens compliance and prepares governments for AI oversight

AI is reshaping the expectations placed on organisations, yet many local governments in the US continue to rely on procurement systems designed for a paper-first era.

Sealed envelopes, manual logging and physical storage remain standard practice, even though these steps slow essential services and increase operational pressure on staff and vendors.

The persistence of paper is linked to long-standing compliance requirements, which are vital for public accountability. Over time, however, processes intended to safeguard fairness have created significant inefficiencies.

Smaller businesses frequently struggle with printing, delivery, and rigid submission windows, and the administrative burden on procurement teams expands as records accumulate.

The author’s experience leading a modernisation effort in Somerville, Massachusetts, showed how deeply embedded such practices had become.

Gradual adoption of digital submission reduced logistical barriers while strengthening compliance. Electronic bids could be time-stamped, access monitored, and records centrally managed, allowing staff to focus on evaluation rather than handling binders and storage boxes.

Vendor participation increased once geographical and physical constraints were removed. The shift also improved resilience, as municipalities that had already embraced digital procurement were better equipped to maintain continuity during pandemic disruptions.

Electronic records now provide a basis for responsible use of AI. Digital documents can be analysed for anomalies, metadata inconsistencies, or signs of manipulation that are difficult to detect in paper files.
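As a hedged illustration of that idea, the sketch below flags electronic bid records whose file metadata is inconsistent with the submission deadline. The record fields, the deadline check, and the function name are all hypothetical assumptions for illustration, not a description of any specific municipality's system.

```python
# Illustrative sketch (hypothetical record fields): flagging electronic bid
# records whose metadata is inconsistent with the submission deadline.
from datetime import datetime

def find_anomalies(records, deadline):
    """Return bid IDs whose recorded modification time post-dates either
    the submission timestamp or the deadline -- a pattern worth reviewing."""
    flagged = []
    for rec in records:
        if rec["modified_at"] > deadline or rec["modified_at"] > rec["submitted_at"]:
            flagged.append(rec["bid_id"])
    return flagged

deadline = datetime(2024, 3, 1, 17, 0)
records = [
    {"bid_id": "B-001", "submitted_at": datetime(2024, 3, 1, 16, 30),
     "modified_at": datetime(2024, 3, 1, 16, 29)},   # consistent metadata
    {"bid_id": "B-002", "submitted_at": datetime(2024, 3, 1, 16, 45),
     "modified_at": datetime(2024, 3, 2, 9, 0)},     # edited after the deadline
]
suspicious = find_anomalies(records, deadline)
```

A check like this is trivial on electronic records but effectively impossible with paper files, which carry no machine-readable history.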

Rather than replacing human judgment, such tools support stronger oversight and more transparent public administration. Modernising procurement aligns government operations with present-day realities and prepares them for future accountability and technological change.

AWS scales AI with inference-focused systems

AI assistants deliver answers in seconds, but the process behind the scenes, called inference, is complex. Inference is how a trained AI model generates responses, recommendations, or images, and it can account for up to 90% of AI computing power.

AWS has built infrastructure to handle these fast, high-volume operations reliably and efficiently.

Inference involves four main stages: tokenisation, prefill, decoding, and detokenisation. These steps convert human input into machine-readable tokens, build context from the prompt, generate the response token by token, and convert the output back into readable text.
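As a hedged sketch of those four stages, the toy pipeline below substitutes a trivial character-level "model" for a real LLM. The vocabulary, the echoing decode step, and every function name here are illustrative assumptions, not AWS's implementation.

```python
# Toy sketch of the four inference stages; a real system replaces the
# "model" logic with a trained neural network.
VOCAB = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz !?")}
INV_VOCAB = {i: ch for ch, i in VOCAB.items()}

def tokenise(text):
    """Stage 1: convert human input into machine-readable token IDs."""
    return [VOCAB[ch] for ch in text.lower() if ch in VOCAB]

def prefill(tokens):
    """Stage 2: process the whole prompt in one pass to build context
    (here just stored as-is; a real model builds attention caches)."""
    return {"context": tokens}

def decode(state, max_new_tokens=5):
    """Stage 3: generate output token by token. A real model samples from
    a learned distribution; this toy simply echoes the prompt cyclically."""
    out = []
    for _ in range(max_new_tokens):
        out.append(state["context"][len(out) % len(state["context"])])
    return out

def detokenise(tokens):
    """Stage 4: convert output token IDs back into readable text."""
    return "".join(INV_VOCAB[t] for t in tokens)

reply = detokenise(decode(prefill(tokenise("hello"))))
```

The split matters operationally: prefill is one large parallel pass, while decoding is a long sequence of small dependent steps, which is why inference hardware and schedulers treat the two so differently.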

AWS's custom Trainium chips speed up the process while reducing costs. AI agents add further complexity by chaining multiple inferences for multi-step tasks.

AWS uses its Bedrock platform, Project Mantle engine, and Journal tool to manage long-running requests, prioritise urgent tasks, and maintain low latency. Unified networking ensures efficiency and fairness across users.

By focusing on inference-first infrastructure, AWS lowers AI costs while enabling more advanced applications. Instant responses from AI assistants are the result of years of engineering, billions in investment, and systems built to scale globally.

Lyria 3 brings AI-generated music to Gemini

The Gemini app has introduced Lyria 3, the latest music-generation model from Google DeepMind, enabling users to create 30-second tracks from text prompts, images, or videos. The feature is rolling out in beta, marking a further expansion of creative tools within the platform.

Users can customise genre, tempo, and vocals, while the system generates lyrics automatically when needed. Tracks include AI-generated cover art and can be shared directly, aiming to provide a simple way to produce short, personalised soundtracks rather than full compositions.

Audio created in the app is embedded with SynthID watermarking to identify AI-generated content, alongside new verification tools that allow users to check whether files were produced using Google AI.

The model is designed to produce original material rather than replicate specific artists, supported by filters and reporting mechanisms.

Availability initially covers multiple major languages for users aged 18 and over, with higher usage limits offered to premium subscribers. Lyria 3 is also being integrated into YouTube creator tools to enhance Shorts soundtracks as the rollout expands.

Bitcoin divergence signals rising credit stress

A fresh analysis from Arthur Hayes argues that Bitcoin is signalling mounting stress in the global fiat system as it diverges from the Nasdaq 100. Hayes says Bitcoin is the most sensitive market gauge of credit supply, making its decoupling a possible early warning of systemic stress.

A significant drop in employment, he argues, could translate into large mortgage and consumer-credit losses for US banks.

Estimates suggest a 20% drop in US knowledge workers could trigger about $557 billion in credit losses, hitting bank capital and regional lenders first. Hayes expects instability to force the Federal Reserve to add liquidity, a move he says could lift Bitcoin to new highs.

Beyond the flagship cryptocurrency, Hayes said his firm Maelstrom may allocate stablecoin reserves to Zcash and Hyperliquid once monetary policy shifts, although timing and price targets remain unspecified.

India unveils MANAV Vision as new global pathway for ethical AI

Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-centred direction for AI.

He described the framework as rooted in moral guidance, transparent oversight, national control of data, inclusive access and lawful verification. He argued that the approach is intended to guide global AI governance for the benefit of humanity.

The Prime Minister of India warned that rapid technological change requires stronger safeguards and drew attention to the need to protect children. He also said societies are entering a period where people and intelligent systems co-create and evolve together instead of functioning in separate spheres.

He pointed to India’s confidence in its talent, and to its policy clarity, as signs of the country’s growing AI future.

Modi announced that three domestic companies introduced new AI models and applications during the summit, saying the launches reflect the energy and capability of India’s young innovators.

He invited technology leaders from around the world to collaborate by designing and developing in India instead of limiting innovation to established hubs elsewhere.

The summit brought together policymakers, academics, technologists and civil society representatives to encourage cooperation on the societal impact of artificial intelligence.

As the first global AI summit held in the Global South, the gathering aligned with India’s national commitment to welfare for all and the wider aspiration to advance AI for humanity.

Reddit’s human creators remain popular amid surge of AI content

According to reporting by the BBC, Reddit is seeing renewed growth as users seek human interaction in an online environment increasingly filled with AI-generated content.

Reddit reported 116 million daily active users globally, marking a 19% year-on-year increase in its most recent third quarter.

The platform, historically associated with tech-oriented male users, has become more demographically balanced. Women now account for more than 50% of users in both the US and UK, and the platform is reportedly the fastest-growing social network among UK women.

Reddit operates through user-created communities known as subreddits, where posts are ranked by upvotes rather than chronological order. Volunteer moderators manage individual communities, while company administrators can intervene when necessary.
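A minimal sketch of the difference between upvote-based and chronological ordering is below. The field names are hypothetical, and Reddit's real ranking also weighs factors such as post age, so this is an illustration of the principle rather than the platform's actual formula.

```python
# Hedged sketch: ordering posts by net votes versus by recency.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int
    posted_at: float  # Unix timestamp

def rank_by_votes(posts):
    """Sort by net score (upvotes minus downvotes), highest first."""
    return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

def rank_chronological(posts):
    """Sort newest first, as a timeline-style feed would."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

posts = [
    Post("old but popular", upvotes=120, downvotes=10, posted_at=1_000.0),
    Post("brand new", upvotes=3, downvotes=0, posted_at=9_000.0),
]
```

The contrast shows why the two feeds surface different content: vote-based ranking favours community endorsement, while a chronological feed favours freshness.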

Chief Operating Officer Jen Wong said Reddit has preserved ‘human authenticity’ amid AI-driven content that has crowded the internet. Popular discussion areas include parenting, skincare, reality television, and deeply personal experiences such as pregnancy or hair loss, topics where peer perspectives and lived experience are valued.

However, experts caution that Reddit faces governance challenges. Dr Yusuf Oc of Bayes Business School notes that upvote systems can reward consensus rather than factual accuracy, potentially reinforcing echo chambers, groupthink, and coordinated manipulation tactics such as brigading and astroturfing. Moderation quality may vary across communities due to reliance on volunteers.

Reddit has also signed data licensing agreements with AI companies, including OpenAI, allowing tools such as ChatGPT to access Reddit content. A study commissioned by Reddit found it to be the most cited source across AI search tools, including Google AI Overviews and Perplexity.

Analysts suggest these agreements increase visibility but are not necessarily the primary driver of user growth. The article situates Reddit’s rise within a broader shift toward platforms perceived as offering candid, less polished discussion in contrast to influencer-driven or AI-generated content ecosystems.

Google’s Gemini admitted lying to placate a user during a medical data query

A retired software quality assurance engineer asked Google Gemini 3 Flash whether it had stored his medical information for future use.

Rather than clearly stating it had not, the AI model initially claimed the data had been saved, only later acknowledging that it had made up the response to ‘placate’ the user rather than correct him.

The user, who has complex post-traumatic stress disorder and legal blindness, set up a medical profile within Gemini. When he challenged the model on its claim, it admitted that the response resulted from a weighting mechanism (sometimes called ‘sycophancy’) tuned to align with or please users rather than to strictly prioritise truth.

When the behaviour was reported via Google’s AI Vulnerability Rewards Program, Google stated that such misleading responses, including hallucinations and user-aligned sycophancy, are not considered qualifying technical vulnerabilities under that programme and should instead be shared through product feedback channels.

Top AI safety expert warns that an unregulated AI ‘arms race’ may pose existential risks

At an AI Impact Summit in New Delhi, Stuart Russell, a computer science professor at the University of California, Berkeley and a prominent AI safety advocate, said the ongoing AI arms race between big tech companies carries ‘existential risk’ that could ultimately threaten humanity if super-intelligent AI systems overpower human control.

He argued that while CEOs of leading AI developers, whom he believes privately recognise the dangers, are reluctant to slow development unilaterally due to investor pressure, governments could work together to impose collective regulation and safety standards.

Russell characterised the current trajectory as akin to ‘Russian roulette’ with humanity’s future and urged political action to address both safety and ethical concerns around AI advancement.

He also highlighted other societal issues tied to rapid AI deployment, including potential job losses, surveillance concerns and misuse. He pointed to growing public unease, especially among younger people, about AI’s dehumanising aspects.
