AI Readiness Assessment Report highlights India’s progress and gaps in ethical AI

UNESCO and India’s Ministry of Electronics and Information Technology (MeitY) have launched the India AI Readiness Assessment Report during the India AI Impact Summit 2026. The report evaluates the country’s progress in building an ethical and human-centred AI ecosystem.

Developed by UNESCO with the IndiaAI Mission and Ikigai Law as implementing partner, the report draws on consultations with more than 600 stakeholders from government, academia, industry, and civil society. The assessment examined governance, workforce readiness, and infrastructure development.

Principal Scientific Adviser to the Government of India, Dr Ajay Kumar Sood, emphasised the importance of embedding ethics throughout the technology lifecycle. ‘AI is here to make an impact. The question is not how fast we adopt AI, but how thoughtfully we shape it,’ he said.

The report highlights the country’s growing role in global AI development, noting that India accounts for around 16% of the world’s AI talent and has filed more than 86,000 AI-related patents since 2010. It also points to progress in multilingual AI systems and digital public services.

The assessment also identifies policy priorities, including stronger legal frameworks, inclusive workforce transitions, and better access to high-quality datasets. UNESCO officials said the recommendations aim to support responsible AI governance and strengthen public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.

Financial penalties may reduce the incentive to spread misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.

How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when asked for factual information and with no intent to persuade.

Researchers compared the responses of 1,912 participants who read either AI-generated summaries of historical events or the corresponding Wikipedia entries, and found measurable differences in opinion.

The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models, which subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.

AI helps Stanford researchers map schistosomiasis risk in Senegal

Stanford researchers have developed an AI-powered system that combines field surveys, drones, and satellite imagery to identify schistosomiasis risk areas across Senegal.

The project began with fieldwork in Senegal, where researchers collected aquatic vegetation and snails from more than 30 river and estuary sites. The samples helped identify environmental conditions linked to schistosomiasis, which affects about 250 million people worldwide, mostly children in sub-Saharan Africa.

Professor Giulio De Leo of Stanford’s Doerr School of Sustainability said the research required scaling beyond local sampling. ‘The work was necessary to discover these risks, but we can only do so much locally.’

Early support from the Stanford Institute for Human-Centered AI enabled the development of machine learning tools capable of identifying disease-related snails and vegetation in imagery. The system now integrates field observations with drone and satellite data to detect potential infection hotspots.

Researchers say the approach can support public health monitoring and environmental analysis. The machine learning methods developed for the project are also being applied to agriculture, forest monitoring, and mosquito-borne disease research.

Cisco report highlights cybersecurity risks and benefits of industrial AI

AI is becoming central to industrial networking strategies, but it is also creating new security challenges, according to Cisco’s 2026 State of Industrial AI Report.

Based on a survey of 1,000 professionals across 19 countries and 21 sectors, the report shows organisations view cybersecurity as both a barrier and an opportunity for AI adoption. About 40% cited cybersecurity concerns as a major obstacle, while 48% named security their biggest networking challenge.

At the same time, many organisations believe AI will strengthen their cyber resilience. Cisco noted that ‘while security gaps are limiting AI scale today, organisations view AI as a tool to strengthen detection, monitoring and resilience’.

The report also highlights organisational challenges, particularly collaboration between IT and operational technology teams. Only 20% of organisations report fully collaborative IT and OT cybersecurity operations, despite the growing importance of coordination for AI deployment.

Cisco said industrial AI adoption is accelerating, with 61% of organisations already deploying AI in industrial environments. However, only one in five reports mature, scaled adoption, suggesting many deployments remain in early stages.

AI models favour Bitcoin over fiat in landmark study

A new study from the Bitcoin Policy Institute, testing 36 AI models across more than 9,000 responses, found that AI agents overwhelmingly prefer Bitcoin over other forms of money.

Bitcoin was the most frequently selected monetary instrument overall, chosen in 48.3% of all responses, whilst almost 91% of responses favoured some form of digital currency over traditional fiat, with no model ranking fiat as its top overall preference.

The preference for Bitcoin was especially pronounced in long-term savings scenarios, where 79.1% of AI responses chose it as the best way to preserve purchasing power over multi-year horizons. For payments and cross-border transfers, however, stablecoins edged ahead, selected in 53.2% of responses compared to Bitcoin’s 36%.

The Bitcoin Policy Institute acknowledged that the study’s methodology had limitations, noting that scenario framing may have influenced results and that the models’ preferences reflect patterns in training data rather than real-world adoption.

Anthropic models showed the strongest Bitcoin preference at 68%, compared to 43% for Google, 39% for xAI, and 26% for OpenAI.

Alibaba’s Qwen AI project faces disruption after key leader steps down

Junyang Lin, a central technical leader of Alibaba’s Qwen AI project, has stepped down just one day after the company unveiled its Qwen 3.5 small models. Lin, who joined Alibaba in 2019 and moved to the Qwen team in 2023, did not give reasons for his decision.

His departure comes at a sensitive moment, as Qwen has emerged as one of China’s most prominent open-weight AI initiatives. The project is a core element of Alibaba’s strategy to compete with leading US developers such as OpenAI, Google, and Anthropic amid intensifying global AI competition.

Alibaba’s newly launched Qwen 3.5 Small Model series comprises four multimodal models with 0.8B to 9B parameters. The systems are designed for on-device deployment and lightweight AI agents, reflecting a focus on efficient and adaptable AI applications.

The release attracted attention from figures including Elon Musk, who commented on the models’ performance. Internally and across the AI ecosystem, including partners linked to Hugging Face, Lin’s exit was described as a significant loss, particularly given his role in advancing open-source development and strengthening global developer engagement.

Crypto exchanges face strict 2027 reserve rules under new Brazil framework

Brazil’s central bank has introduced a regulatory framework requiring licensed crypto exchanges to prove asset sufficiency daily starting on 1 January 2027. The measures align digital asset intermediaries with banking standards on capital management, accounting, and data protection.

Under the rules, exchanges must submit daily attestations confirming that platforms hold adequate fiat and token reserves. Supervisors will review the reports to ensure companies can cover operational, liquidity, and cybersecurity risks while protecting customer balances.
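At its core, a daily attestation of this kind amounts to checking, per asset, that the reserves a platform holds at least cover its aggregate client balances. A minimal illustrative sketch in Python (all names, assets, and figures below are hypothetical assumptions, not part of the Brazilian framework):

```python
from decimal import Decimal

def attest_reserves(reserves: dict, client_balances: dict) -> dict:
    """Check, per asset, that platform reserves cover total client balances.

    Returns a report mapping each asset to its reserve, its client liability,
    and whether the reserve is sufficient. This is a simplified sketch; a real
    attestation would also account for operational, liquidity, and
    cybersecurity risk buffers.
    """
    report = {}
    for asset, liability in client_balances.items():
        reserve = reserves.get(asset, Decimal("0"))
        report[asset] = {
            "reserve": reserve,
            "liability": liability,
            "sufficient": reserve >= liability,
        }
    return report

# Hypothetical example: fiat (BRL) and one token (BTC) held for clients
reserves = {"BRL": Decimal("1050000.00"), "BTC": Decimal("12.5")}
client_balances = {"BRL": Decimal("1000000.00"), "BTC": Decimal("13.0")}
report = attest_reserves(reserves, client_balances)
```

In this hypothetical snapshot the fiat reserve passes the check while the BTC reserve falls short, which is exactly the kind of discrepancy a daily attestation regime is meant to surface. `Decimal` is used rather than floats so monetary comparisons are exact.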

The framework also mandates strict segregation of company and client assets. Exchanges must maintain separate accounts for customer fiat and digital holdings to prevent commingling of funds and improve transparency for regulators.

Platforms operating in Brazil will also be required to follow a specialised accounting manual for digital assets. Standardised rules for classification, valuation, and impairment aim to ensure financial statements clearly reflect exposures across regulated entities.

Authorities will expand oversight of cross-border transfers handled by domestic crypto exchanges. Platforms must report the origins of transactions and the blockchain pathways they follow. The central bank said the framework aims to strengthen resilience and protect customer funds.

OneTrust’s new CEO outlines AI governance ambitions

OneTrust has entered a new leadership phase after appointing John Heyman as chief executive, replacing founder Kabir Barday. Barday will remain on the board in an advisory role as the US-based compliance technology firm continues its push into AI governance.

Heyman said organisations in the US and globally are rapidly integrating AI into daily operations, and that companies deploying large numbers of AI agents increasingly need tools to manage risk, data use, and regulatory compliance.

OneTrust believes demand for governance technology will grow as AI systems multiply inside businesses. Heyman described a future in which automated monitoring tools oversee AI agents operating within company systems.

The company aims to build systems that track how AI agents collect and share data while maintaining enterprise control, as growing AI adoption continues to drive demand for responsible governance platforms.

Santander and Mastercard complete Europe’s first AI agent payment

Spanish banking giant Banco Santander and Mastercard have completed what they describe as Europe’s first live end-to-end payment executed by an AI agent. The pilot combined Santander’s live payments infrastructure with Mastercard Agent Pay to enable autonomous, permission-based transactions.

Mastercard Agent Pay, launched in April 2025, allows AI agents to initiate and complete payments within predefined consumer limits. The transaction was orchestrated with support from PayOS and integrates Microsoft Azure OpenAI Service and Copilot Studio.
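‘Predefined consumer limits’ suggests the agent transacts under a mandate the consumer grants in advance, with caps it cannot exceed. A hypothetical sketch of such a guard (the class, field names, and figures are illustrative assumptions, not Mastercard’s actual Agent Pay API):

```python
from dataclasses import dataclass, field

@dataclass
class SpendingMandate:
    """Hypothetical permission envelope a consumer grants an AI agent."""
    per_payment_limit: float  # maximum amount for any single payment
    total_budget: float       # cumulative cap across all payments
    spent: float = field(default=0.0)

    def authorise(self, amount: float) -> bool:
        """Approve a payment only if it fits both limits; record it if approved."""
        if amount > self.per_payment_limit:
            return False
        if self.spent + amount > self.total_budget:
            return False
        self.spent += amount
        return True

# The agent attempts four payments under a 50-per-payment, 120-total mandate
mandate = SpendingMandate(per_payment_limit=50.0, total_budget=120.0)
results = [mandate.authorise(a) for a in (40.0, 60.0, 50.0, 40.0)]
```

Here the second attempt is refused for exceeding the per-payment cap and the fourth for exhausting the budget, illustrating how autonomy stays ‘permission-based’: the agent decides what to buy, but only within limits the consumer set beforehand.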

Following the pilot, Santander plans to expand testing and explore new partnerships across agentic commerce use cases. The bank, which manages around €1.84 trillion in assets, is positioning AI as a core driver of innovation.

AI initiatives at Santander are led by chief data and AI officer Ricardo Martín Manjón, hired from BBVA. A strategic partnership with OpenAI has also connected up to 30,000 employees to ChatGPT Enterprise in one of the fastest deployments of its kind.

Global competition in agentic payments is intensifying as Citi, US Bank and Westpac trial Mastercard Agent Pay. Westpac recently completed New Zealand’s first authenticated agentic transaction, while DBS, Visa, Axis Bank and RBL Bank are advancing similar intelligent commerce pilots.
