Global South at the heart of India AI plan

India has unveiled the New Delhi Frontier AI Impact Commitments, a new initiative aimed at promoting inclusive and responsible AI, particularly across the Global South. The announcement was made by Union Minister for Electronics and Information Technology Ashwini Vaishnaw at the opening of the India AI Impact Summit 2026.

Vaishnaw described India’s AI strategy as focused on democratisation, scale, and technological sovereignty. He outlined a comprehensive approach spanning the whole AI ecosystem, including applications, models, computing infrastructure, talent, and energy, with a strong emphasis on practical use in sectors such as healthcare, agriculture, education, and public services.

Framing AI as a transformative technology, the minister stressed that its benefits must reach the widest possible population. He called for a human-centric approach that prioritises safety and dignity, while also addressing risks linked to rapid technological change.

The voluntary commitments bring together Indian innovators such as Sarvam, BharatGen, Gnani.ai, and Soket alongside leading global AI companies. Together, they aim to ensure that AI systems are developed and deployed in ways that reflect equity, cultural diversity, and local realities.

One of the core pledges focuses on improving understanding of how AI is used in the real world. Participating organisations will share anonymised and aggregated insights to help policymakers assess AI’s impact on jobs, skills, productivity, and economic transformation, supporting more informed decision-making.
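
In practice, ‘anonymised and aggregated insights’ can be as simple as sharing group-level statistics while suppressing groups too small to be safely anonymous. The sketch below illustrates that idea with invented records and a made-up minimum group size; it is not part of the commitments themselves.

```python
from collections import defaultdict

# Hypothetical per-user records an AI provider might hold internally.
records = [
    {"sector": "healthcare", "task": "triage-summary", "hours_saved": 1.5},
    {"sector": "healthcare", "task": "triage-summary", "hours_saved": 2.0},
    {"sector": "agriculture", "task": "crop-advice", "hours_saved": 0.5},
    {"sector": "agriculture", "task": "crop-advice", "hours_saved": 0.8},
    {"sector": "education", "task": "lesson-plan", "hours_saved": 1.0},
]

MIN_GROUP_SIZE = 2  # groups smaller than this are suppressed, not shared

def aggregate(rows, key):
    """Return only group-level counts and averages, never raw rows."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r["hours_saved"])
    return {
        k: {"n": len(v), "avg_hours_saved": round(sum(v) / len(v), 2)}
        for k, v in groups.items()
        if len(v) >= MIN_GROUP_SIZE
    }

print(aggregate(records, "sector"))  # the 'education' group is dropped as too small
```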

Another key commitment seeks to strengthen multilingual and context-sensitive AI evaluation. By developing datasets and benchmarks in underrepresented languages and cultural settings, the initiative aims to improve system performance for diverse populations and expand access to high-quality AI tools globally.
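
As a rough illustration of multilingual evaluation, the sketch below scores a model separately per language so that gaps in underrepresented languages show up directly; the benchmark items and the model_answer stub are invented for the example.

```python
from collections import defaultdict

# Invented benchmark items: the same factual question posed in several languages.
benchmark = [
    {"lang": "en", "question": "What is the boiling point of water at sea level?", "answer": "100"},
    {"lang": "hi", "question": "समुद्र तल पर पानी का क्वथनांक क्या है?", "answer": "100"},
    {"lang": "sw", "question": "Maji huchemka kwa nyuzi ngapi usawa wa bahari?", "answer": "100"},
]

def model_answer(item):
    """Stand-in for a real model call; replace with an API client."""
    return "100" if item["lang"] == "en" else "90"  # pretend the model is weaker off-English

scores = defaultdict(lambda: {"correct": 0, "total": 0})
for item in benchmark:
    s = scores[item["lang"]]
    s["total"] += 1
    s["correct"] += int(model_answer(item) == item["answer"])

for lang, s in scores.items():
    print(f"{lang}: {s['correct']}/{s['total']} correct")
```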

Adoption of agentic AI slowed by data readiness and governance gaps

Agentic AI is emerging as a new stage of enterprise automation, enabling systems to reason, plan, and act across workflows. Adoption, however, remains uneven, with far fewer organisations scaling deployments beyond pilots.

Unlike traditional analytics or generative tools, agentic systems make decisions rather than simply producing insights. Without sufficient context, they struggle to align actions with real business conditions, revealing a persistent context gap.

Recent survey data highlights this disconnect. Although executives express confidence in AI ambitions, significant shares cite data readiness, infrastructure, and skills as barriers. Many identify AI as central to strategy, yet only a limited proportion tie deployments to measurable business outcomes.

Effective agentic AI depends on layered data foundations. Public data provides baseline capability, organisational data enables operational competence, and third-party context supports differentiation. Weak governance or integration can undermine autonomy at scale.
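
One way to picture these layers is as a small context stack that an agent consults before acting, as in the hedged sketch below; the layer names, the approval rule, and the resolution order are illustrative assumptions, not a reference architecture.

```python
from dataclasses import dataclass, field

@dataclass
class ContextStack:
    """Layered context an agent consults before acting."""
    public: dict = field(default_factory=dict)           # baseline world knowledge
    organisational: dict = field(default_factory=dict)   # internal policies, systems of record
    third_party: dict = field(default_factory=dict)      # enrichment data that differentiates

    def resolve(self, key, default=None):
        # More specific layers override more general ones.
        for layer in (self.third_party, self.organisational, self.public):
            if key in layer:
                return layer[key]
        return default

def decide(ctx: ContextStack, order_value: float) -> str:
    limit = ctx.resolve("auto_approval_limit", default=0)
    return "approve" if order_value <= limit else "escalate to human"

ctx = ContextStack(
    public={"currency": "EUR"},
    organisational={"auto_approval_limit": 500},
    third_party={"customer_risk": "low"},
)
print(decide(ctx, 250))   # approve
print(decide(ctx, 5000))  # escalate to human
```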

Enterprises that align data governance, enrichment, and AI oversight are more likely to scale beyond pilots. Progress depends less on model sophistication than on trusted data foundations that support transparency and measurable outcomes.

MIT study finds AI chatbots underperform for vulnerable users

Research from the MIT Centre for Constructive Communication (CCC) finds that leading AI chatbots often provide lower-quality responses to users with lower English proficiency, less education, or who are outside the US.

The models tested, including GPT-4, Claude 3 Opus, and Llama 3, sometimes refused to answer or responded in a condescending tone. Using the TruthfulQA and SciQ datasets, researchers added user biographies to the prompts to simulate differences in education, language, and country.

Accuracy fell sharply for non-native English speakers and less-educated users, with the largest drop among users who fell into both groups; users from countries such as Iran also received lower-quality responses.

Refusal behaviour was notable. Claude 3 Opus declined 11% of questions for less-educated, non-native English speakers versus 3.6% for control users. Manual review showed 43.7% of refusals contained condescending language.
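
The setup described in the study can be approximated with a short harness that prepends a synthetic user biography to each benchmark question and tallies accuracy and refusals per persona. The biographies, the ask_model stub, and the refusal heuristic below are assumptions for illustration, not the CCC team’s actual code.

```python
# Hedged sketch: prepend synthetic user biographies to benchmark questions
# and compare accuracy/refusal rates across personas.
personas = {
    "control": "",
    "non_native_less_educated": (
        "I left school early and I am not a native English speaker.\n"
    ),
}

questions = [  # stand-ins for TruthfulQA/SciQ items
    {"q": "Does lightning ever strike the same place twice?", "a": "yes"},
    {"q": "Is the Great Wall of China visible from the Moon with the naked eye?", "a": "no"},
]

def ask_model(prompt: str) -> str:
    """Placeholder for a real chat-model call; swap in an API client here."""
    return "yes" if "lightning" in prompt else "I'm sorry, I can't help with that."

def is_refusal(text: str) -> bool:
    return any(p in text.lower() for p in ("i can't", "i cannot", "i'm sorry"))

for name, bio in personas.items():
    correct = refused = 0
    for item in questions:
        reply = ask_model(bio + item["q"])
        if is_refusal(reply):
            refused += 1
        elif item["a"] in reply.lower():
            correct += 1
    print(f"{name}: {correct}/{len(questions)} correct, {refused} refusals")
```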

Some users were refused answers on specific topics even though the same models answered those questions correctly for other users.

The study echoes human sociocognitive biases, in which non-native speakers are often perceived as less competent. Researchers warn AI personalisation could worsen inequities, providing marginalised users with subpar or misleading information when they need it most.

Gemini 3.1 Pro brings advanced logic to developers and consumers

Google has launched Gemini 3.1 Pro, an upgraded AI model for solving complex science, research, and engineering challenges. Following the Gemini 3 Deep Think release, the update adds enhanced core reasoning for consumer, developer, and enterprise applications.

Developers can access 3.1 Pro in preview via the Gemini API, Google AI Studio, Gemini CLI, Antigravity, and Android Studio, while enterprise users can use it through Vertex AI and Gemini Enterprise.
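
For developers, a call through the Gemini API would look roughly like the Python sketch below, which uses the google-genai client; the model identifier shown is a guess based on the article’s naming and would need to match whatever preview name Google publishes.

```python
# pip install google-genai
from google import genai

# Assumes GEMINI_API_KEY is set in the environment.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-3.1-pro-preview",  # hypothetical preview name, inferred from the article
    contents="Outline a step-by-step plan to debug a flaky integration test.",
)
print(response.text)
```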

Consumers can now try the upgrade through the Gemini app and NotebookLM, with higher limits for Google AI Pro and Ultra plan users.

Benchmarks show significant improvements in logic and problem-solving. On the ARC-AGI-2 benchmark, 3.1 Pro scored 77.1%, more than doubling the reasoning performance of its predecessor.

The upgrade is intended to make AI reasoning more practical, offering tools to visualise complex topics, synthesise data, and enhance creative projects.

Feedback from Gemini 3 Pro users has driven the rapid development of 3.1 Pro. The preview release allows Google to validate improvements and continue refining advanced agentic workflows before the model becomes widely available.

Microsoft outlines challenges in verifying AI-generated media

In an era of deepfakes and AI-manipulated content, determining what is real online has become increasingly complex. Microsoft’s report Media Integrity and Authentication reviews current verification methods, their limits, and ways to boost trust in digital media.

The study emphasises that no single solution can prevent digital deception. Techniques such as provenance tracking, watermarking, and digital fingerprinting can provide useful context about a media file’s origin, creation tools, and whether it has been altered.
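
As a simplified illustration of fingerprinting (not the C2PA provenance format itself), the sketch below stores a cryptographic hash of a media file at publication time and later checks whether the file still matches it; any alteration changes the fingerprint.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """SHA-256 digest of a media file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# At publication time, the outlet stores the fingerprint alongside the file.
original = Path("photo.jpg")
original.write_bytes(b"\xff\xd8\xff original pixels...")
published_fp = fingerprint("photo.jpg")

# Later, a verifier recomputes the fingerprint and compares.
original.write_bytes(b"\xff\xd8\xff subtly edited pixels...")
print("unaltered" if fingerprint("photo.jpg") == published_fp else "file has been modified")
```

Exact-match hashing breaks on benign re-encoding, which is one reason such fingerprints are typically combined with provenance metadata and watermarking rather than relied on alone.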

Microsoft has pioneered these technologies, cofounding the Coalition for Content Provenance and Authenticity (C2PA) to standardise media authentication globally.

The report also addresses the risks of sociotechnical attacks, where even subtle edits can manipulate authentication results to mislead the public.

Researchers explored how provenance information can remain durable and reliable across different environments, from high-security systems to offline devices, highlighting the challenge of maintaining consistent verification.

As AI-generated or edited content becomes commonplace, secure media provenance is increasingly important for news outlets, public figures, governments, and businesses.

Reliable provenance helps audiences spot manipulated content, with ongoing research guiding clearer, practical verification displays for the public.

Reload launches Epic to bring shared memory and structure to AI agents

Founders of the Reload platform say AI is moving from simple automation toward something closer to teamwork.

Newton Asare and Kiran Das noticed that AI agents were completing tasks normally handled by employees, which pushed them to design a system that treats digital workers as part of a company’s structure instead of disposable tools.

Their platform, Reload, offers a way for organisations to manage these agents across departments, assign responsibilities, and monitor performance. The firm has secured $2.275 million in new funding in a round led by Anthemis, with several other investors participating.

The shift toward agent-driven development exposed a recurring limitation. Most agents retain only short-term memory, which means they often lose context about a product or forget why a task matters.

Reload’s answer is Epic, a new product built on its platform that acts as an architect alongside coding agents. Epic defines requirements and constraints at the start of a project, then continuously preserves the shared understanding that agents need as software evolves.

Epic integrates with popular AI-assisted code editors such as Cursor and Windsurf, allowing developers to keep a consistent system memory without changing their workflow.

The tool generates key project artefacts from the outset, including data models and technical decisions, then carries them forward even when teams switch agents. It creates a single source of truth so that engineers and digital workers develop against the same structure.
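
A rough sketch of what a ‘single source of truth’ for coding agents could look like is shown below; the ProjectMemory structure and its fields are hypothetical and are not Reload’s actual Epic schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProjectMemory:
    """Shared, persistent context that outlives any single agent session."""
    requirements: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    decisions: list = field(default_factory=list)  # technical decisions plus rationale

    def record_decision(self, summary: str, rationale: str) -> None:
        self.decisions.append({
            "summary": summary,
            "rationale": rationale,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def briefing(self) -> str:
        """Context block handed to every agent, regardless of which tool runs it."""
        return "\n".join(
            ["Requirements:"] + [f"- {r}" for r in self.requirements]
            + ["Constraints:"] + [f"- {c}" for c in self.constraints]
            + ["Decisions so far:"] + [f"- {d['summary']}" for d in self.decisions]
        )

memory = ProjectMemory(
    requirements=["Invoices must be exportable as CSV"],
    constraints=["No customer data leaves the EU region"],
)
memory.record_decision("Use PostgreSQL", "Team already operates it in production")
print(memory.briefing())
```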

Competing systems such as LangChain and CrewAI also offer support for managing agents, but Reload argues that Epic’s ability to maintain project-level context sets it apart.

Asare and Das, who already built and sold a previous company together, plan to use the fresh capital to grow their team and expand the infrastructure needed for a future in which human workers manage AI employees instead of the other way around.

Greece positions itself as a global AI bridge

Greek Prime Minister Kyriakos Mitsotakis took part in the India AI Impact Summit in New Delhi as part of a two-day visit that highlighted the country’s ambition to deepen its presence in global technology governance.

The gathering focuses on creating a coherent international approach to AI under the theme ‘People-Planet-Progress’, with an emphasis on practical outcomes rather than abstract commitments.

Greece presents itself as a link between Europe and the Global South, seeking a larger role in debates over AI policy and geoeconomic strategy.

Mitsotakis is joined by Minister of Digital Governance Dimitris Papastergiou, underscoring Athens’ intention to strengthen partnerships that support technological development.

During the visit, Mitsotakis attended an official dinner hosted by Narendra Modi.

On Thursday, he will address the summit at Bharat Mandapam before holding a scheduled meeting with his Indian counterpart, reinforcing efforts to expand cooperation between Greece and India in emerging technologies.

UNESCO expands multilingual learning through LearnBig

The LearnBig digital application is expanding access to learning, with UNESCO supporting educational materials in national and local languages instead of relying solely on dominant teaching languages.

The project aligns with International Mother Language Day and reflects long-standing research showing that children learn more effectively when taught in languages they understand from an early age.

The programme supports communities along the Thailand–Myanmar border, where children gain literacy and numeracy skills in both Thai and their mother tongues.

Young learners can make more substantial academic progress with this approach, which allows them to remain connected to their cultural identity rather than being pushed into unfamiliar linguistic environments. More than 2,000 digital books are available in languages such as Karen, Myanmar, and Pattani Malay.

LearnBig was developed within the ‘Mobile Literacy for Out-of-School Children’ programme, backed by partners including Microsoft, True Corporation, POSCO 1% Foundation and the Ministry of Education of Thailand.

The UNESCO initiative has reached more than 526,000 learners: young people in Yala use tablets to access digital books, while learners in Mae Hong Son study content presented in their local languages.

The project illustrates the potential of digital innovation to bridge linguistic, social, and geographic divides.

By supporting children who often fall outside formal education systems, LearnBig demonstrates how technology can help build a more inclusive and equitable learning environment rather than reinforcing existing barriers.

Microsoft and OpenAI fund UK AI alignment project

OpenAI and Microsoft have joined the UK’s AI Security Institute, pledging funding to its Alignment Project, an international effort focused on ensuring advanced AI systems are safe, secure, and act as intended.

Their contributions bring total funding to over £27 million, supporting some 60 research projects across eight countries.

AI alignment aims to steer AI systems to behave predictably and prevent unintended or harmful outcomes. The project provides grants, computing resources, and mentorship, boosting public trust in AI while supporting productivity, medical progress, and new job opportunities.

UK Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan highlighted the importance of safe AI adoption. Lammy said strong safety foundations help the UK harness AI’s benefits, while Narayan stressed that public confidence is key to unlocking its full potential.

The Alignment Project operates with a global coalition including the Canadian Institute for Advanced Research, Amazon Web Services, Anthropic, and other partners.

By combining independent research teams, grant funding, and access to infrastructure, the initiative aims to keep increasingly capable AI systems reliable and controllable as they are deployed worldwide.

AI model improves long-range space weather forecasts

Scientists from Southwest Research Institute and the National Center for Atmospheric Research, supported by the National Science Foundation, have created an experimental tool that could extend space weather forecasts from hours to several weeks.

Longer lead times would help operators protect satellites, navigation systems, and power infrastructure from solar disturbances. The research focuses on predicting where flare-producing solar active regions will form.

By analysing magnetic data captured by the Solar Dynamics Observatory, scientists reconstructed hidden magnetic conditions beneath the Sun’s surface, showing that these regions follow structured magnetic bands rather than appearing randomly.

PINNBARDS, a physics-informed AI model, connects surface observations with deep tachocline dynamics that drive solar magnetic evolution. Better modelling could provide earlier warnings of solar flares and coronal mass ejections, helping protect communications and astronaut safety.
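
PINNBARDS itself is not described in code here, but the general physics-informed idea can be shown on a toy problem: a model is fitted so that it both matches sparse observations and satisfies a governing equation, in this case the simple ODE dy/dt = -y rather than any real solar dynamics. The NumPy sketch below is purely illustrative.

```python
import numpy as np

# Toy "model": y(t; a) = exp(a * t), with parameter a to be fitted.
def model(t, a):
    return np.exp(a * t)

# Sparse observations of the true system y(t) = exp(-t), i.e. dy/dt = -y.
t_obs = np.array([0.0, 0.5, 1.0])
y_obs = np.exp(-t_obs)

# Collocation points where the physics residual dy/dt + y = 0 is enforced.
t_col = np.linspace(0.0, 2.0, 21)

def loss(a, lam=1.0, eps=1e-4):
    data = np.mean((model(t_obs, a) - y_obs) ** 2)                       # fit the observations
    dydt = (model(t_col + eps, a) - model(t_col - eps, a)) / (2 * eps)   # finite-difference derivative
    physics = np.mean((dydt + model(t_col, a)) ** 2)                     # penalise violating the ODE
    return data + lam * physics

# Crude parameter sweep instead of gradient descent, to keep the sketch tiny.
candidates = np.linspace(-2.0, 0.5, 251)
best = min(candidates, key=loss)
print(f"recovered parameter a ≈ {best:.2f} (true value -1)")
```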

Funding from NASA and Stanford University supported the work. Researchers describe it as a foundation for next-generation forecasting systems capable of anticipating extreme solar activity with greater accuracy.
