India hosts AI Impact Summit as UN chief urges shared AI rules

UN Secretary-General António Guterres told the India AI Impact Summit 2026 that the future of AI must not be determined by a small group of nations or controlled by powerful private actors. He praised India’s leadership in hosting what he described as the first AI summit in the Global South.

Guterres said AI is transforming economies, societies, and governance at unprecedented speed. Inclusive and globally representative governance frameworks are essential to ensure equitable access and responsible deployment, he added.

‘The future of AI cannot be decided by a handful of countries or left to the whims of a few billionaires,’ he said, urging multilateral cooperation. Real impact, he added, means technology that improves lives and protects the planet.

United Nations officials say AI could help accelerate progress on nearly 80 per cent of the Sustainable Development Goals. Potential applications include reducing inequalities, strengthening public services, and enhancing climate action.

The UN has committed to a proactive, human rights-based approach to AI adoption within its own system. Agencies are deploying AI tools to address bias in data models, improve analytics, support innovation, and safeguard ethical standards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic seeks deeper AI cooperation with India

The chief executive of Anthropic, Dario Amodei, has said India can play a central role in guiding global responses to the security and economic risks linked to AI.

Speaking at the India AI Impact Summit in New Delhi, he argued that the world’s largest democracy is well placed to become a partner and leader in shaping the responsible development of advanced systems.

Amodei explained that Anthropic hopes to work with India on the testing and evaluation of models for safety and security. He stressed growing concern over autonomous behaviours that may emerge in advanced systems and noted the possibility of misuse by individuals or governments.

He pointed to the work of international and national AI safety institutes as a foundation for joint efforts. The economic effect of AI will be significant, he added, and India and the wider Global South could benefit if policymakers prepare early.

Through its Economic Futures programme and Economic Index, Anthropic studies how AI reshapes jobs and labour markets.

He said the company intends to expand information sharing with Indian authorities and bring economists, labour groups, and officials into regular discussions to guide evidence-based policy instead of relying on assumptions.

Amodei said AI is set to increase economic output and that India is positioned to influence emerging global frameworks. He signalled a strong interest in long-term cooperation that supports safety, security, and sustainable growth.

EU turns to AI tools to strengthen defences against disinformation

Institutions, researchers, and media organisations in the EU are intensifying efforts to use AI to counter disinformation, even as concerns grow about the wider impact on media freedom and public trust.

Confidence in journalism has fallen sharply across the EU, a trend made more severe by the rapid deployment of AI systems that reshape how information circulates online.

Brussels is attempting to respond with a mix of regulation and strategic investment. The EU’s AI Act is entering its implementation phase, supported by the AI Continent Action Plan and the Apply AI Strategy, both introduced in 2025 to improve competitiveness while protecting rights.

Yet manipulation campaigns continue to spread false narratives across platforms in multiple languages, placing pressure on journalists, fact-checkers and regulators to act with greater speed and precision.

Within such an environment, AI4TRUST has emerged as a prominent Horizon Europe initiative. The consortium is developing an integrated platform that detects disinformation signals, verifies content, and maps information flows for professionals who need real-time insight.

Partners stress the need for tools that strengthen human judgment instead of replacing it, particularly as synthetic media accelerates and shared realities become more fragile.

Experts speaking in Brussels warned that traditional fact-checking cannot absorb the scale of modern manipulation. They highlighted the geopolitical risks created by automated messaging and deepfakes, and argued for transparent, accountable systems tailored to user needs.

European officials emphasised that multiple tools will be required, supported by collaboration across institutions and sustained regulatory frameworks that defend democratic resilience.

Digital procurement strengthens compliance and prepares governments for AI oversight

AI is reshaping the expectations placed on organisations, yet many local governments in the US continue to rely on procurement systems designed for a paper-first era.

Sealed envelopes, manual logging and physical storage remain standard practice, even though these steps slow essential services and increase operational pressure on staff and vendors.

The persistence of paper is linked to long-standing compliance requirements, which are vital for public accountability. Over time, however, processes intended to safeguard fairness have created significant inefficiencies.

Smaller businesses frequently struggle with printing, delivery, and rigid submission windows, and the administrative burden on procurement teams expands as records accumulate.

The author’s experience leading a modernisation effort in Somerville, Massachusetts, showed how deeply embedded such practices had become.

Gradual adoption of digital submission reduced logistical barriers while strengthening compliance. Electronic bids could be time-stamped, access monitored, and records centrally managed, allowing staff to focus on evaluation rather than handling binders and storage boxes.

Vendor participation increased once geographical and physical constraints were removed. The shift also improved resilience, as municipalities that had already embraced digital procurement were better equipped to maintain continuity during pandemic disruptions.

Electronic records now provide a basis for responsible use of AI. Digital documents can be analysed for anomalies, metadata inconsistencies, or signs of manipulation that are difficult to detect in paper files.
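The kind of check described above can be sketched in a few lines of Python. The record format here (a vendor name, a SHA-256 digest of the bid document, and a submission timestamp) is hypothetical, invented for illustration rather than drawn from any actual procurement system:

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical submission deadline for this illustration.
DEADLINE = datetime(2026, 3, 1, 17, 0, tzinfo=timezone.utc)

def fingerprint(document: bytes) -> str:
    """Tamper-evident SHA-256 fingerprint of a submitted bid document."""
    return hashlib.sha256(document).hexdigest()

def flag_anomalies(bids: list[dict]) -> list[tuple[str, str]]:
    """Return red flags a reviewer might inspect: late submissions and
    identical documents arriving from different vendors."""
    flags = []
    first_seen = {}  # document digest -> vendor who first submitted it
    for bid in bids:
        if bid["submitted_at"] > DEADLINE:
            flags.append((bid["vendor"], "submitted after deadline"))
        vendor = first_seen.setdefault(bid["sha256"], bid["vendor"])
        if vendor != bid["vendor"]:
            flags.append((bid["vendor"], f"document identical to {vendor}'s"))
    return flags
```

None of these checks are possible with sealed paper envelopes; they fall out almost for free once bids are electronic, hashed, and time-stamped.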

Rather than replacing human judgment, such tools support stronger oversight and more transparent public administration. Modernising procurement aligns government operations with present-day realities and prepares them for future accountability and technological change.

PostFinance expands digital asset range to 22 cryptocurrencies

Swiss lender PostFinance has broadened its digital-asset offering to 22 cryptocurrencies, adding Algorand, Arbitrum, NEAR Protocol, Stellar, USDC, and Sui to its platform. The expansion makes its platform one of the most comprehensive retail crypto offerings among Swiss banks.

Direct cryptocurrency access was introduced in early 2024, making the institution the first systemically important bank in Switzerland to provide such services. Further additions followed mid-year, reflecting growing client demand for regulated exposure to digital assets.

More than 36,000 custody accounts have been opened since launch, generating over 565,000 trades. According to Alexander Thoma, the bank continues to broaden its selection as customers increasingly prefer to manage crypto through their primary banking provider.

Trading is available via e-finance and the PostFinance app, with a minimum entry level of $50 for both savings plans and individual orders, a move aimed at lowering barriers and widening retail participation.

Bitcoin divergence signals rising credit stress

A fresh analysis from Arthur Hayes argues that Bitcoin is signalling mounting stress in the global fiat system as it diverges from the Nasdaq 100. Hayes says Bitcoin is the most sensitive market gauge of credit supply, making its decoupling a possible early warning of systemic stress.

A significant drop in employment, he argues, could translate into large mortgage and consumer-credit losses for US banks.

Estimates suggest a 20% drop in US knowledge-worker employment could trigger about $557 billion in credit losses, hitting bank capital and regional lenders first. Hayes expects such instability to force the Federal Reserve to add liquidity, a move he says could lift Bitcoin to new highs.

Beyond the flagship cryptocurrency, Hayes said his firm Maelstrom may allocate stablecoin reserves to Zcash and Hyperliquid once monetary policy shifts, although timing and price targets remain unspecified.

India unveils MANAV Vision as new global pathway for ethical AI

Indian Prime Minister Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-centred direction for AI.

He described the framework as rooted in moral guidance, transparent oversight, national control of data, inclusive access and lawful verification. He argued that the approach is intended to guide global AI governance for the benefit of humanity.

The Prime Minister of India warned that rapid technological change requires stronger safeguards and drew attention to the need to protect children. He also said societies are entering a period where people and intelligent systems co-create and evolve together instead of functioning in separate spheres.

He pointed to India’s confidence in its talent, and to its policy clarity, as evidence that the country’s AI future is taking shape.

Modi announced that three domestic companies introduced new AI models and applications during the summit, saying the launches reflect the energy and capability of India’s young innovators.

He invited technology leaders from around the world to collaborate by designing and developing in India instead of limiting innovation to established hubs elsewhere.

The summit brought together policymakers, academics, technologists and civil society representatives to encourage cooperation on the societal impact of artificial intelligence.

As the first global AI summit held in the Global South, the gathering aligned with India’s national commitment to welfare for all and the wider aspiration to advance AI for humanity.

AI climate benefits overstated, says new civil society report

Environmental groups, including Beyond Fossil Fuels and Stand.earth, have published a report challenging claims that AI will meaningfully address climate change. The analysis argues that rapid data centre expansion is being justified by overstated promises of ‘AI for climate’ benefits.

Researchers found that many cited emissions reductions relate to older forms of machine learning rather than energy-intensive generative AI systems. At the same time, rising electricity demand from large-scale AI deployment is driving increased fossil fuel use.

The report also questions evidence presented by corporations and institutions such as the International Energy Agency, stating that projected climate gains are often weak or exaggerated. Companies are reported to be drifting away from climate targets even when renewable energy offsets are included.

Campaigners say framing AI as a climate solution risks distracting from corporate decisions that increase pollution and digital infrastructure growth. They call for stronger accountability and clearer scrutiny of environmental claims linked to emerging technologies.

Rwanda and Anthropic sign AI partnership

Anthropic and the Government of Rwanda have signed a three-year Memorandum of Understanding to expand AI deployment across health, education and public sector services in Rwanda. The agreement marks Anthropic’s first multi-sector government partnership in Africa.

In Rwanda’s health system, Anthropic will support national priorities, including efforts to eliminate cervical cancer and reduce malaria and maternal mortality. Rwanda’s Ministry of Health will work with Anthropic to integrate AI tools aligned with national objectives.

Public sector developer teams in Rwanda will gain access to Claude and Claude Code, alongside training, API credits and technical support. The partnership also formalises an education programme launched in 2025 that provided 2,000 Claude Pro licences to educators in Rwanda.

Officials in Rwanda have said the collaboration focuses on capacity development, responsible deployment and local autonomy. Anthropic stated that investment in skills and infrastructure in Rwanda aims to enable safe and independent use of AI by teachers, health workers and public servants.

The reality behind AI hype

As governments and tech leaders gather at global forums such as the AI Impact Summit in New Delhi, one assumption dominates discussion: the more computing power poured into AI, the better it will become. In his blog post ‘“The elephant in the AI room”: Does more computing power really bring more useful AI?’, Jovan Kurbalija questions whether that belief is as solid as it seems.

For years, the AI race has been driven by the idea that ever-larger models and vast GPU farms are the key to progress. That logic has justified enormous energy consumption and multi-billion-dollar investments in data centres. But Kurbalija argues that bigger is not always better, especially when everyday tasks often require far less computational firepower than frontier models provide.

He points out that most people rely on a limited vocabulary and a small set of reasoning tools in their daily work. Smaller, specialised AI systems can already draft emails, summarise meetings, or classify documents effectively. The push for trillion-parameter models, he suggests, may reflect ambition more than necessity.

There are also technical limits to consider. Adding more computing power can lead to diminishing returns, and some prominent researchers doubt that simply scaling up large language models will lead to human-level intelligence. More hardware, Kurbalija notes, does not automatically solve deeper conceptual challenges in AI design.
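The diminishing-returns point can be made concrete with a toy power law. Scaling-law studies report that model loss falls roughly as a power of training compute with a small exponent; the exponent below is a round illustrative number, not a measured value:

```python
ALPHA = 0.05  # illustrative exponent; real measured values vary by setup

def loss_reduction(compute_multiplier: float) -> float:
    """Fractional loss reduction if loss scales as C**(-ALPHA)
    and compute C is scaled by the given factor."""
    return 1 - compute_multiplier ** (-ALPHA)

print(f"{loss_reduction(2):.1%}")     # doubling compute: 3.4% lower loss
print(f"{loss_reduction(1000):.1%}")  # 1000x compute: 29.2% lower loss
```

Under a curve like this, each doubling of compute buys only a few percent of improvement, which is the arithmetic behind the diminishing-returns concern.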

The economic picture is equally complex. Training cutting-edge proprietary models can cost hundreds of millions of dollars, while newer open-source systems have been developed at a fraction of that price. If cheaper models can deliver similar performance, questions arise about the sustainability of current spending and whether investors are backing efficiency or hype.

Beyond cost and performance lies a broader ethical issue. Even if massive computing power could eventually produce superintelligent systems, the key question is whether society truly needs them. Kurbalija warns that technological possibilities should not be confused with social desirability, and that innovation without a clear purpose can create new risks.

Rather than escalating an arms race for ever-larger models, the blog calls for a shift toward needs-driven design. Right-sized tools, viable business models, and ethical clarity about AI’s role in society may prove more valuable than raw computing muscle.

In challenging the prevailing narrative, Kurbalija urges policymakers and industry leaders to rethink whether the future of AI depends on scale alone or on smarter priorities.
