New Cisco study shows most companies aren’t AI-ready

Most firms are still struggling to turn AI pilots into measurable value, Cisco’s 2025 AI Readiness Index finds. Only 13% are ‘AI-ready’, having scaled deployments with results. The rest face gaps in data, security and governance.

Southeast Asia outperforms the global average at 16% ready. Indonesia reaches 23% and Thailand 21%, ahead of Europe at 11% and the Americas at 14%. Cisco says lower tech debt helps some emerging markets leapfrog more established ones.

Infrastructure debt is mounting: limited GPU capacity, fragmented data and constrained networks slow progress. Just 34% say their tech stack can adapt and scale for evolving compute needs. Most remain stuck in pilots.

Adoption plans are ambitious: 83% intend to deploy AI agents, with almost 40% expecting them to support staff within a year. Yet only one in three have change-management programmes, risking stalled workplace integration.

The leaders pair strong digital foundations with clear governance and cybersecurity embedded by design. Cisco urges broader collaboration among industry, government and tech firms, arguing that trust, regulation and investment will determine who monetises AI first.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Growth of AI increases water and energy demands

AI data centres in Scotland use enough tap water to fill over 27 million half-litre bottles annually, BBC News reports. The number of centres has quadrupled since 2021, with AI growth increasing energy and water use, though it remains a small fraction of the national supply.

Scottish Water urges developers to adopt closed-loop cooling or treated wastewater instead of relying only on mains water. Open-loop systems, still used in many centres, consume vast amounts of water, but closed-loop alternatives can reduce demand, though they may increase energy usage.

Experts warn that AI data centres have a significant carbon footprint as well. Analysis from the University of Glasgow estimates the energy use of Scottish centres could equate to each person in the country driving an extra 145 kilometres per year.

Academic voices have called for greater transparency from tech companies and suggested carbon targets and potential penalties to ensure sustainable growth.

The Scottish government and industry stakeholders are promoting ‘green’ AI development, citing Scotland’s cool climate, renewable energy resources, and local expertise. Developers are urged to balance AI expansion with Scotland’s net zero and resource sustainability goals.

Scaling a cell ‘language’ model yields new immunotherapy leads

Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and Google CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.

The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that boosts signals only in immune-context-positive settings.

Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.

Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.

Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.

Starcloud launches data centres into space

A new era in data technology is emerging as Starcloud, a member of NVIDIA’s Inception startup programme, prepares to send its first AI-driven satellite into orbit next month.

The mission marks the debut of NVIDIA’s H100 GPU in space and represents a decisive step toward the creation of large-scale orbital data centres designed to meet the planet’s soaring demand for AI.

By operating data centres in space, Starcloud aims to cut energy costs tenfold and significantly reduce carbon emissions. The vacuum of space will serve as a natural cooling system, while constant exposure to solar energy will eliminate the need for batteries or backup power.

According to CEO Philip Johnston, the only environmental cost will come from the launch itself, resulting in substantial carbon savings over the data centre’s lifetime.

Starcloud’s technology could transform how Earth observation data is processed. Instead of transmitting raw information back to the ground, satellites will analyse it in real time, improving responses to wildfires, weather changes, and agricultural needs.

The company plans to run Google’s open AI model Gemma on its satellite and eventually integrate NVIDIA’s next-generation Blackwell GPUs, boosting computing power even further.

Johnston predicts that within a decade, most new data centres will be built in orbit. If achieved, Starcloud’s innovation could mark the beginning of a sustainable digital revolution powered by the stars instead of the grid.

Report warns of AI-driven divide in higher education

A new report from the Higher Education Policy Institute warns of an urgent need to improve AI literacy among staff and students in the UK. The study argues that without coordinated investment in training and policy, higher education risks deepening digital divides and losing relevance in an AI-driven world.

Contributors to the report say universities must move beyond merely acknowledging AI’s presence and instead adopt structured strategies for skills development. Kate Borthwick adds that both staff and students require ongoing education to manage how AI reshapes teaching, assessment, and research.

The publication highlights growing disparities in access to and use of generative AI based on gender, wealth, and academic discipline. In a chapter written by ChatGPT, the report suggests universities create AI advisory teams within research offices and embed AI training into staff development programmes.

Elsewhere, Ant Bagshaw from the Australian Public Policy Institute warns that generative AI could lead to cuts in professional services staff as universities seek financial savings. He acknowledges the transition will be painful but argues that it could drive a more efficient and focused higher education sector.

Vietnam unveils draft AI law inspired by EU model

Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.

The draft law introduces a four-tier system for classifying risks, from banned applications such as manipulative facial recognition to low-risk uses subject to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.

Under the law, companies deploying powerful general-purpose AI models must meet strict transparency, safety and intellectual property standards. The law would create a National AI Commission and a National AI Development Fund to support local research, sandboxes and tax incentives for emerging businesses.

Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. The phased rollout begins in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.

UK government uses AI to boost efficiency and save taxpayer money

The UK government has developed an AI tool, named ‘Consult’, which analysed over 50,000 responses to the Independent Water Commission review in just two hours. The system matched human accuracy and could save 75,000 days of work annually, worth £20 million in staffing costs.

Consult sorted responses into key themes at a cost of just £240, with experts needing only 22 hours to verify the results. The AI agreed with human experts 83% of the time, compared with 55% agreement between the human experts themselves, freeing officials to focus on policy rather than administrative work.

The technology has also been used to analyse consultations for the Scottish government on non-surgical cosmetics and the Digital Inclusion Action Plan. Part of the Humphrey suite, the tool helps government act faster and deliver better value for taxpayers.

Digital Government Minister Ian Murray highlighted the potential of AI to deliver efficient services and save costs. Engineers are using insights from Consult and Redbox to develop new tools, including GOV.UK Chat, a generative AI chatbot soon to be trialled in the GOV.UK App.

Quebec man fined for using AI-generated evidence in court

A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible’, warning that it could undermine the integrity of the judicial system.

The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.

Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.

The judge emphasised that AI-generated information must be carefully controlled by humans, and the filing of legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.

While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Meta expands AI infrastructure with sustainable data centre in El Paso

US tech giant Meta has begun construction of a new AI-optimised data centre in El Paso, Texas, designed to scale up to 1GW and power the company’s expanding AI ambitions.

The 29th in Meta’s global network, the site will support the next generation of AI models, underpinning technologies such as smart glasses, AI assistants, and real-time translation tools.

The project represents a major investment in both technology and the local community, contributing over $1.5 billion and creating about 1,800 construction jobs and 100 operational roles in its first phase.

Meta’s Community Accelerator programme will also help local businesses build digital and AI skills, while Community Action Grants are set to launch in El Paso next year.

Environmental sustainability remains central to the development. The data centre will operate on 100% renewable energy, with Meta covering the costs of new grid connections through El Paso Electric.

Using a closed-loop cooling system, the facility will consume no water for most of the year, aligning with Meta’s target to be water positive by 2030. The company plans to restore twice the amount of water used to local watersheds through partnerships with DigDeep and the Texas Water Action Collaborative.

The El Paso project, Meta’s third in Texas, underscores its long-term commitment to sustainable AI infrastructure. By combining efficiency, clean energy, and community investment, Meta aims to build the foundations for a responsible and scalable AI-driven future.
