The UAE Ministry of Investment and Microsoft signed a Memorandum of Understanding at GITEX Global 2025 to apply AI to investment analytics, financial forecasting, and retail optimisation. The deal aims to strengthen data governance across the investment ecosystem.
Under the MoU, Microsoft will support upskilling through its AI National Skilling Initiative, targeting 100,000 government employees. Training will focus on practical adoption, responsible use, and measurable outcomes, in line with the UAE’s National AI Strategy 2031.
Both parties will promote best practices in data management using Azure services such as Data Catalog and Purview. Workshops and knowledge-sharing sessions with local experts will standardise governance. Strong controls are positioned as the foundation for trustworthy AI at scale.
The agreement was signed by His Excellency Mohammad Alhawi and Amr Kamel. Officials say the collaboration will embed AI agents into workflows while maintaining compliance. Investment teams are expected to gain real-time insights and automation that shorten the time to action.
The partnership supports the ambition to make the UAE a leader in AI-enabled investment. It also signals deeper public–private collaboration on sovereign capabilities. With skills, standards, and use cases in place, the ministry aims to attract capital and accelerate diversification.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Salesforce and AWS outlined a tighter partnership on agentic AI, citing rapid growth in enterprise agents and usage. They set four pillars for the ‘Agentic Enterprise’: unified data, interoperable agents, modernised contact centres and streamlined procurement via AWS Marketplace.
Data 360 ‘Zero Copy’ accesses Amazon Redshift without duplication, while Data 360 Clean Rooms integrate with AWS Clean Rooms for privacy-preserving collaboration. 1-800Accountant reports that agents resolve most routine inquiries, freeing human experts to focus on higher-value work.
Agentforce supports open standards such as Model Context Protocol and Agent2Agent to coordinate multi-vendor agents. Pilots link Bedrock-based agents and Slack integrations that surface Quick Suite tools, with Anthropic and Amazon Nova models available inside Salesforce’s trust boundary.
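To make the interoperability claim concrete: the Model Context Protocol exchanges JSON-RPC 2.0 messages in which a client asks a server to invoke one of its advertised tools. The Python sketch below shows the shape of such a call; the tool name and arguments are hypothetical and are not drawn from any Salesforce or AWS product API.

```python
import json

# Minimal sketch of an MCP-style tool invocation. MCP is built on JSON-RPC 2.0:
# a client sends a "tools/call" request asking a server to run one of the tools
# it has advertised. The tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_account",                  # hypothetical tool name
        "arguments": {"account_id": "ACME-042"},   # hypothetical arguments
    },
}

print(json.dumps(request, indent=2))
```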
Contact centres extend agentic workflows through Salesforce Contact Center with Amazon Connect, adding voice self-service plus real-time transcription and sentiment. Complex issues hand off to representatives with full context, and Toyota Motor North America plans automation for service tasks.
Procurement scales via AWS Marketplace, where Salesforce surpassed $2bn in lifetime sales across 30 countries. AgentExchange listings provide prebuilt, customisable agents and workflows, helping enterprises adopt agentic AI faster with governance and security intact.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Most firms are still struggling to turn AI pilots into measurable value, Cisco’s 2025 AI Readiness Index finds. Only 13% are ‘AI-ready’, meaning they have scaled deployments that deliver results. The rest face gaps in data, security and governance.
Southeast Asia outperforms the global average at 16% ready. Indonesia reaches 23% and Thailand 21%, ahead of Europe at 11% and the Americas at 14%. Cisco says lower tech debt helps some emerging markets leapfrog.
Infrastructure debt is mounting: limited GPU capacity, fragmented data and constrained networks slow progress. Just 34% say their tech stack can adapt and scale for evolving compute needs. Most remain stuck in pilots.
Adoption plans are ambitious: 83% intend to deploy AI agents, with almost 40% expecting them to support staff within a year. Yet only one in three have change-management programmes, risking stalled workplace integration.
The leaders pair strong digital foundations with clear governance and cybersecurity embedded by design. Cisco urges broader collaboration among industry, government and tech firms, arguing that trust, regulation and investment will determine who monetises AI first.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI data centres in Scotland use enough tap water to fill over 27 million half-litre bottles annually, BBC News reports. The number of centres has quadrupled since 2021, with AI growth increasing energy and water use, though their water consumption remains a small fraction of the national supply.
Scottish Water urges developers to adopt closed-loop cooling or treated wastewater instead of relying only on mains water. Open-loop systems, still used in many centres, consume vast amounts of water, but closed-loop alternatives can reduce demand, though they may increase energy usage.
Experts warn that AI data centres have a significant carbon footprint as well. Analysis from the University of Glasgow estimates the energy use of Scottish centres could equate to each person in the country driving an extra 145 kilometres per year.
Academic voices have called for greater transparency from tech companies and suggested carbon targets and potential penalties to ensure sustainable growth.
The Scottish government and industry stakeholders are promoting ‘green’ AI development, citing Scotland’s cool climate, renewable energy resources, and local expertise. Developers are urged to balance AI expansion with Scotland’s net zero and resource sustainability goals.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.
The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that raises those signals only in immune-context-positive settings.
Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.
Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.
Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.
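Because the weights are public, experimenting with the model follows the usual Hugging Face transformers pattern. The sketch below uses a placeholder repository ID rather than the confirmed release name, and an illustrative prompt; Cell2Sentence-style models represent a cell as a ‘sentence’ of gene names ranked by expression.

```python
# Sketch of loading a Gemma-based checkpoint such as C2S-Scale 27B with the
# Hugging Face transformers library. The repository ID is a placeholder; check
# the official release on GitHub or Hugging Face for the real name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/c2s-scale-27b"  # placeholder, not the confirmed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Cell2Sentence-style prompts list gene names ranked by expression; the genes
# below are purely illustrative.
prompt = "MALAT1 B2M TMSB4X RPL13 ACTB"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```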
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new era in data technology is emerging as Starcloud, a member of NVIDIA’s Inception startup program, prepares to send its first AI-driven satellite into orbit next month.
The mission marks the debut of NVIDIA’s H100 GPU in space and represents a decisive step toward the creation of large-scale orbital data centres designed to meet the planet’s soaring demand for AI.
By operating data centres in space, Starcloud aims to cut energy costs tenfold and significantly reduce carbon emissions. The vacuum of space will serve as a natural cooling system, while constant exposure to solar energy will eliminate the need for batteries or backup power.
According to CEO Philip Johnston, the only environmental cost will come from the launch itself, resulting in substantial carbon savings over the data centre’s lifetime.
Starcloud’s technology could transform how Earth observation data is processed. Instead of transmitting raw information back to the ground, satellites will analyse it in real time, improving responses to wildfires, weather changes, and agricultural needs.
The company plans to run Google’s open AI model Gemma on its satellite and eventually integrate NVIDIA’s next-generation Blackwell GPUs, boosting computing power even further.
Johnston predicts that within a decade, most new data centres will be built in orbit. If achieved, Starcloud’s innovation could mark the beginning of a sustainable digital revolution powered by the stars instead of the grid.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report from the Higher Education Policy Institute warns of an urgent need to improve AI literacy among staff and students in the UK. The study argues that without coordinated investment in training and policy, higher education risks deepening digital divides and losing relevance in an AI-driven world.
Contributors to the report say universities must move beyond acknowledging AI’s presence and instead adopt structured strategies for skill development. Kate Borthwick adds that both staff and students require ongoing education to manage how AI reshapes teaching, assessment, and research.
The publication highlights growing disparities in access to and use of generative AI across gender, wealth, and academic discipline. In a chapter written by ChatGPT, the report suggests universities create AI advisory teams within research offices and embed AI training into staff development programmes.
Elsewhere, Ant Bagshaw from the Australian Public Policy Institute warns that generative AI could lead to cuts in professional services staff as universities seek financial savings. He acknowledges the transition will be painful but argues that it could drive a more efficient and focused higher education sector.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.
The draft introduces a four-tier risk classification, ranging from banned applications such as manipulative facial recognition to low-risk uses subject to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.
Companies deploying powerful general-purpose AI models would have to meet strict transparency, safety and intellectual property standards. The bill would also create a National AI Commission and a National AI Development Fund to support local research, regulatory sandboxes and tax incentives for emerging businesses.
Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. The phased rollout begins in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK government has developed an AI tool named ‘Consult’, which analysed over 50,000 responses to the Independent Water Commission review in just two hours. The system matched human accuracy and could save 75,000 days of work annually, worth £20 million in staffing costs.
Consult sorted responses into key themes at a cost of just £240, with experts needing only 22 hours to verify the results. The AI agreed with human experts 83% of the time, compared with 55% agreement between the human reviewers themselves, letting officials focus on policy instead of administrative work.
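Assuming the headline figure is simple percent agreement over the same set of responses (the article does not spell out the methodology), the comparison reduces to straightforward arithmetic, sketched below with invented theme labels.

```python
# Sketch of simple percent agreement between two annotators (e.g. the AI tool
# and a human reviewer) labelling the same consultation responses. The theme
# labels are invented for illustration.
ai_labels    = ["water quality", "billing", "governance", "billing", "water quality"]
human_labels = ["water quality", "billing", "billing",    "billing", "water quality"]

matches = sum(a == h for a, h in zip(ai_labels, human_labels))
agreement = matches / len(ai_labels)
print(f"Percent agreement: {agreement:.0%}")  # 80% in this toy example
```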
The technology has also been used to analyse consultations for the Scottish government on non-surgical cosmetics and the Digital Inclusion Action Plan. Part of the Humphrey suite, the tool helps government act faster and deliver better value for taxpayers.
Digital Government Minister Ian Murray highlighted the potential of AI to deliver efficient services and save costs. Engineers are using insights from Consult and Redbox to develop new tools, including GOV.UK Chat, a generative AI chatbot soon to be trialled in the GOV.UK App.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible,’ warning that it could undermine the integrity of the judicial system.
The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.
Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.
The judge emphasised that AI-generated information must be carefully controlled by humans, and that filing legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.
While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!