Microsoft to support UAE investment analytics with responsible AI tools

The UAE Ministry of Investment and Microsoft signed a Memorandum of Understanding at GITEX Global 2025 to apply AI to investment analytics, financial forecasting, and retail optimisation. The deal aims to strengthen data governance across the investment ecosystem.

Under the MoU, Microsoft will support upskilling through its AI National Skilling Initiative, targeting 100,000 government employees. Training will focus on practical adoption, responsible use, and measurable outcomes, in line with the UAE’s National AI Strategy 2031.

Both parties will promote best practices in data management using Azure services such as Data Catalog and Purview. Workshops and knowledge-sharing sessions with local experts will standardise governance. Strong controls are positioned as the foundation for trustworthy AI at scale.

The agreement was signed by His Excellency Mohammad Alhawi and Amr Kamel. Officials say the collaboration will embed AI agents into workflows while maintaining compliance. Investment teams are expected to gain real-time insights and automation that shorten the time to action.

The partnership supports the ambition to make the UAE a leader in AI-enabled investment. It also signals deeper public–private collaboration on sovereign capabilities. With skills, standards, and use cases in place, the ministry aims to attract capital and accelerate diversification.

Agentic AI at scale with Salesforce and AWS

Salesforce and AWS outlined a tighter partnership on agentic AI, citing rapid growth in enterprise agents and usage. They set four pillars for the ‘Agentic Enterprise’: unified data, interoperable agents, modernised contact centres and streamlined procurement via AWS Marketplace.

Data 360 ‘Zero Copy’ accesses Amazon Redshift data without duplicating it, while Data 360 Clean Rooms integrate with AWS Clean Rooms for privacy-preserving collaboration. 1-800Accountant reports that agents now resolve most routine inquiries, freeing human experts to focus on higher-value work.

Agentforce supports open standards such as Model Context Protocol and Agent2Agent to coordinate multi-vendor agents. Pilots link Bedrock-based agents and Slack integrations that surface Quick Suite tools, with Anthropic and Amazon Nova models available inside Salesforce’s trust boundary.
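
To make the interoperability claim concrete, the sketch below uses the reference Python MCP SDK to expose a single tool that any MCP-capable agent could discover and call; the server name, tool name and stubbed response are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch of an MCP tool server, assuming the reference Python SDK (package: mcp).
# The server name, tool name and stubbed response are illustrative, not from the announcement.
from mcp.server.fastmcp import FastMCP

server = FastMCP("crm-tools")  # hypothetical server name

@server.tool()
def account_summary(account_id: str) -> str:
    """Return a short, stubbed summary for a CRM account."""
    # A real deployment would query a CRM or data platform here.
    return f"Account {account_id}: 3 open opportunities, renewal due next quarter."

if __name__ == "__main__":
    # Runs over stdio by default, so any MCP-compatible agent can list and invoke the tool.
    server.run()
```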

Contact centres extend agentic workflows through Salesforce Contact Center with Amazon Connect, adding voice self-service plus real-time transcription and sentiment analysis. Complex issues are handed off to human representatives with full context, and Toyota Motor North America plans to automate service tasks.

Procurement scales via AWS Marketplace, where Salesforce surpassed $2bn in lifetime sales across 30 countries. AgentExchange listings provide prebuilt, customisable agents and workflows, helping enterprises adopt agentic AI faster with governance and security intact.

New Cisco study shows most companies aren’t AI-ready

Most firms are still struggling to turn AI pilots into measurable value, Cisco’s 2025 AI Readiness Index finds. Only 13% qualify as ‘AI-ready’, having scaled deployments that deliver results. The rest face gaps in data, security and governance.

Southeast Asia outperforms the global average at 16% ready. Indonesia reaches 23% and Thailand 21%, ahead of Europe at 11% and the Americas at 14%. Cisco says lower technical debt helps some emerging markets leapfrog more established peers.

Infrastructure debt is mounting: limited GPU capacity, fragmented data and constrained networks slow progress. Just 34% say their tech stack can adapt and scale for evolving compute needs. Most remain stuck in pilots.

Adoption plans are ambitious: 83% intend to deploy AI agents, with almost 40% expecting them to support staff within a year. Yet only one in three organisations has a change-management programme, risking stalled workplace integration.

The leaders pair strong digital foundations with clear governance and cybersecurity embedded by design. Cisco urges broader collaboration among industry, government and tech firms, arguing that trust, regulation and investment will determine who monetises AI first.

Scaling a cell ‘language’ model yields new immunotherapy leads

Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and Google CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.

The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. The team used C2S-Scale to search for a ‘conditional amplifier’: a drug that boosts those signals only in immune-context-positive settings.

Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.

Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are already present. That could reduce off-target effects and make ‘cold’ tumours visible to the immune system. The result hints at AI-guided routes to more precise cancer therapies.
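
In practice, the conditional-amplifier idea reduces to a dual-context comparison: score each candidate drug with the model in an immune-context-positive setting and an immune-context-negative one, and keep only the drugs whose predicted antigen-presentation boost appears in the first case but not the second. The sketch below illustrates that filtering logic; predict_boost is a placeholder for a model call, and the thresholds are arbitrary assumptions, not values from the study.

```python
# Illustrative filter for "conditional amplifiers": drugs predicted to boost antigen
# presentation only when immune context is present. predict_boost() is a placeholder
# for a model call; the thresholds are arbitrary, not taken from the study.
from typing import Callable

def find_conditional_amplifiers(
    drugs: list[str],
    predict_boost: Callable[[str, bool], float],  # (drug, immune_context_present) -> predicted boost
    min_in_context_boost: float = 0.5,
    max_out_of_context_boost: float = 0.1,
) -> list[str]:
    hits = []
    for drug in drugs:
        in_context = predict_boost(drug, True)        # immune-context-positive sample
        out_of_context = predict_boost(drug, False)   # immune-context-negative sample
        # Keep drugs that amplify the signal only where immune context is present.
        if in_context >= min_in_context_boost and out_of_context <= max_out_of_context_boost:
            hits.append(drug)
    return hits

if __name__ == "__main__":
    # Stub model: pretend only "drug_A" behaves in a context-dependent way.
    stub = lambda drug, ctx: 0.8 if (drug == "drug_A" and ctx) else 0.05
    print(find_conditional_amplifiers(["drug_A", "drug_B"], stub))  # -> ['drug_A']
```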

Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.
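
For readers who want to explore the release, a minimal loading sketch with the Hugging Face Transformers library is shown below; the repository ID and prompt are placeholders to be replaced with the details given on the official GitHub and Hugging Face pages, not confirmed names.

```python
# Minimal sketch of loading the released checkpoint with Hugging Face Transformers.
# MODEL_ID and the prompt are placeholders; take the real values from the
# official GitHub / Hugging Face release pages.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<org>/<c2s-scale-27b-checkpoint>"  # placeholder, not the confirmed repo name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Cell2Sentence-style models treat a cell's top expressed genes as a "sentence";
# the toy prompt below only illustrates the idea.
prompt = "CD3D CD8A GZMB IFNG"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```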

Report warns of AI-driven divide in higher education

A new report from the Higher Education Policy Institute warns of an urgent need to improve AI literacy among staff and students in the UK. The study argues that without coordinated investment in training and policy, higher education risks deepening digital divides and losing relevance in an AI-driven world.

Contributors to the report argue that universities must move beyond merely acknowledging AI’s presence and adopt structured strategies for skills development. Kate Borthwick adds that both staff and students require ongoing education to manage how AI reshapes teaching, assessment, and research.

The publication highlights growing disparities in access to and use of generative AI across gender, wealth, and academic discipline. In a chapter written by ChatGPT, the report suggests universities create AI advisory teams within research offices and embed AI training into staff development programmes.

Elsewhere, Ant Bagshaw from the Australian Public Policy Institute warns that generative AI could lead to cuts in professional services staff as universities seek financial savings. He acknowledges the transition will be painful but argues that it could drive a more efficient and focused higher education sector.

New ISO 27701 update strengthens privacy compliance

The International Organization for Standardization has released a major update to ISO 27701, the global standard for managing privacy compliance programmes. The revised version, published in 2025, separates the Privacy Information Management System (PIMS) requirements from ISO 27001, allowing a PIMS to be implemented as a standalone system.

The updated standard introduces detailed clauses defining how organisations should establish, implement and continually improve their PIMS. It places strong emphasis on leadership accountability, risk assessment, performance evaluation and continual improvement.

Annex A of the standard sets out new control tables for both data controllers and processors. The update also refines terminology and aligns more closely with the principles of the EU GDPR and UK GDPR, making it suitable for multinational organisations seeking a unified privacy management approach.

Experts say the revised ISO 27701 offers a flexible structure but should not be seen as a substitute for legal compliance. Instead, it provides a foundation for building stronger, auditable privacy frameworks that align global business operations with evolving regulatory standards.

Vietnam unveils draft AI law inspired by EU model

Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.

The law introduces a four-tier system for classifying risks, from banned applications such as manipulative facial recognition to low-risk uses subject to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.

Under the draft, companies deploying powerful general-purpose AI models would have to meet strict transparency, safety and intellectual property standards. The bill would also create a National AI Commission and a National AI Development Fund to support local research, regulatory sandboxes and tax incentives for emerging businesses.

Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. The phased rollout begins in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.

UK government uses AI to boost efficiency and save taxpayer money

The UK government has developed an AI tool named ‘Consult’, which analysed over 50,000 responses to the Independent Water Commission review in just two hours. The system matched human accuracy and could save an estimated 75,000 days of work annually, worth around £20 million in staffing costs.

Consult sorted the responses into key themes at a cost of just £240, with experts needing only 22 hours to verify the results. The AI agreed with human experts 83% of the time, compared with 55% agreement between the human reviewers themselves, freeing officials to focus on policy rather than administrative work.
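
The published figures do not describe Consult’s internals, but the underlying task of grouping tens of thousands of free-text responses into recurring themes can be sketched with off-the-shelf tools, as below: embed each response and cluster the embeddings. This is an illustrative approach, not the Humphrey suite’s actual pipeline, and the sample responses are invented.

```python
# Illustrative theme-grouping for consultation responses: embed free-text answers and
# cluster them. Not the actual Consult/Humphrey implementation; sample data is invented.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "Water bills are too high for the service received.",
    "Sewage discharges into rivers must stop.",
    "Bills keep rising while leaks go unfixed.",
    "River pollution enforcement needs more funding.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedding model
embeddings = embedder.encode(responses)

kmeans = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(embeddings)
for theme, response in zip(kmeans.labels_, responses):
    print(theme, response)  # responses sharing a label form one draft theme for expert review
```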

The technology has also been used to analyse consultations for the Scottish government on non-surgical cosmetics and the Digital Inclusion Action Plan. Part of the Humphrey suite, the tool helps government act faster and deliver better value for taxpayers.

Digital Government Minister Ian Murray highlighted the potential of AI to deliver efficient services and save costs. Engineers are using insights from Consult and Redbox to develop new tools, including GOV.UK Chat, a generative AI chatbot soon to be trialled in the GOV.UK App.

Quebec man fined for using AI-generated evidence in court

A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible,’ warning that it could undermine the integrity of the judicial system.

The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.

Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.

The judge emphasised that AI-generated information must be carefully controlled by humans, and the filing of legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.

While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.
