Microsoft leaders envision AI as an invisible partner in work and play

AI, gaming and work were at the heart of the discussion during the Paley International Council Summit, where three Microsoft executives explored how technology is reshaping human experience and industry structures.

Mustafa Suleyman, Phil Spencer and Ryan Roslansky offered perspectives on the next phase of digital transformation, from personalised AI companions to the evolution of entertainment and the changing nature of work.

Mustafa Suleyman, CEO of Microsoft AI, described a future where AI becomes an invisible companion that quietly assists users. He explained that AI is moving beyond standalone apps to integrate directly into systems and browsers, performing tasks through natural language rather than manual navigation.

With features like Copilot on Windows and Edge, users can let AI automate everyday functions, creating a seamless experience where technology anticipates needs rather than simply responding to them.

Phil Spencer, CEO of Microsoft Gaming, underlined gaming’s cultural impact, noting that the industry now surpasses film, books and music combined. He emphasised that gaming’s interactive nature offers lessons for all media, where creativity, participation and community define success.

For Spencer, the future of entertainment lies in blending audience engagement with technology, allowing fans and creators to shape experiences together.

Ryan Roslansky, CEO of LinkedIn, discussed how AI is transforming skills and workforce dynamics. He highlighted that required job skills are changing faster than ever, with adaptability, AI literacy and human-centred leadership becoming essential.

Roslansky urged companies to focus on potential and continuous learning instead of static job descriptions, suggesting that the most successful organisations will be those that evolve with technology and cultivate resilience through education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kenya launches national AI skills alliance

In a significant step for Africa’s digital economy, the Kenya Private Sector Alliance (KEPSA) partnered with Microsoft to launch the Kenya AI Skilling Alliance (KAISA), a national platform aimed at accelerating inclusive and responsible AI adoption. The announcement, made in Nairobi, brings together government, academia, the private sector and development partners.

The platform responds to fragmentation in Kenya’s AI ecosystem by uniting training, innovation and policy into a coherent framework. With Africa’s AI potential estimated at up to USD 1.5 trillion by 2030, Kenya, already among the continent’s most AI-ready nations, is making deliberate efforts to turn promise into skills, jobs and innovation.

Leaders emphasised inclusivity: equipping youth, women and marginalised communities to participate meaningfully in the AI-driven economy. The Alliance will host sector-based working groups, national skilling programmes and an AI repository and innovation hub over its 24-month roadmap.

This initiative highlights how developing nations are moving beyond simply adopting technology to building capacity, governance and local innovation. It links directly to broader themes of digital diplomacy and capacity building across the African continent, reinforcing that skill ecosystems matter as much as hardware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE and Google launch ‘AI for All’ national skills initiative

In a major public-private collaboration, the UAE’s Artificial Intelligence, Digital Economy, and Remote Work Applications Office and Google announced the ‘AI for All’ initiative, aimed at delivering AI skills training across the United Arab Emirates.

The initiative was announced on 29 October 2025 and will roll out through 2026.

The programme targets a broad audience: students, teachers, university learners and government employees, as well as small and medium-sized enterprises (SMEs), creatives and content-makers.

It will cover fundamentals of AI, practical use-cases, responsible and safe AI use, and prompt-engineering for generative models. Google is also providing university students and other participants access to its advanced Gemini models as part of the skilling effort.

This initiative reflects the UAE’s broader ambition to become a global hub for innovation and talent in the AI economy, as well as Google’s regional strategy under its ‘AI Opportunity Initiative’ for the Middle East & North Africa.

By combining training, awareness campaigns and access to AI tools, the collaboration seeks to ensure that AI’s benefits are accessible to all segments of society in the UAE.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A new capitalism for the intelligent age

In his article for Time, Klaus Schwab argues that business is undergoing a deeper transformation than in previous technological revolutions. He notes that we are entering what he terms the ‘Intelligent Age’, where value lies less in physical assets and more in ideas, relationships and the ability to learn faster than the pace of change.

According to Schwab, the assumptions of the Industrial Age (that growth meant simply scaling, that efficiency trumped adaptability, and that workers were interchangeable) no longer hold. Instead, enterprises must become living ecosystems: adaptable platforms rather than pipelines.

However, Schwab warns that intelligent technologies such as AI and automation are not inherently benign.

On the one hand, they can amplify human potential; on the other, if misused, they risk diminishing it. Business leaders must therefore undergo not just digital transformation, but a mental transformation, embracing resilience, inclusivity and human dignity as core values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is weighing whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven cybercrime rises across Asia

Cybersecurity experts met in Dubai for the World Economic Forum’s Annual Meeting of the Global Future Councils and its Annual Meeting on Cybersecurity. More than 500 participants, including 150 top cybersecurity leaders, discussed how emerging technologies such as AI are reshaping digital security.

UAE officials highlighted the importance of resilience, trust and secure infrastructure as fundamental to future prosperity. Sessions examined how geopolitical shifts and technological advances are changing the cyber landscape and stressed the need for coordinated global action.

AI-driven cybercrime is rising sharply in Japan, with criminals exploiting advanced technology to scale attacks and target data. Recent incidents include a cyber attack on Asahi Breweries, which temporarily halted production at its domestic factories.

Authorities are calling for stronger cross-border collaboration and improved cybersecurity measures, while Japan’s new Prime Minister, Sanae Takaichi, pledged to enhance cooperation on AI and cybersecurity with regional partners.

Significant global developments include the signing of the first UN cybercrime treaty by 65 nations in Viet Nam, establishing a framework for international cooperation, rapid-response networks and stronger legal protections.

High-profile cyber incidents in the UK, including attacks on Jaguar Land Rover and a nursery chain, have highlighted the growing economic and social costs of cybercrime. These events are prompting calls for businesses to prioritise cyber resilience.

Experts warn that technology is evolving faster than cyber defences, leaving small businesses and less developed regions highly vulnerable. Integrating AI, automation and proactive security strategies is seen as essential to protect organisations and ensure global digital stability.

Cyber resilience is increasingly recognised not just as an IT issue but as a strategic imperative for economic and national security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts caution that AI growth could double data centre energy use

AI’s rapid growth is fuelling a surge in electricity consumption across the United States, with data centres emerging as major contributors. Analysts warn that expanding AI infrastructure is pushing up national energy demand and could drive higher electricity bills for homes and businesses.

The US hosts more than 4,000 data centres, concentrated mainly in Virginia, Texas and California. Many now operate high-performance AI systems that consume up to 30 times more electricity than traditional facilities, according to energy experts.

The International Energy Agency reported that US data centres used a record 183 terawatt-hours of electricity in 2024, about 4% of national demand. That figure could more than double by 2030, reaching 426 terawatt-hours, as companies race to expand cloud and AI capacity.
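
The scale of that projection is easy to check from the reported numbers alone. The snippet below is a minimal sketch using only the figures cited above (the 2030 value is a projection and the 4% share is approximate); it simply illustrates the arithmetic behind ‘more than double’.

```python
# Rough arithmetic check on the reported IEA figures (illustrative only;
# the 2030 number is a projection and the 4% share is approximate).
usage_2024_twh = 183   # US data centre electricity use in 2024 (TWh)
usage_2030_twh = 426   # projected US data centre electricity use in 2030 (TWh)
share_2024 = 0.04      # data centres' approximate share of national demand in 2024

implied_national_demand_twh = usage_2024_twh / share_2024
growth_factor = usage_2030_twh / usage_2024_twh

print(f"Implied total US demand in 2024: ~{implied_national_demand_twh:,.0f} TWh")
print(f"Projected data centre growth by 2030: ~{growth_factor:.1f}x")
# Output: ~4,575 TWh implied national demand; ~2.3x growth, i.e. more than double.
```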

With 60% of energy use tied to servers and processing hardware, the shift toward AI-driven computing poses growing challenges for green energy infrastructure. Researchers say that without major efficiency gains, the nation’s power grid will struggle to keep pace with AI’s accelerating appetite for electricity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Report reveals major barriers to UK workforce AI skills development

A new government analysis has identified deep-rooted barriers preventing widespread development of AI skills in the UK’s workforce. The research highlights systemic challenges across education, funding and awareness that threaten the country’s ambition to build an inclusive and competitive AI economy.

UK experts found widespread confusion over what constitutes AI skills, with inconsistent terminology creating mismatches between training, qualifications, and labour market needs. Many learners and employers still conflate digital literacy with AI competence.

The report also revealed fragmented training provision, limited curriculum responsiveness, and fragile funding cycles that hinder long-term learning. Many adults lack even basic digital literacy, while small organisations and community programmes struggle to sustain AI courses beyond pilot stages.

Employers were found to have an incomplete understanding of their own AI skills needs, particularly within SMEs and public sector organisations. Without clearer frameworks, planning tools, and consistent investment, experts warn the UK risks falling behind in responsible AI adoption and workforce readiness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nordic ministers fund AI language model network

Nordic ministers for culture have approved funding for a new network dedicated to language models for AI. The decision, taken at a meeting in Stockholm on 29 October, aims to ensure AI development reflects the region’s unique linguistic and cultural traits.

It is one of the first projects for the recently launched Nordic-Baltic centre for AI, New Nordics AI.

The network will bring together national stakeholders to address shared challenges in AI language models. The initiative aims to protect smaller languages and ensure AI tools reflect Nordic linguistic diversity through knowledge sharing and collaboration.

Finland’s Minister for Research and Culture, Mari-Leena Talvitie, said the project is a key step in safeguarding the future of regional languages in digital tools.

Ministers also discussed AI’s broader cultural impact, highlighting issues such as copyright and the need for regional oversight. The network will identify collaboration opportunities and guide future investments in culturally and linguistically anchored Nordic AI solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India deploys AI to modernise its military operations

In a move reflecting its growing strategic ambitions, India is rapidly implementing AI across its defence forces. The country’s military has moved from policy to practice, using tools ranging from real-time sensor fusion to predictive maintenance to transform how it fights.

The shift has involved institutional change. India’s Defence AI Council and Defence AI Project Agency (established 2019) are steering an ecosystem that includes labs such as the Centre for Artificial Intelligence & Robotics of the Defence Research and Development Organisation (DRDO).

One recent example is Operation Sindoor (May 2025), a cross-border operation in which AI-driven platforms appeared in roles ranging from intelligence analysis to operational coordination.

This effort signals more than just a technological upgrade. It underscores a shift in warfare logic, where systems of systems, connectivity and rapid decision-making matter more than sheer numbers.

India’s incorporation of AI into capabilities such as drone swarming, combat simulation and logistics optimisation aligns with broader trends in defence innovation and digital diplomacy. The country’s strategy now places AI at the heart of its procurement demands and force design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!