Morocco outlines national AI roadmap to 2030

Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.

The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.

A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.

The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage rather than fragmented deployment.

The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.

Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving Morocco’s international AI readiness ranking.

Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy, and the development of an Arab-African regional digital hub in partnership with the United Nations Development Programme.

Both measures aim to support sustainable and responsible digital innovation.

AI gap reflects China’s growing technological ambitions

China’s AI sector could narrow the AI gap with the United States through growing risk-taking and innovation, according to leading researchers. Despite export controls on advanced chipmaking tools, Chinese firms are accelerating development across multiple AI fields.

Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI who is now an AI scientist at Tencent, said a Chinese company could become the world’s leading AI firm within three to five years. He pointed to China’s strengths in electricity supply and infrastructure as key advantages.

Yao said the main bottlenecks remain production capacity, including access to advanced lithography machines, and the maturity of the software ecosystem. These limits still restrict China’s ability to manufacture the most advanced semiconductors and, in turn, to narrow the AI gap with the US.

China has developed a working prototype of an extreme-ultraviolet lithography machine that could eventually rival Western technology. However, Reuters reported the system has not yet produced functioning chips.

Sources familiar with the project said commercial chip production using the machine may not begin until around 2030. Until then, Chinese AI ambitions are likely to remain constrained by hardware limitations.

Taiwan aims to train 500,000 AI professionals

Taiwan aims to train 500,000 AI professionals by 2040, backed by a NT$100 billion (around US$3.2 billion) government venture fund. President Lai Ching-te announced the target at the 2026 AI talent forum in Taipei.

The government’s 10-year AI plan includes a national computing centre and the development of technologies such as silicon photonics, quantum computing, and robotics. President Lai said that national competitiveness depends on both chipmaking and citizens’ ability to utilise AI across various disciplines.

To achieve these goals, AI training courses are being introduced for public sector employees, and students are being encouraged to acquire AI skills. The initiative aims to foster cooperation between government, industry, and academia to drive economic transformation.

With a larger pool of AI professionals, Taiwan hopes to help small and medium-sized enterprises accelerate digital upgrades, enhance innovation, and strengthen the nation’s global competitiveness in emerging technologies.

UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology Secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order falls under the Digital Services Act and follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during the German election campaign. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act became fully applicable in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Samsung puts AI trust and security at the centre of CES 2026

South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.

During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably and users retain clear control, rather than feeling locked inside opaque technologies.

Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.

Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.

Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.

Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.

Consumer behaviour formed a final point of discussion. Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.

The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.

UAE deploys AI ecosystem to support climate-vulnerable agriculture

The United Arab Emirates has launched an AI-driven ecosystem to help climate-vulnerable agricultural regions adapt to increasingly volatile weather. The initiative reinforces the country’s ambition to position itself as a global hub for applied AI in climate resilience and food security.

Unveiled in Abu Dhabi, the programme builds on a US$200m partnership with the Gates Foundation announced during COP28. It reflects a shift from climate pledges toward deployable technology as droughts, floods and heat stress intensify pressure on agriculture, particularly in the Global South.

At the core is an integrated ecosystem linking scientific research, AI model development and digital advisory tools with large-scale deployment. Rather than isolated pilots, the programmes are designed to translate data into practical tools used directly by governments, NGOs and farmers.

Abu Dhabi is positioning itself as a hub for agricultural AI through the CGIAR AI Hub and a new institute at Mohamed bin Zayed University of Artificial Intelligence. The ecosystem also includes AgriLLM, an open-source model trained on agricultural and climate data.

Delivery is supported by AIM for Scale, a joint UAE–Gates Foundation initiative expanding AI-powered weather forecasting in data-scarce regions. In India, AI-enabled monsoon forecasts reached an estimated 38 million farmers in 2025, with further deployments planned.

CES 2026 shows AMD betting on on-device AI at scale

AMD used CES 2026 to position AI as a default feature of personal and commercial computing. The company said AI is no longer limited to premium systems. Instead, it is being built directly into processors for consumer PCs, business laptops, compact desktops, and embedded platforms.

Executives described the shift as a new phase in AI adoption. CEO Lisa Su said usage has grown from early experimentation to more than one billion active users worldwide. Senior vice president Jack Huynh added that AI is redefining the PC by embedding intelligence, performance, and efficiency across devices.

The strategy centres on the Ryzen AI 400 Series and Ryzen AI PRO 400 Series processors. These chips integrate neural processing units delivering up to 60 TOPS of local AI compute. Built on Zen 5 architecture and XDNA 2 NPUs, they target Copilot+ PCs and enterprise deployments.

AMD also expanded its Ryzen AI Max+ portfolio for ultra-thin laptops, mini-PCs, and small workstations. The processors combine CPU, GPU, and NPU resources in a unified memory design. Desktop users saw the launch of the Ryzen 7 9850X3D, while developers were offered the Ryzen AI Halo platform.

Beyond PCs, AMD introduced a new Ryzen AI Embedded processor lineup for edge deployments. The chips are aimed at vehicles, factories, and autonomous systems. AMD said single-chip designs will support real-time AI workloads in robotics, digital twins, smart cameras, and industrial automation.
