Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.
The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.
A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.
The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage over fragmented deployment.
The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.
Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving the international AI readiness ranking of Morocco.
Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy, and the development of an Arab-African regional digital hub in partnership with the United Nations Development Programme.
Their main goal is to support sustainable and responsible digital innovation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.
Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.
The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.
X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.
eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.
Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.
Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.
Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
The Welsh Government is providing £2.1 million in funding to support small and medium-sized businesses across Wales in adopting AI. The initiative aims to promote the ethical and practical use of AI, enhancing productivity and competitiveness.
Business Wales will receive £600,000 to deliver an AI awareness and adoption programme, following recent reviews on SME productivity. Additional funding will enhance tourism and events through targeted AI projects and practical workshops.
A further £1 million will expand AI upskilling through the Flexible Skills Programme, addressing digital skills gaps across regions and sectors. Employers will contribute part of the training costs to support inclusive growth.
Swansea-based Something Different Wholesale is already using AI to automate tasks, analyse market data and improve customer services. Welsh ministers say the funding supports the responsible adoption of AI, aligned with the AI Plan for Wales.
Innovations across China are moving rapidly from laboratories into everyday use, spanning robotics, autonomous vehicles and quantum computing. Airports, hotels and city streets are increasingly becoming testing grounds for advanced technologies.
In Hefei, humanoid cleaning robots developed by local start-up Zerith are already operating in public venues across major cities. The company scaled from prototype to mass production within a year, securing significant commercial orders.
Beyond robotics, frontier research is finding industrial applications in energy, healthcare and manufacturing. Advances from fusion research and quantum mechanics are being adapted for cancer screening, battery safety and precision measurement.
Policy support and investment are accelerating this transition from research to market. National planning and local funding initiatives aim to turn scientific breakthroughs into scalable technologies with global reach.
Teachers across Colorado are exploring how AI can be used as an instructional assistant to support teaching and student learning.
Some educators are experimenting with generative AI tools that help with tasks like lesson planning, summarising material and creating examples, while also educating students on responsible use of AI.
The broader trend mirrors state and district efforts to develop AI strategies for education. Reports indicate that many districts are establishing steering committees and policies to guide the safe and effective use of AI in classrooms, while others limit student access due to privacy concerns, underscoring the need for training and clear guidelines.
Teachers have noted both benefits, such as time savings and personalised support, and challenges, including ethical questions about plagiarism and student independence, highlighting a period of experimentation and adjustment as AI becomes part of mainstream education.
AI is accelerating the creation of digital twins by reducing the time and labour required to build complex models. Consulting firm McKinsey says specialised virtual replicas can take six months or more to develop, but generative AI tools can now automate much of the coding process.
McKinsey analysts say AI can structure inputs and synthesise outputs for these simulations, while the models provide safe testing environments for AI systems. Together, the technologies can reduce costs, shorten development cycles, and accelerate deployment.
Quantum Elements, a startup backed by QNDL Participations and the USC Viterbi School of Engineering, is applying this approach to quantum computing. Its Constellation platform combines AI agents, natural language tools, and simulation software.
The company says quantum systems are hard to model because qubits behave differently across hardware types such as superconducting circuits, trapped ions, and photonics. These variations affect stability, error rates, and performance.
By using digital twins, developers can test algorithms, simulate noise, and evaluate error correction without building physical hardware. Quantum Elements says this can cut testing time from months to minutes.
Taiwan aims to train 500,000 AI professionals by 2040, backed by a NT$100 billion (US$3.16 billion) government venture fund. President Lai Ching-te announced the initiative at the 2026 AI talent forum in Taipei.
The government’s 10-year AI plan includes a national computing centre and the development of technologies such as silicon photonics, quantum computing, and robotics. President Lai said that national competitiveness depends on both chipmaking and citizens’ ability to use AI across various disciplines.
To achieve these goals, AI training courses are being introduced for public sector employees, and students are being encouraged to acquire AI skills. The initiative aims to foster cooperation between government, industry, and academia to drive economic transformation.
With a larger pool of AI professionals, Taiwan hopes to help small and medium-sized enterprises accelerate digital upgrades, enhance innovation, and strengthen the nation’s global competitiveness in emerging technologies.
UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.
The discussions focus on shared regulatory approaches rather than immediate bans.
X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.
In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.
Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.
X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.
European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.
Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.
Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.
Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.
Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.
The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.
Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.
The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.
A few hours later, Instagram responded by stating that no breach of internal systems occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.
The platform said affected accounts remained secure, with no unauthorised access recorded.
Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.
Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.