Lille proposed as EU customs hub

France has submitted a bid to host the future EU Customs Authority in Lille, positioning itself at the centre of efforts to modernise the customs union. The proposal highlights national expertise and a leading role in shaping recent reforms.

Authorities argue the new body will strengthen internal market security, improve oversight of e-commerce and enhance cooperation between member states. France has supported initiatives to tackle illicit trade and improve risk management.

Officials also point to strong operational experience, including international customs networks and the use of AI tools to screen postal shipments. Such capabilities are presented as key to supporting the authority from its launch, although questions have been raised about the use of AI and its potential biases.

Lille is promoted as a strategic logistics hub with strong transport links and access to skilled workers. Its location near major European trade routes is expected to support recruitment and coordination across the bloc.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Digital divide shapes AI job outcomes

A joint study by the International Labour Organization and the World Bank finds that AI will reshape labour markets unevenly across countries. Research covering 135 economies highlights growing risks for workers as automation expands.

Advanced economies show higher exposure to AI, particularly in clerical and professional roles. Lower-income regions face fewer direct impacts but lack the infrastructure and skills needed to capture productivity gains.

The digital divide plays a central role, with many vulnerable jobs already online and therefore exposed to automation. Workers in roles with potential benefits often lack reliable internet access, limiting opportunities.

The study’s findings suggest outcomes depend on infrastructure, skills and job design rather than technology alone. Policymakers are urged to improve connectivity, training and social protections to spread benefits more evenly.

FCA outlines AI-driven plan to modernise financial regulation

The UK’s Financial Conduct Authority (FCA) has outlined plans to integrate AI and data-driven tools into its regulatory processes as part of its 2026/27 work programme to become a more efficient and effective regulator.

The programme includes developing an internal authorisation tool to speed up approvals and using generative AI to review documents and support supervision, while maintaining human decision-making at the core of regulatory actions.

The FCA said it will also test automated data-sharing in a sandbox environment, expand its Supercharged Sandbox for firms developing AI-based financial products, and invest in analytics to better identify risks and prioritise cases.

Measures to reduce burdens on firms include removing certain data reporting requirements, simplifying digital processes and improving authorisation timelines, alongside efforts to enhance firms’ experience through new tools and feedback mechanisms.

The regulator also plans to support economic growth and consumer protection by advancing measures such as regulating buy now pay later products, speeding up IPO processes, expanding international presence, and addressing emerging risks, including the use of general-purpose AI in financial decision-making.

National security rules to prioritise UK contracts in AI, steel and shipbuilding

The UK government has announced new procurement guidance that will treat shipbuilding, steel, AI, and energy infrastructure as critical to national security, directing departments to prioritise British businesses where necessary to protect it. The press release was published on 26 March by the Cabinet Office and its Minister, Chris Ward.

According to the government, the new approach is intended to respond to recent supply-chain fragility and strengthen domestic capacity in sectors it describes as vital to national security. The guidance is presented as the first clear framework for how departments can protect the UK’s economic security and build resilience in the four named sectors.

Additional measures in the package go beyond sector prioritisation. The government says departments will either use British steel or provide a justification if steel is sourced from overseas, linking the change to the UK Steel Strategy launched the previous week. Officials also say the reforms support the government’s Modern Industrial Strategy and follow the publication of the National Security Strategy.

Procurement reform is another part of the package. Under a new Public Interest Test, departments will be asked to assess whether outsourced service contracts worth more than £1 million could be delivered more effectively in-house. The government says the test will cover more than 95% of central government contracts by value.

Community impact is also being built into the contracting framework. Departments will be required to publish and report annually on a specific social value goal for contracts above £5 million, which the government says will cover more than 90% of central government contracts by value. Companies bidding for public contracts are also being encouraged to include commitments on local jobs, skills, and apprenticeships.

The press release also says a new suite of AI tools has been developed to streamline the commercial process. Contract terms will be simplified, and additional business information will be integrated into a central platform, with the stated aim of reducing repeated submissions by smaller businesses bidding for multiple contracts.

Chris Ward said: ‘This Government is backing British businesses and the working people who power them. These reforms are about using the full weight of Government spending to support British jobs, protect our national security and grow our economy.’ He added: ‘Whether you make steel in Scunthorpe, build ships on the Clyde or run a small tech firm in the Midlands, this Government is on your side.’

VTC expands AI training across all programmes in Hong Kong

The Vocational Training Council (VTC) has introduced an ‘AI for All’ strategy to integrate AI training across its programmes, aiming to support Hong Kong’s ambition to strengthen its innovation and technology sector.

The initiative aligns with broader policy priorities, including the ‘AI Plus’ approach outlined in national planning frameworks and Hong Kong’s budget, which emphasise integrating AI across industries while addressing a shortage of skilled professionals.

Under the ‘AI+Professional’ model, all Higher Diploma students are required to study IT modules covering prompt engineering, generative AI, and AI ethics and security, with training adapted to disciplines such as engineering, design, and information technology.

The council has also partnered with technology companies through memorandums of understanding. It provides ongoing training for employees in government and industry, while offering internal AI tools and a ‘Virtual Tutor’ platform to support teaching and learning.

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings reflect concerns that children can easily access such platforms, and that existing safeguards do not effectively prevent them from doing so.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, and content blurring failed to keep minors from accessing harmful material.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalty payments to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

EU opens probe into Snapchat child safety compliance

The European Commission has launched formal proceedings to assess whether Snapchat is complying with child protection obligations under the Digital Services Act. The investigation focuses on whether the platform ensures adequate safety, privacy, and security for minors.

Authorities suspect Snapchat may have failed to prevent exposure of children to grooming attempts, recruitment for criminal activity, and content linked to illegal goods such as drugs, vapes, and alcohol.

Concerns also include whether minors can be effectively prevented from accessing the platform or interacting with adults posing as peers.

The inquiry will examine age assurance methods, default account settings, reporting tools, and the spread of illegal content. Regulators argue that self-declared age may be insufficient, while default settings and recommendations may expose minors to risks.

The Commission will now gather further evidence through information requests, inspections, and interviews, and may take enforcement actions, including interim measures or penalties.

National regulators will support the investigation as part of coordinated oversight under the Digital Services Act.

Meta unveils TRIBE v2 brain modelling AI

TRIBE v2 is a next-generation AI model introduced by Meta, designed to simulate how the human brain responds to complex stimuli such as images, sounds and language. The system functions as a digital twin of neural activity, enabling high-speed and high-resolution predictions of brain responses.

Built on data from over 700 volunteers, TRIBE v2 analyses fMRI recordings to predict brain responses to media such as videos, podcasts, and text. The model improves significantly on previous approaches, offering higher accuracy and the ability to generalise across new subjects, tasks, and languages.

Meta says the system could enable brain studies without human participants in every experiment, potentially accelerating research into neurological conditions. The approach may also support future AI development by incorporating principles derived from neuroscience.

Alongside the launch, Meta has released a research paper, model code, and interactive demo under a non-commercial licence to encourage wider exploration and collaboration in neuroscience and AI research.

Mistral AI launches open-source voice model for enterprises

Mistral AI has introduced a new open-source text-to-speech model designed to power voice assistants and enterprise applications, giving organisations an alternative to proprietary solutions.

The model, named Voxtral TTS, marks the company’s entry into the competitive voice AI market alongside players such as OpenAI and ElevenLabs.

Voxtral TTS supports nine languages, including English, French, German, Spanish, and Arabic, allowing organisations to deploy multilingual voice systems across different markets.

The Mistral AI model is designed to run efficiently on devices such as smartphones, laptops, and even wearables, reducing infrastructure costs by avoiding reliance on large-scale cloud systems.

It can replicate custom voices using only a few seconds of audio, capturing accents and speech patterns while maintaining consistency across languages.

The system is optimised for real-time performance, delivering rapid response times and enabling applications such as live translation, dubbing, and customer engagement tools.

Built on a compact architecture, it balances efficiency with high-quality output, aiming to produce natural-sounding speech instead of robotic voice synthesis. Earlier releases of transcription models suggest a broader strategy to develop a full suite of voice technologies.

Looking ahead, Mistral AI plans to expand towards end-to-end multimodal systems capable of handling audio, text, and image inputs within a single platform.

The company’s focus on open-source development and customisation is intended to attract enterprises seeking flexible solutions, positioning its technology as an alternative to closed ecosystems in the growing voice AI market.
