Wikipedia limits generative AI use in article creation

Wikipedia has strengthened its approach to AI use, introducing new restrictions on the use of generative AI in article creation and editing. The changes reflect growing concerns about accuracy, sourcing and editorial standards.

Guidance issued in January 2026 warned contributors against copying and pasting outputs from generative AI into articles. Editors were advised to avoid using such tools to create new entries, as the content often fails verification against reliable sources.

In March 2026, stricter rules were introduced, prohibiting the use of AI to generate or rewrite article content. Limited exceptions allow AI to copyedit one’s own writing or translate material from other Wikipedia language versions.

The updated framework highlights concerns that AI-generated text may include fabricated references, bias and non-encyclopaedic language. Wikipedia continues to allow AI for support tasks such as identifying gaps and locating sources, while maintaining human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

India strengthens digital economy with AI and media initiatives

India has launched three initiatives to expand AI adoption, digital content creation and access to broadcasting services. The programme focuses on building an AI-skilled workforce and strengthening the country’s digital ecosystem.

A national AI skilling initiative aims to train 15,000 creators and media professionals through partnerships with Google and YouTube. The programme covers generative AI, prompting and advanced tools, supporting future-ready skills in media and creative industries.

The government also introduced MyWAVES, a platform within WAVES OTT that enables users to create, upload and share content. Designed for user-generated content, it supports multiple formats and multilingual participation across India.

Access to broadcasting has been simplified through in-built satellite tuners and an advanced programme guide in television sets. The update removes the need for set-top boxes, improving affordability and expanding reach, particularly in remote areas.

Study highlights blind spots in digital regulation

A recent academic study argues that legal frameworks fail to fully capture the power of cloud infrastructure in the digital economy. The research suggests that regulators focus too heavily on services and outputs rather than on the underlying systems that shape markets.

Authors highlight how major cloud providers influence innovation, data flows and technological development. Existing laws are said to overlook the ability of these actors to structure markets and define how digital systems operate.

The paper links these gaps to fragmented legal approaches, particularly in the EU, that treat technology as a series of isolated issues. Such perspectives risk missing broader forms of control embedded in infrastructure and platform ecosystems.

Researchers call for a shift in legal thinking to better recognise infrastructure-level power and its societal impact. Stronger frameworks are seen as essential as global digital systems become increasingly central to economic and political life.

Lille proposed as EU customs hub

France has submitted a bid to host the future EU Customs Authority in Lille, positioning itself at the centre of efforts to modernise the customs union. The proposal highlights national expertise and a leading role in shaping recent reforms.

Authorities argue the new body will strengthen internal market security, improve oversight of e-commerce and enhance cooperation between member states. France has supported initiatives to tackle illicit trade and improve risk management.

Officials also point to strong operational experience, including international customs networks and the use of AI tools to screen postal shipments. Such capabilities are presented as key to supporting the authority from its launch, although questions have been raised about potential bias in AI-based screening.

Lille is promoted as a strategic logistics hub with strong transport links and access to skilled workers. Its location near major European trade routes is expected to support recruitment and coordination across the bloc.

Digital divide shapes AI job outcomes

A joint study by the International Labour Organization and the World Bank finds that AI will reshape labour markets unevenly across countries. Research covering 135 economies highlights growing risks for workers as automation expands.

Advanced economies show higher exposure to AI, particularly in clerical and professional roles. Lower-income regions face fewer direct impacts but lack the infrastructure and skills needed to capture productivity gains.

The digital divide plays a central role, with many vulnerable jobs already online and therefore exposed to automation. Workers in roles with potential benefits often lack reliable internet access, limiting opportunities.

The ILO’s findings suggest outcomes depend on infrastructure, skills and job design rather than technology alone. Policymakers are urged to improve connectivity, training and social protections to spread benefits more evenly.

FCA outlines AI-driven plan to modernise financial regulation

The UK’s Financial Conduct Authority (FCA) has outlined plans to integrate AI and data-driven tools into its regulatory processes as part of its 2026/27 work programme, with the aim of becoming a more efficient and effective regulator.

The programme includes developing an internal authorisation tool to speed up approvals and using generative AI to review documents and support supervision, while maintaining human decision-making at the core of regulatory actions.

The FCA said it will also test automated data-sharing in a sandbox environment, expand its Supercharged Sandbox for firms developing AI-based financial products, and invest in analytics to better identify risks and prioritise cases.

Measures to reduce burdens on firms include removing certain data reporting requirements, simplifying digital processes and improving authorisation timelines, alongside efforts to enhance firms’ experience through new tools and feedback mechanisms.

The regulator also plans to support economic growth and consumer protection by advancing measures such as regulating buy-now-pay-later products, speeding up IPO processes and expanding its international presence, while addressing emerging risks, including the use of general-purpose AI in financial decision-making.

National security rules to prioritise UK contracts in AI, steel and shipbuilding

The UK government has announced new procurement guidance that will treat shipbuilding, steel, AI and energy infrastructure as critical to national security, directing departments to prioritise British businesses where necessary to protect it. The press release was published on 26 March by the Cabinet Office and its minister, Chris Ward.

According to the government, the new approach is intended to respond to recent supply-chain fragility and strengthen domestic capacity in sectors it describes as vital to national security. The guidance is presented as the first clear framework for how departments can protect the UK’s economic security and build resilience in the four named sectors.

Additional measures in the package go beyond sector prioritisation. The government says departments will either use British steel or provide a justification if steel is sourced from overseas, linking the change to the UK Steel Strategy launched the previous week. Officials also say the reforms support the government’s Modern Industrial Strategy and follow the publication of the National Security Strategy.

Procurement reform is another part of the package. Under a new Public Interest Test, departments will be asked to assess whether outsourced service contracts worth more than £1 million could be delivered more effectively in-house. The government says the test will cover more than 95% of central government contracts by value.

Community impact is also being built into the contracting framework. Departments will be required to publish and report annually on a specific social value goal for contracts above £5 million, which the government says will cover more than 90% of central government contracts by value. Companies bidding for public contracts are also being encouraged to include commitments on local jobs, skills, and apprenticeships.

The press release also says a new suite of AI tools has been developed to streamline the commercial process. Contract terms will be simplified, and additional business information will be integrated into a central platform, with the stated aim of reducing repeated submissions by smaller businesses bidding for multiple contracts.

Chris Ward said: ‘This Government is backing British businesses and the working people who power them. These reforms are about using the full weight of Government spending to support British jobs, protect our national security and grow our economy.’ He added: ‘Whether you make steel in Scunthorpe, build ships on the Clyde or run a small tech firm in the Midlands, this Government is on your side.’

VTC expands AI training across all programmes in Hong Kong

The Vocational Training Council (VTC) has introduced an ‘AI for All’ strategy to integrate AI training across its programmes, aiming to support Hong Kong’s ambition to strengthen its innovation and technology sector.

The initiative aligns with broader policy priorities, including the ‘AI Plus’ approach outlined in national planning frameworks and Hong Kong’s budget, which emphasise integrating AI across industries while addressing a shortage of skilled professionals.

Under the ‘AI+Professional’ model, all Higher Diploma students are required to study IT modules covering prompt engineering, generative AI, and AI ethics and security, with training adapted to disciplines such as engineering, design, and information technology.

The council has also partnered with technology companies through memorandums of understanding. It provides ongoing training for employees in government and industry, while offering internal AI tools and a ‘Virtual Tutor’ platform to support teaching and learning.

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings highlight concerns that children can easily access such platforms because the safeguards in place are not robust enough to prevent them.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, or blurred content failed to prevent minors from accessing content.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalties to enforce compliance.
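For a sense of scale, the 6 percent ceiling can be illustrated with a simple calculation. The turnover figure below is hypothetical, chosen purely for the example, and does not refer to any company named in this article:

```python
def dsa_max_fine(global_annual_turnover: float, rate: float = 0.06) -> float:
    """Return the maximum fine under the DSA's 6% turnover ceiling.

    `rate` defaults to 0.06, i.e. the 6% of global annual turnover
    cited by the Commission; the turnover input is an assumption
    supplied by the caller, not a real figure.
    """
    return global_annual_turnover * rate


# Hypothetical platform with EUR 500 million in global annual turnover
example_turnover = 500_000_000
print(f"Maximum fine: EUR {dsa_max_fine(example_turnover):,.0f}")
```

Any actual penalty would be set within this ceiling, and periodic penalty payments to enforce compliance would come on top of it.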

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

EU opens probe into Snapchat child safety compliance

The European Commission has launched formal proceedings to assess whether Snapchat is complying with child protection obligations under the Digital Services Act. The investigation focuses on whether the platform ensures adequate safety, privacy, and security for minors.

Authorities suspect Snapchat may have failed to prevent exposure of children to grooming attempts, recruitment for criminal activity, and content linked to illegal goods such as drugs, vapes, and alcohol.

Concerns also include whether minors can be effectively prevented from accessing the platform or interacting with adults posing as peers.

The inquiry will examine age assurance methods, default account settings, reporting tools, and the spread of illegal content. Regulators argue that self-declared age may be insufficient, while default settings and recommendations may expose minors to risks.

The Commission will now gather further evidence through information requests, inspections, and interviews, and may take enforcement actions, including interim measures or penalties.

National regulators will support the investigation as part of coordinated oversight under the Digital Services Act.
