FCA outlines AI-driven plan to modernise financial regulation

The UK’s Financial Conduct Authority (FCA) has outlined plans to integrate AI and data-driven tools into its regulatory processes as part of its 2026/27 work programme to become a more efficient and effective regulator.

The programme includes developing an internal authorisation tool to speed up approvals and using generative AI to review documents and support supervision, while maintaining human decision-making at the core of regulatory actions.

The FCA said it will also test automated data-sharing in a sandbox environment, expand its Supercharged Sandbox for firms developing AI-based financial products, and invest in analytics to better identify risks and prioritise cases.

Measures to reduce burdens on firms include removing certain data reporting requirements, simplifying digital processes and improving authorisation timelines, alongside efforts to enhance firms’ experience through new tools and feedback mechanisms.

The regulator also plans to support economic growth and consumer protection by advancing measures such as regulating buy now pay later products, speeding up IPO processes, expanding international presence, and addressing emerging risks, including the use of general-purpose AI in financial decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

National security rules to prioritise UK contracts in AI, steel and shipbuilding

The UK government has announced new procurement guidance that treats shipbuilding, steel, AI, and energy infrastructure as critical to national security, directing departments to prioritise British businesses where necessary to protect it. The press release was published on 26 March by the Cabinet Office and its Minister, Chris Ward.

According to the government, the new approach is intended to respond to recent supply-chain fragility and strengthen domestic capacity in sectors it describes as vital to national security. The guidance is presented as the first clear framework for how departments can protect the UK’s economic security and build resilience in the four named sectors.

Additional measures in the package go beyond sector prioritisation. The government says departments will either use British steel or provide a justification if steel is sourced from overseas, linking the change to the UK Steel Strategy launched the previous week. Officials also say the reforms support the government’s Modern Industrial Strategy and follow the publication of the National Security Strategy.

Procurement reform is another part of the package. Under a new Public Interest Test, departments will be asked to assess whether outsourced service contracts worth more than £1 million could be delivered more effectively in-house. The government says the test will cover more than 95% of central government contracts by value.

Community impact is also being built into the contracting framework. Departments will be required to publish and report annually on a specific social value goal for contracts above £5 million, which the government says will cover more than 90% of central government contracts by value. Companies bidding for public contracts are also being encouraged to include commitments on local jobs, skills, and apprenticeships.

The press release also says a new suite of AI tools has been developed to streamline the commercial process. Contract terms will be simplified, and additional business information will be integrated into a central platform, with the stated aim of reducing repeated submissions by smaller businesses bidding for multiple contracts.

Chris Ward said: ‘This Government is backing British businesses and the working people who power them. These reforms are about using the full weight of Government spending to support British jobs, protect our national security and grow our economy.’ He added: ‘Whether you make steel in Scunthorpe, build ships on the Clyde or run a small tech firm in the Midlands, this Government is on your side.’

India AI governance faces court, privacy and cyber pressures

An opinion article published by the International Association of Privacy Professionals says India’s data protection and AI governance environment is facing growing pressure as compliance work around the Digital Personal Data Protection Act (DPDPA) unfolds, court challenges continue, and regulators widen oversight into new sectors. The piece, published on 26 March, is labelled as an opinion article and includes an editor’s note stating that the IAPP is policy neutral and publishes contributed opinion pieces to reflect a broad spectrum of views.

The article says several legal and regulatory developments are unfolding simultaneously. One example cited is a public interest litigation filed before India’s Supreme Court by journalist Geeta Seshu and the Software Freedom Law Centre, India, challenging parts of the DPDPA on constitutional and rights-related grounds. According to the piece, the Supreme Court later issued a notice to the Government of India on 12 March.

Concerns outlined in the article include the absence of journalistic exemptions, the lack of compensation for data breach victims, since penalties are paid to the government rather than to those affected, broad state powers to exempt departments from the law, and questions about the independence of the Data Protection Board given the government’s control over appointments. The article notes that similar petitions had already been filed, but says this was the first time the court issued notice to the government.

The article also turns to proceedings before the Kerala High Court involving privacy concerns about biometric and personal data collected through Digi Yatra, an airport passenger-processing system operated by a not-for-profit foundation in India. According to the piece, a public interest litigation filed by C R Neelakandan asked for a temporary restraint on the sharing of collected personal data and its commercial use without proper authorisation.

The article says the Kerala High Court issued notice to the Digi Yatra Foundation and sought clarification from the government on whether the Data Protection Board had been established to oversee such matters.

Alongside the litigation, the opinion piece points to government efforts to show legal preparedness for AI-related risks. It says Electronics and Information Technology Minister Ashwini Vaishnaw outlined existing safeguards during the ongoing parliamentary session, referring to the Information Technology Act, the DPDPA, and subordinate rules, along with published guidelines on AI governance, toy safety, harmful content, awareness-building measures, and cyber safety.

Cybersecurity developments also feature in the article. It says the Indian Computer Emergency Response Team, working with the SatCom Industry Association, issued guidelines on 26 February for the space sector, including satellite communications. According to the piece, the framework is intended to strengthen resilience in India’s space ecosystem.

It applies to covered entities, including government agencies, satellite service providers, ground station operators, terminal equipment vendors, and private space entities. Incident reporting within six hours and annual audits are among the measures described.

A further section of the article draws on Thales’ 2026 Data Threat Report. The piece says 64% of surveyed organisations in India identified AI-driven transformation as their biggest security risk, while 55% said they had to deal with reputational damage caused by AI-generated misinformation. It also says 65% reported deepfake-driven attacks, 35% had a complete view of their data, and 36% could fully classify their data.

EU demands stronger age verification from adult websites

The European Commission has preliminarily found that several major adult platforms, including Pornhub, Stripchat, XNXX, and XVideos, may be in breach of the Digital Services Act for failing to adequately protect minors from accessing harmful content.

These findings highlight concerns that children can easily access such platforms rather than being effectively prevented by robust safeguards.

The Commission’s investigation indicates that the platforms’ risk assessments were insufficient. In several cases, companies focused on reputational or business risks instead of fully addressing societal harms to minors.

Authorities also raised concerns that some platforms did not adequately consider input from civil society organisations specialising in children’s rights and age-assurance technologies, undermining the reliability of their evaluations.

Regarding risk mitigation, the Commission found that existing measures are ineffective. Simple self-declaration systems, in which users confirm they are over 18, were deemed inadequate, while additional features such as warnings, labels, or blurred previews failed to keep minors away from harmful material.

The Commission considers that stronger, privacy-preserving age-verification solutions are necessary to ensure meaningful protection of children’s rights and well-being online.

The companies involved now have the opportunity to respond and propose corrective measures, while consultations with the European Board for Digital Services continue.

If the preliminary findings are confirmed, the Commission may impose fines of up to 6 percent of global annual turnover, alongside periodic penalties to enforce compliance.

The case forms part of broader efforts to enforce the Digital Services Act and strengthen online safety across the EU, rather than relying on voluntary measures by platforms.

Europol warns legal gaps could weaken child abuse detection online

Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.

Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.

Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations, rather than relying solely on traditional policing methods.

Europol processed around 1.1 million CyberTips in a single year, many originating from the National Center for Missing & Exploited Children and shared across 24 European countries.

These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.

Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.

Such a development could undermine broader security efforts and weaken the protection of minors across the EU instead of strengthening safeguards.

The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.

Ensuring continued cooperation between platforms and authorities remains a key factor in safeguarding children and addressing the growing threat of online exploitation.

Mistral AI launches open-source voice model for enterprises

Mistral AI has introduced a new open-source text-to-speech model designed to power voice assistants and enterprise applications, rather than relying on proprietary solutions.

The model, named Voxtral TTS, marks the company’s entry into the competitive voice AI market alongside players such as OpenAI and ElevenLabs.

Voxtral TTS supports nine languages, including English, French, German, Spanish, and Arabic, allowing organisations to deploy multilingual voice systems across different markets.

The Mistral AI model is designed to run efficiently on devices such as smartphones, laptops, and even wearables, cutting infrastructure costs by reducing reliance on large-scale cloud systems.

It can replicate custom voices using only a few seconds of audio, capturing accents and speech patterns while maintaining consistency across languages.

The system is optimised for real-time performance, delivering rapid response times and enabling applications such as live translation, dubbing, and customer engagement tools.

Built on a compact architecture, it balances efficiency with high-quality output, aiming to produce natural-sounding speech instead of robotic voice synthesis. Earlier releases of transcription models suggest a broader strategy to develop a full suite of voice technologies.

Looking ahead, Mistral AI plans to expand towards end-to-end multimodal systems capable of handling audio, text, and image inputs within a single platform.

The company’s focus on open-source development and customisation is intended to attract enterprises seeking flexible solutions, positioning its technology as an alternative to closed ecosystems in the growing voice AI market.

HP reveals advanced AI devices and workflow tools at Imagine 2026

HP has announced a broad set of AI-focused products and workplace tools at HP Imagine 2026, presenting the update as part of a wider effort to simplify work across PCs, collaboration devices, security systems, and workflow platforms.

In a press release published on 24 March, HP said the new portfolio includes AI PCs, collaboration tools, workstations, printers, and software intended for hybrid work and on-device AI use.

HP says the update includes a new intelligence layer called HP IQ, which it describes as a system designed to orchestrate work across AI PCs, workplace devices, and meeting spaces through local AI and proximity-based connectivity.

The company also announced new EliteBook devices, workstation updates, and workflow automation changes through its Workforce Experience Platform and Build Workspace capabilities.

Several sections of the release focus on on-device AI. According to the company, HP IQ will debut on the next generation of EliteBook X G2 AI PCs and will support features such as prompt-based assistance, document analysis, note organisation, and meeting support.

The release also says NearSense is intended to help devices discover, connect, and collaborate, including through file sharing and one-click joining of conference room meetings.

Security is another central theme in the release. HP says it has introduced what it describes as the world’s first hardware solution to stop physical TPM bypass attacks, using a cryptographically bound link between the TPM and CPU.

The company also said it is expanding capabilities in HP Wolf Security and introducing HP Wolf Pro Security Next Gen Antivirus, as well as physical intrusion detection designed to protect memory if a device chassis is opened.

The announcement also includes new printers and document tools. HP says the LaserJet Pro 4000 and 4100 series, and the LaserJet Enterprise 5000 and 6000 series, are intended to support AI-powered document processing and quantum-resistant security. The release also highlights scanning shortcuts, editable OCR, reduced management time, and a design intended to improve serviceability.

For higher-performance users, the company says it is launching a new generation of Z workstations and mobile workstations. The release refers to systems such as the Z8 Fury, Max Side Panel for Z8 Fury and Z4 workstations, and updated mobile workstation models. Advanced AI development, visual effects, and simulation workloads are among the uses cited in the announcement.

Beyond enterprise work, the release also extends the same AI and device strategy into gaming. New HyperX and OMEN products are part of the announcement, including desktops, a gaming and modular ecosystem, and expanded AI game support through OMEN Gaming Hub and OMEN AI.

Oracle expands Oracle AI Database with new agentic AI tools

Oracle has announced new agentic AI capabilities for Oracle AI Database, presenting them as tools for building, deploying, and scaling production-grade AI applications that work with business data across operational databases and analytic lakehouses. The company says the new features are available across multicloud and on-premises environments.

According to Oracle, the announcement concerning Oracle AI Database centres on bringing AI and data together within the database so that agents can securely access real-time enterprise data where it resides. Oracle also says customers can choose AI models, agentic frameworks, open data formats, and deployment platforms, while Oracle Exadata users can use Exadata Powered AI Search for high-volume, multi-step agentic workloads.

Oracle’s new product set includes Oracle Autonomous AI Vector Database, which the company says is intended to simplify vector-based application development while preserving the broader database features of Oracle AI Database. Oracle says the service is available in limited capacity through the Oracle Cloud free tier or a low-cost developer tier, with one-click upgrade to full capabilities as requirements expand.

The company also introduced the Oracle AI Database Private Agent Factory, described as a no-code agent builder that can run in public clouds or on-premises without requiring customers to share data with third parties. Oracle says the service includes pre-built agents such as a Database Knowledge Agent, a Structured Data Analysis Agent, and a Deep Data Research Agent. Oracle Unified Memory Core was also announced as a way to store context for AI agents across vector, JSON, graph, relational, text, spatial, and columnar data, all in a single engine with consistent transactions and security.

A separate part of the announcement focuses on what Oracle describes as AI data risk reduction. Oracle says Deep Data Security applies end-user-specific access rules within the database, so that each user or AI agent acting on a user’s behalf can only see the data the user is allowed to access.

Besides the Oracle AI Database, Oracle also announced Private AI Services Container for customers that want to run private model instances without sharing data with third-party AI providers, including in air-gapped environments. Trusted Answer Search was presented as a method for providing answers based on previously created reports rather than relying directly on large language model responses.

Open standards and interoperability form another part of Oracle’s pitch. Oracle says Vectors on Ice adds native support for vector data stored in Apache Iceberg tables, enabling unified search across database and data-lake content. Oracle also announced an Autonomous AI Database MCP Server to allow external AI agents and MCP clients to access Autonomous AI Database capabilities without custom integration code or manual security administration.

Juan Loaiza, executive vice president of Oracle Database Technologies, said: ‘The next wave of enterprise AI will be defined by customers’ ability to use AI in business-critical production systems to safely deliver breakthrough innovations, insights, and productivity.’ He added: ‘With Oracle AI Database, customers don’t just store data, they activate it for AI. By architecting AI and data together, we help customers quickly build and manage agentic AI applications that can securely query and act on real enterprise data with stock exchange-level robustness in every leading cloud and on-premises.’

Steven Dickens, CEO and principal analyst at HyperFRAME Research, said: ‘In the era of agentic AI, a unified memory core is essential for agents to maintain context across diverse data types, such as vector, JSON, graph, columnar, spatial, text, and relational, without the latency or staleness of external syncing.’

Dickens added: ‘Only Oracle AI Database delivers this in a single, mission-critical engine with concurrent transactional and analytical processing, high availability, and ironclad security, enabling real-time reasoning over live business data. Organisations without this foundation will struggle with fragmented, unreliable agents, while those leveraging Oracle gain a decisive edge in scalable AI deployment.’

Open letter targets Meta ad practices

A coalition of civil society and industry groups has urged the European Commission to enforce the Digital Markets Act more rigorously, warning that major tech firms continue to exploit compliance gaps. The appeal centres on concerns over data use and online advertising practices.

Organisations including noyb, Check My Ads, and the Irish Council for Civil Liberties argue that current models fail to offer users genuine choice. Critics say consent mechanisms tied to payment or tracking undermine the intent of the EU digital rules.

The letter against Meta calls for clearer standards, including equal options for personalised and non-personalised advertising, as well as stricter limits on design practices that influence user decisions. Campaigners also want stronger coordination between regulators to ensure consistent enforcement.

The push reflects wider frustration among European organisations, with several recent letters demanding faster action against dominant platforms. Observers warn that delayed enforcement risks weakening the credibility of the EU digital regulation.
