Employee monitoring grows at Meta as AI overhaul accelerates

Meta has introduced a new internal tool to track employee activity, including keystrokes and mouse movements, as part of efforts to train its AI systems. The company says the data will help improve AI models designed to perform everyday digital tasks.

According to company statements, the tracking is limited to Meta-owned devices and applications, with safeguards in place to protect sensitive information. The initiative reflects a broader strategy to gather real-world usage data to enhance the performance and accuracy of AI tools.

The move has raised concerns among employees, some of whom view the monitoring as intrusive, particularly amid ongoing job cuts and reduced hiring. Reports indicate that Meta has significantly scaled back recruitment while increasing investment in AI development.

The company has committed substantial resources to AI, with plans to expand spending and accelerate model development. Internal tracking is positioned as part of a broader shift toward automation, as firms seek to reshape workflows and productivity through AI.

The development highlights growing tensions between AI innovation and workplace privacy. Increased reliance on employee data to train AI systems may reshape labour practices, raising questions about surveillance, consent, and the balance between technological advancement and workers’ rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Singapore proposes more tailored capital rules for crypto assets

Singapore’s central bank has launched a consultation on new capital rules for crypto-asset exposures, proposing a more differentiated approach than treating all blockchain-based assets as equally risky.

Under the draft framework, tokenised traditional assets and certain stablecoins would fall into a lower-risk category with lighter capital treatment. The proposal also leaves room for some assets on permissionless blockchains to qualify for that category if they meet principle-based risk conditions.

At the same time, the approach remains cautious. Singapore-incorporated banks would face strict exposure limits, including a cap of 2% of Tier 1 capital for qualifying crypto-asset exposures and a 5% Tier 1 capital limit for exposures that give rise to liabilities.
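As a rough illustration of how percentage-of-capital caps like these translate into absolute exposure limits, here is a minimal sketch; the function name and the example bank figure are illustrative assumptions, not part of MAS's draft framework:

```python
def crypto_exposure_caps(tier1_capital: float) -> dict:
    """Illustrative calculation of the proposed exposure caps,
    assuming they apply as simple percentages of Tier 1 capital
    (2% for qualifying crypto-asset exposures, 5% for exposures
    that give rise to liabilities)."""
    return {
        "qualifying_crypto_cap": 0.02 * tier1_capital,
        "liability_exposure_cap": 0.05 * tier1_capital,
    }

# Example: a hypothetical bank with S$10 billion of Tier 1 capital
caps = crypto_exposure_caps(10_000_000_000)
print(caps["qualifying_crypto_cap"])   # 200000000.0 (S$200 million)
print(caps["liability_exposure_cap"])  # 500000000.0 (S$500 million)
```

The point of the sketch is simply that the caps scale with a bank's capital base, so larger banks get proportionally larger, but still tightly bounded, room for crypto-asset exposure.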

The consultation suggests Singapore is not trying to open the door widely to bank crypto activity, but rather to create a more workable prudential framework for selected forms of tokenised finance. That would allow regulators to distinguish between higher-risk crypto exposures and assets that more closely resemble traditional financial instruments in tokenised form.

The move is significant because it points to a more tailored interpretation of international prudential standards rather than a one-size-fits-all approach. If adopted, it could reduce uncertainty for banks seeking to engage with tokenised assets while preserving tight capital safeguards around the sector.

More broadly, the proposal reflects a cautious effort to integrate parts of the crypto and tokenisation market into mainstream finance without weakening the core logic of bank capital regulation. In that sense, the consultation is less a loosening of rules than an attempt to apply them with greater precision.

UK’s ICO outlines personal data use in elections

The UK Information Commissioner’s Office has issued guidance on the use of personal data during the upcoming local elections. The publication aims to inform voters about their rights and expectations.

According to the Office, personal data plays a central role in political campaigning, helping parties communicate with voters and understand public concerns. The regulator emphasises that trust depends on lawful and transparent data use.

The guidance states that voters should expect clear explanations of how their data is used, including when profiling or targeted advertising is involved. Political organisations must provide accessible privacy information and follow data protection rules.

The Information Commissioner’s Office also highlights that individuals have the right to question or object to data use, reinforcing accountability during election campaigns in the UK.

India forms expert committee to support AI governance framework

India’s Ministry of Electronics and Information Technology has constituted a Technology and Policy Expert Committee to support the country’s AI governance architecture. The committee will advise the AI Governance and Economic Group (AIGEG) on policy design, regulatory measures, and international engagement.

The committee is chaired by the ministry’s Secretary and includes experts from academia, industry, and digital policy. Its mandate is to provide informed input grounded in technological developments, regulatory approaches, and global practices.

AIGEG will set strategic direction and coordinate policy across government. The expert committee will translate technical and policy issues into actionable insights for decision-making.

The framework aims to ensure a dynamic and adaptive approach to AI governance. It also seeks to align strategic, technical, and policy considerations with India’s social and economic context.

European firms launch Disaster Recovery Pack for tech independence

A group of European technology companies (Cubbit, SUSE, Elemento, and StorPool Storage) has launched a joint ‘Disaster Recovery Pack’ to support the continuity of organisations’ data and operations in the event of disruptions affecting external dependencies.

The solution was presented on 15 April 2026 at the European Data Summit organised by the Konrad-Adenauer-Foundation in Berlin. It is described as a system intended to maintain critical workloads even in scenarios involving disruptions associated with foreign technology providers.

The Disaster Recovery Pack integrates multiple components of the cloud software stack into a single deployable system. These components include storage, compute, orchestration, networking, identity, observability, and management. By combining these elements, the solution aims to reduce fragmentation and facilitate the deployment of a unified technology stack.

According to the providers, the system is designed to allow organisations to transfer critical workloads to a European-based infrastructure without major disruption. It can be used to identify essential services, establish and test recovery setups, and extend these configurations to additional workloads over time.

The solution is positioned to address operational requirements for disaster recovery while also supporting a broader transition to infrastructure based on European providers. It has already been deployed by an IT service provider in Italy and is expected to be adopted by additional partners.

Why does it matter?

The initiative is linked to efforts to reduce reliance on non-European cloud infrastructure and to strengthen the resilience of digital operations. In a statement, Sebastiano Toffaletti, Secretary General of the European DIGITAL SME Alliance, said that European companies are capable of developing and integrating such solutions, and highlighted the need for policy measures that support their adoption, including considerations related to public procurement and definitions of sovereign cloud within future policy frameworks.

Kazakhstan introduces mandatory audits for high-risk AI systems

Kazakhstan has introduced new rules requiring audits of high-risk AI systems before they are included in official government lists. The framework sets out procedures for identifying and publishing trusted AI systems across sectors.

Sectoral authorities will compile and update lists of high-risk AI systems based on applications submitted by system owners. These lists will be published on official government websites to promote transparency and trust.

Applicants must submit formal requests, documents confirming intellectual property rights and a positive audit conclusion. Authorities will review submissions within ten working days, assessing system purpose, functionality and required documentation.

Systems that meet all criteria will be added to the list and published within five working days. If inconsistencies are identified, applicants will be notified and may resubmit documents for review within a shortened timeframe.

Updated versions of the lists will be released as revisions occur, ensuring ongoing oversight of AI systems. The measures aim to support structured monitoring and responsible use of AI technologies.

Minnesota weighs AI free speech limits

The National Constitution Center reports that Minnesota lawmakers are considering a constitutional amendment to exclude AI systems from free speech protections. The proposal would clarify that such rights apply to people, not machines.

According to the National Constitution Center, the amendment would add language stating that AI does not have the right to speak, write or publish sentiments freely. Human free speech protections would remain unchanged under the proposal.

The article highlights ongoing debate around the measure, with supporters arguing it distinguishes human rights from technological tools, while critics warn it could affect how AI-generated content is treated under the law.

The National Constitution Center notes that the proposal reflects broader tensions over how legal systems should address AI and free expression as the issue develops in Minnesota.

International Organization for Migration conference in Ankara addresses migrant digital identity gaps

More than 70 global leaders and experts gathered in Ankara on 14–15 April to address gaps in legal identity for migrants, a key barrier to accessing services and protection.

The conference was convened by the International Organization for Migration (IOM) and brought together governments, international organisations, academia, and the private sector to discuss practical solutions.

Legal identity was highlighted as a fundamental human right and a critical enabler of safe and regular migration, yet millions of migrants still lack recognised documentation. Participants examined how digital identity systems, including biometrics and mobile tools, could improve access while ensuring security, inclusion, and the protection of rights.

Discussions focused on strengthening migration governance through scalable and context-specific digital identity solutions. Attention also turned to implementation challenges, including keeping systems inclusive and secure for displaced populations affected by conflict or administrative barriers.

The COMPASS conference also showcased private sector technologies and enabled countries from Africa, the Middle East, and Europe to share experiences. Outcomes are expected to inform best practices and support the development of more resilient and inclusive identity systems for migrants.

UK strengthens AI healthcare governance to ensure safety, equity and system-wide evaluation

The Medicines and Healthcare products Regulatory Agency in the UK has outlined priorities for regulating AI in healthcare, focusing on safety, effectiveness and public trust.

The agency’s approach includes strengthening pre-market evaluation and post-market surveillance, particularly for adaptive systems operating in real-world settings.

Contributions from the Health Foundation and the National Commission for the Regulation of AI in Healthcare highlight the need for broader governance frameworks that extend beyond technical validation to include implementation challenges, system-wide impacts and the role of human oversight in clinical environments.

The analysis emphasises that AI in healthcare operates as a socio-technical system, requiring assessment of usability, fairness and real-world outcomes. It also identifies gaps in current evaluation practices, particularly in local service assessments, which may lack consistency and reliability.

Strengthening evaluation standards, improving coordination and addressing risks such as bias and inequity are presented as central to enabling safe and scalable adoption.

The UK framework aims to balance innovation with accountability while ensuring equitable access to healthcare technologies.

Brazil links AI and technical standards in competitiveness push

Brazil’s Ministry of Development, Industry and Foreign Trade said the integration of AI and technical standardisation should be treated as a strategic issue for the country’s competitiveness.

The position was presented during a meeting organised by the Ministry of Science, Technology, and Innovation, which brought together public bodies and specialists to discuss AI governance and its effects on the productive sector and on the state.

Pedro Ivo, secretary for Competitiveness and Regulatory Policy at the Ministry of Development, Industry and Foreign Trade, said technical standards can help reduce costs, facilitate trade, and improve competitiveness. He also said linking that process to AI could support a more predictable regulatory environment.

According to the ministry, the discussion also highlighted the international dimension of the issue and Brazil’s efforts to expand its role in shaping AI-related standards and guidelines. The programme included discussions of global AI impacts, regulatory challenges, and the role of international organisations in technical regulation for information and communication technologies.

Tiago Munk, the ministry’s coordinator-general for quality infrastructure, said technical standards can play a central role in AI governance by defining criteria, requirements, and good practices for systems, products, and services. He added that Brazil should take an active role in developing international standards.

The meeting was presented as part of a broader government effort to strengthen coordination on AI, with attention to policy direction, institutional coordination, and the country’s position in the digital economy.
