OpenAI acquires Astral to expand Codex developer tools

OpenAI is acquiring Astral as developer tooling becomes a bigger focus for the company, with the deal aimed at boosting the capabilities of its Codex platform. The move is expected to bring Astral’s widely used open-source Python tools into the Codex ecosystem, including uv, Ruff, and ty, which are already embedded in millions of developer workflows.

The acquisition is intended to strengthen Codex’s role across the full software development lifecycle, moving beyond code generation toward more integrated and autonomous systems.

The company has positioned Codex as a system that can plan changes, modify codebases, run tools, and verify results, with usage already growing rapidly. OpenAI reported a threefold increase in users and a fivefold increase in activity this year, bringing its total to more than 2 million weekly active users.

Astral’s tools are seen as a natural fit for this vision, given their role in managing dependencies, enforcing code quality, and improving reliability in Python-based development. Integrating these tools could allow AI agents to interact more directly with the environments developers already use.

The acquisition also reinforces the importance of Python as a core language in modern software development, particularly across AI, data science, and backend systems. OpenAI said it plans to continue supporting Astral’s open-source projects while exploring deeper integration with Codex.

The deal remains subject to regulatory approval, and both companies will operate independently until completion. Once finalised, Astral’s team is expected to join OpenAI’s Codex division as the company continues building AI systems designed to collaborate across the development workflow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Firefox adds VPN and AI tools

Mozilla is preparing a major update to its Firefox browser, introducing a built-in VPN and new AI-powered tools. The company says the changes aim to strengthen privacy and give users greater control over browsing.

The integrated VPN will hide the user’s location and IP address while offering a limited monthly data allowance in selected regions. The feature replaces Mozilla’s previously separate paid VPN service, bringing it directly into the browser.

New AI tools will support tasks such as summarising content and comparing products without leaving a web page. Additional features include split-screen browsing and tools to organise notes across tabs.

The update also introduces redesigned settings and a refreshed interface to improve usability. Mozilla says the changes are intended to create a more personalised and modern browsing experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agent causes internal data leak at Meta

Meta recently confirmed that an AI agent inadvertently exposed sensitive company and user data to some employees. The leak occurred when an engineer acted on a forum suggestion made by the AI agent, leaving the data exposed for about two hours.

Meta stated that no user data was mishandled and emphasised that human errors could cause similar issues.

The incident reflects broader challenges in deploying agentic AI tools within major tech companies. Amazon has faced similar issues, with internal AI tools causing outages and operational errors, highlighting the risks of rapidly integrating AI into critical workflows.

Experts describe these deployments as experimental, with companies testing AI at scale without fully assessing potential risks.

Security specialists note that AI agents lack the contextual awareness that human engineers accumulate over years of experience. Lacking long-term operational knowledge, AI can make decisions that compromise security, a factor in the Meta breach.

Analysts warn that such errors are likely to recur as AI adoption accelerates.

The episode comes amid growing attention on agentic AI’s potential to disrupt workflows, affect productivity, and introduce new vulnerabilities. Industry observers caution that AI tools must be carefully monitored and accompanied by robust safeguards to prevent future incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bulgaria becomes first country to deploy a national AI model across a tax authority

Bulgaria’s National Revenue Agency (NRA) has begun rolling out an AI system developed by INSAIT, the Institute for Computer Science, Artificial Intelligence and Technology at Sofia University, across all of its organisational structures. The rollout makes the NRA the first large-scale public administrative body in the country to deploy the BgGPT national language model.

Following a successful pilot phase, the system is now in expanded use across the NRA’s central office and seven territorial directorates.

The AI system enables staff to conduct general and specialised searches related to tax and social security legislation, generating instant responses to improve service quality for citizens and businesses.

Crucially, it runs exclusively on open-weight models and operates on proprietary hardware, an approach specifically designed to prevent data leakage and protect privacy, two of the central concerns when integrating AI into government institutions.

The next phase of the project will see the system adapted for specialised use cases and integrated into internal processes alongside national integrator ‘Information Services’, with the goal of reaching daily use by more than 7,000 NRA employees.

INSAIT describes the initiative as a concrete contribution to European AI sovereignty, with Bulgaria combining nationally developed language models and locally controlled hardware to reduce dependence on commercial AI providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mastercard expands AI strategy with new payments model

Mastercard has introduced a generative AI foundation model trained on billions of anonymised transactions. The model is designed as a backend system to power insights across payments and commerce services.

The company plans to extend AI use beyond fraud detection into cybersecurity, loyalty programmes and small-business tools. The model is being developed with support from Nvidia and Databricks technologies.

Earlier AI tools focused on fraud detection, significantly improving accuracy and reducing false positives. The new model marks a shift towards a broader infrastructure approach across multiple products.

This move aligns with Mastercard’s growing reliance on value-added services, which generated over $13 billion in revenue. These services include security, analytics and digital payment solutions beyond the core network.

Competitors such as Visa and PayPal are also expanding AI-driven commerce platforms. The race is intensifying as firms build integrated systems for payments, automation and intelligent services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Visa launches Agentic Ready programme to prepare for AI-driven payments

Visa has launched Agentic Ready, a global programme preparing the payments ecosystem for AI agents to initiate transactions for consumers. The programme builds on Visa Intelligent Commerce, the company’s framework for secure, AI-driven payment experiences.

The first phase, launching in Europe, including the UK, focuses on issuer readiness. Participating banks and financial institutions can test and validate agent-initiated transactions in controlled production environments, ensuring they remain secure, reliable, and scalable.

Visa’s trust layer integrates tokenisation, identity verification, risk controls, and biometric authentication to maintain consumer consent and protection throughout transactions.
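Visa’s trust layer itself is proprietary and not publicly specified. Purely as an illustration of the tokenisation idea it builds on, here is a minimal sketch in which an AI agent only ever handles an opaque token while the real card number stays in a vault; all names and the vault design are hypothetical, not Visa’s actual system:

```python
import secrets

# Hypothetical vault mapping opaque tokens back to real card numbers (PANs).
# In a real payment network this lives with the provider, never with the agent.
_vault: dict[str, str] = {}

def tokenize(pan: str) -> str:
    """Store the PAN in the vault and hand back an opaque, single-purpose token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Resolve a token back to its PAN; only the network side can do this."""
    return _vault[token]

# An agent initiating a transaction sees only the token, never the card number.
token = tokenize("4111111111111111")
print(token != "4111111111111111", detokenize(token) == "4111111111111111")
```

The point of the pattern is that consent and risk checks can be attached to the token itself, so a compromised or misbehaving agent never holds reusable card credentials.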

Controlled testing with selected merchants allows issuers to gain practical experience of agentic commerce in real-world settings. Early participants, including Barclays, HSBC UK, Revolut, and Banco Santander, help Visa test and refine safe AI-driven payments across channels.

The programme advances Visa’s vision of AI-driven commerce, enabling flexible payments while keeping consumers in control. Expansion beyond Europe is planned, leveraging lessons from the initial rollout to accelerate agentic commerce globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube enlists users to rate videos as AI slop in content quality push

YouTube has introduced a new pop-up survey asking viewers to rate whether videos feel like ‘AI slop’, with users able to score content on a scale from ‘not at all’ to ‘extremely’ sloppy.

The feature began appearing on 17 March 2026 and marks a shift in approach, with YouTube now enlisting its audience directly to help identify low-quality, AI-generated content.

The move adds a third layer of detection on top of YouTube’s existing automated and human review systems, both of which have struggled to keep pace with the flood of AI-generated uploads.

Research found that roughly 21% of the first 500 videos recommended to a brand-new YouTube account were identified as AI slop, with a further 33% falling into a broader category of repetitive, low-substance content.
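Taken at face value, those shares translate into rough counts for the sample; a back-of-the-envelope check, assuming both percentages refer to the same 500 recommended videos:

```python
# Back-of-the-envelope counts implied by the study's reported shares,
# assuming both percentages apply to the same 500-video sample.
sample_size = 500
slop_share = 0.21            # identified as AI slop
low_substance_share = 0.33   # repetitive, low-substance content

slop_count = round(sample_size * slop_share)                    # 105 videos
low_substance_count = round(sample_size * low_substance_share)  # 165 videos
print(slop_count, low_substance_count)
```

On those assumptions, over half of the recommendations fall into one of the two low-quality categories.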

Combating this content was named a 2026 priority by YouTube CEO Neal Mohan in his annual letter.

The survey has not been without controversy.

Critics on social media have pointed out that viewer-labelled ‘slop’ data could be fed into Google’s Veo video generation models, potentially training future AI to avoid the very patterns humans flag as low quality, raising questions about whether YouTube is crowdsourcing content moderation or, inadvertently, AI improvement.

YouTube has not clarified how the feedback data will be used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Smart Ship Hub calls for careful approach to AI cameras on vessels

Digital vessel performance platform Smart Ship Hub is calling on the maritime industry to embrace AI-enabled camera systems as proactive safety tools, while insisting that their deployment must be underpinned by strong governance and genuine respect for seafarers’ working and living environments.

The company warns that, introduced without clarity or context, the technology risks being perceived as surveillance rather than safety enhancement.

Captain Nagpaul, Voyage Performance Specialist at Smart Ship Hub, outlined a broad range of operational applications for AI cameras at sea, from early fire detection and cargo monitoring during high-risk activities such as mooring operations, to improved situational awareness in areas of poor visibility and high vessel traffic.

The systems can also generate time-stamped visual records to support incident investigations and enable shore-based specialists to provide remote technical support through secure mobile applications.

Smart Ship Hub CEO Joy Basu argued that resisting the technology is not a viable strategy for the sector, noting that crew acceptance improves when workers see tangible benefits such as reduced workload and safer daily operations.

He described AI camera systems as powerful tools that enhance safety and strengthen the connection between ship and shore, but stressed they are not substitutes for professional experience and judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO promotes safe AI use and gender equality in Caribbean workshop

A regional workshop in Kingston has been organised by UNESCO to explore the relationship between AI, gender equality and online safety, reflecting wider efforts to support inclusive digital governance across the Caribbean.

Discussions examined the impact of technology-facilitated gender-based violence, including harassment, impersonation and image-based abuse, which continue to affect women and girls disproportionately.

Generative AI was presented as both an opportunity and a risk, with concerns linked to bias, deepfakes, misinformation and non-consensual content.

More than 50 participants from government, civil society and youth organisations engaged in practical sessions aimed at strengthening awareness and digital skills. A participatory approach encouraged peer learning and critical thinking, aligning with UNESCO’s ethical AI principles.

‘Technology reflects the hands that build it and the society that feeds it data. If we are not careful, AI will not just mirror our existing inequalities; it will magnify them,’ said the Honourable Olivia Grange, Minister of Culture, Gender, Entertainment and Sport of Jamaica.

‘The pursuit of equality must extend into every space where women live, work, and where they connect and express themselves – including the digital world,’ said Eric Falt, Regional Director and Representative of UNESCO.

The initiative forms part of broader efforts to ensure that digital transformation supports inclusion rather than reinforcing existing disparities, while equipping stakeholders with tools for safe and responsible AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok disinformation study raises concerns over AI content and EU regulation

A new study by Science Feedback indicates that TikTok has a higher proportion of misleading content than other major platforms operating in the EU.

The analysis covered France, Poland, Slovakia and Spain, assessing content across multiple thematic areas including health, politics and climate.

Findings suggest that approximately one in four posts on TikTok contained misleading elements, placing the platform ahead of competitors such as Facebook, YouTube and X. Health-related narratives were the most prominent category, reflecting broader patterns observed across digital ecosystems.

Researchers describe disinformation as a persistent feature embedded within platform structures rather than an isolated occurrence.

The study also highlights a growing presence of AI-generated content, particularly in video formats, where synthetic material accounted for a significant share of misleading posts. Despite existing platform policies, most identified content lacked clear labelling.

The regulatory context remains under development.

While the Digital Services Act integrates voluntary commitments from the EU disinformation code, it does not impose mandatory requirements for identifying AI-generated material.

Ongoing debates therefore focus on transparency, accountability and the evolving responsibilities of digital platforms within the European information environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!