OpenAI acquires Astral to expand Codex developer tools

OpenAI is acquiring Astral as developer tooling becomes a bigger focus, in a deal aimed at boosting the capabilities of its Codex platform. The move is expected to bring Astral's widely used open-source Python tools into the ecosystem, including uv, Ruff, and ty, which are already embedded in millions of developer workflows.

The acquisition is intended to strengthen Codex’s role across the full software development lifecycle, moving beyond code generation toward more integrated and autonomous systems.

OpenAI has positioned Codex as a system that can plan changes, modify codebases, run tools, and verify results, with usage already growing rapidly. The company reported a threefold increase in users and a fivefold increase in activity this year, bringing the total to more than 2 million weekly active users.

Astral’s tools are seen as a natural fit for this vision, given their role in managing dependencies, enforcing code quality, and improving reliability in Python-based development. Integrating these tools could allow AI agents to interact more directly with the environments developers already use.

The acquisition also reinforces the importance of Python as a core language in modern software development, particularly across AI, data science, and backend systems. OpenAI said it plans to continue supporting Astral’s open-source projects while exploring deeper integration with Codex.

The deal remains subject to regulatory approval, and both companies will operate independently until completion. Once finalised, Astral’s team is expected to join OpenAI’s Codex division as the company continues building AI systems designed to collaborate across the development workflow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU digital wallet nears rollout

A major industry-wide interoperability exercise for the European Digital Identity Wallet has marked a significant step towards deployment, with systems tested under real conditions to ensure compatibility across providers.

The initiative forms part of the EU’s plan to provide citizens with a secure digital wallet for identification and online services. The system will allow users to store identity data and access services, including electronic signatures.

Results showed that most test scenarios were successfully completed, confirming that independent systems can work together effectively. The exercise also highlighted areas requiring further refinement ahead of wider implementation.

EU officials and industry leaders said the progress supports the development of a unified digital ecosystem. The wallet is expected to simplify everyday services while strengthening security and trust in digital identity solutions.


Malaysia tightens rules on data centres

Malaysia has quietly restricted new data centre approvals to projects linked to AI, signalling a strategic shift in its digital economy. Authorities confirmed that approvals for non-AI projects have been halted for nearly two years.

The policy reflects mounting pressure on energy and water resources as demand for data centres accelerates. Officials aim to ensure infrastructure supports high-value AI projects rather than lower-impact investments.

Rapid growth has positioned Malaysia as a key regional hub, attracting major global technology firms. Concerns remain over whether the country risks hosting infrastructure without building local innovation capacity.

Leaders say future efforts will focus on balancing investment with domestic benefits and energy sustainability. Plans include expanding power supply and strengthening national AI capabilities to secure long-term gains.


Amazon upgrades Alexa with AI features

Amazon is rolling out an AI upgrade to its Alexa assistant, aiming to make interactions more conversational and responsive. The new version is designed to follow conversational context and respond more naturally.

The update comes as Amazon seeks to compete with advanced AI chatbots that have gained popularity in recent years. Critics have argued that smart speakers have fallen behind newer AI tools.

Users in the UK are expected to notice more personalised and proactive responses from the upgraded assistant, drawing on customers' personal data. The service will be included with Prime subscriptions or offered as a standalone monthly option.

Analysts say the update could help Amazon gather even more user data and improve engagement by picking up on customers’ habits through conversations. However, questions remain about whether the changes will drive revenue or revive interest in smart speakers.


AI safety push sees Anthropic and OpenAI recruit explosives specialists

Anthropic and OpenAI are recruiting chemical and explosives experts to strengthen safeguards for their AI systems, reflecting growing concern about the potential misuse of advanced models.

Anthropic is seeking a policy specialist to design and monitor guardrails governing how its systems respond to prompts involving chemical weapons and explosives. The role includes assessing high-risk scenarios and responding to potential escalation signals in real time.

OpenAI is expanding its Preparedness team, hiring researchers and a threat modeller to identify and forecast risks linked to frontier AI systems. The positions focus on evaluating catastrophic risks and aligning technical, policy, and governance responses.

The recruitment drive comes amid heightened scrutiny of AI safety and national security implications. Anthropic is currently challenging a US government designation that labels it a supply-chain risk, while tensions have emerged over restrictions on the military use of AI systems.

At the same time, OpenAI has secured agreements to deploy its technology in classified environments under defined constraints. The parallel developments highlight how AI firms are balancing commercial expansion with increasing pressure to implement robust safety controls.


NVIDIA Isaac powers generalist-specialist robots at scale

A new class of robots is emerging, combining broad adaptability with task-specific precision as developers move toward generalist-specialist systems. Within this shift, NVIDIA Isaac is enabling integrated workflows that connect data generation, simulation, training, and deployment across robotics pipelines.

NVIDIA Isaac unifies robotics development across these stages, integrating cloud-to-robot workflows that allow developers to build, test, and scale systems more efficiently across both real and simulated environments.

A key driver is the growing reliance on synthetic data, which allows developers to simulate rare or hazardous scenarios that are difficult to capture in the real world. NVIDIA Isaac supports this through tools such as Omniverse-based simulation and teleoperation pipelines, helping convert real-world signals into scalable training datasets and accelerating development cycles.

The platform also enables advanced robot training using reasoning vision-language-action models, which allow machines to perceive, interpret, and act across complex environments. With frameworks like Isaac Lab and integrated physics engines, NVIDIA Isaac enables robots to train across thousands of parallel simulations, significantly reducing time, cost, and risk compared to real-world training.

Once trained, NVIDIA Isaac supports deployment across edge AI systems, including the Jetson platform, while maintaining consistency between simulation and real-world performance. Combined with modular workflows and open frameworks, the platform is positioning itself as a core foundation for scalable, next-generation robotics.


Parents underestimate how teenagers use AI in daily life

Parents often believe they understand how their children use AI tools in daily life, but recent studies suggest a clear and growing disconnect. Teenagers are using AI more frequently and in more complex ways than most adults realise.

Research indicates that 64% of teens use AI, while only 51% of parents think their children do. A large share of families have never discussed AI, leaving teenagers to navigate its role without guidance.

Teenagers commonly use AI for schoolwork, research and entertainment as part of their routine activities. However, a notable number also rely on it for advice, conversation and even emotional support in personal situations.

Experts warn that this awareness gap can increase risks linked to misuse and emotional dependence on AI tools. Limited parental understanding means many overlook how strongly AI is influencing behaviour and decision-making.

Despite these concerns, many teenagers feel confident using AI and see it as a helpful tool. Specialists emphasise that open conversations are essential to ensure more responsible and balanced use at home.


AI standards and regulation struggle to keep pace with global innovation

Global efforts to regulate AI are accelerating, but innovation continues to outpace formal rules. Policymakers and industry leaders are increasingly turning to standards to help bridge compliance gaps.

At the AI Standards Hub Global Summit, experts highlighted how technical standards support responsible AI development. These tools are seen as essential for scaling AI safely while regulatory frameworks continue to evolve.

Differences across regions remain significant, with the EU relying on formal regulation and the US leaning on flexible standards. This fragmented landscape is raising concerns over compliance costs and barriers to cross-border deployment.

Experts stress that standards must evolve alongside AI while aligning with global frameworks and enforcement efforts. Without coordination, inconsistencies could limit innovation and weaken trust in AI systems.

Calls are growing for shared definitions, measurable benchmarks and stronger international cooperation. Stakeholders argue that aligning standards with regulation will be critical for future AI governance.


Quantum cryptography pioneers win top computing prize

Two researchers have been awarded the Turing Award for pioneering work in quantum cryptography. Their research laid the foundations for a new form of secure communication based on quantum physics.

The method, developed in the 1980s, enables encryption keys that cannot be copied without detection. Any attempt to intercept the data alters its physical properties, revealing interference.

Experts say the approach could become vital as quantum computing advances. Traditional encryption methods may become vulnerable as computing power increases.

The award highlights the growing importance of secure data transmission in a digital world. Researchers believe quantum cryptography could play a central role in encrypting and protecting future communications.


Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.
