Oracle expands Oracle AI Database with new agentic AI tools

Oracle has announced new agentic AI capabilities for Oracle AI Database, presenting them as tools for building, deploying, and scaling production-grade AI applications that work with business data across operational databases and analytic lakehouses. The company says the new features are available across multicloud and on-premises environments.

According to Oracle, the announcement concerning Oracle AI Database centres on bringing AI and data together within the database so that agents can securely access real-time enterprise data where it resides. Oracle also says customers can choose AI models, agentic frameworks, open data formats, and deployment platforms, while Oracle Exadata users can use Exadata Powered AI Search for high-volume, multi-step agentic workloads.

Oracle’s new product set includes Oracle Autonomous AI Vector Database, which the company says is intended to simplify vector-based application development while preserving the broader database features of Oracle AI Database. Oracle says the service is available in limited capacity through the Oracle Cloud free tier or a low-cost developer tier, with one-click upgrade to full capabilities as requirements expand.

The company also introduced the Oracle AI Database Private Agent Factory, described as a no-code agent builder that can run in public clouds or on-premises without requiring customers to share data with third parties. Oracle says the service includes pre-built agents such as a Database Knowledge Agent, a Structured Data Analysis Agent, and a Deep Data Research Agent. Oracle Unified Memory Core was also announced as a way to store context for AI agents across vector, JSON, graph, relational, text, spatial, and columnar data, all in a single engine with consistent transactions and security.

A separate part of the announcement focuses on what Oracle describes as AI data risk reduction. Oracle says Deep Data Security applies end-user-specific access rules within the database, so that each user or AI agent acting on a user’s behalf can only see the data the user is allowed to access.

Beyond Oracle AI Database itself, Oracle announced Private AI Services Container for customers who want to run private model instances without sharing data with third-party AI providers, including in air-gapped environments. Trusted Answer Search was presented as a method for providing answers based on previously created reports rather than relying directly on large language model responses.

Open standards and interoperability form another part of Oracle’s pitch. Oracle says Vectors on Ice adds native support for vector data stored in Apache Iceberg tables, enabling unified search across database and data-lake content. Oracle also announced an Autonomous AI Database MCP Server to allow external AI agents and MCP clients to access Autonomous AI Database capabilities without custom integration code or manual security administration.

Juan Loaiza, executive vice president of Oracle Database Technologies, said: ‘The next wave of enterprise AI will be defined by customers’ ability to use AI in business-critical production systems to safely deliver breakthrough innovations, insights, and productivity.’ He added: ‘With Oracle AI Database, customers don’t just store data, they activate it for AI. By architecting AI and data together, we help customers quickly build and manage agentic AI applications that can securely query and act on real enterprise data with stock exchange-level robustness in every leading cloud and on-premises.’

Steven Dickens, CEO and principal analyst at HyperFRAME Research, said: ‘In the era of agentic AI, a unified memory core is essential for agents to maintain context across diverse data types, such as vector, JSON, graph, columnar, spatial, text, and relational, without the latency or staleness of external syncing.’

Dickens added: ‘Only Oracle AI Database delivers this in a single, mission-critical engine with concurrent transactional and analytical processing, high availability, and ironclad security, enabling real-time reasoning over live business data. Organisations without this foundation will struggle with fragmented, unreliable agents, while those leveraging Oracle gain a decisive edge in scalable AI deployment.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Open letter targets Meta ad practices

A coalition of civil society and industry groups has urged the European Commission to enforce the Digital Markets Act more rigorously, warning that major tech firms continue to exploit compliance gaps. The appeal centres on concerns over data use and online advertising practices.

Organisations including noyb, Check My Ads, and the Irish Council for Civil Liberties argue that current models fail to offer users genuine choice. Critics say consent mechanisms tied to payment or tracking undermine the intent of the EU digital rules.

The letter against Meta calls for clearer standards, including equal options for personalised and non-personalised advertising, as well as stricter limits on design practices that influence user decisions. Campaigners also want stronger coordination between regulators to ensure consistent enforcement.

The push reflects wider frustration among European organisations, with several recent letters demanding faster action against dominant platforms. Observers warn that delayed enforcement risks weakening the credibility of EU digital regulation.


UK tightens sanctions on crypto-linked scam networks

The UK has stepped up its crackdown by sanctioning a crypto marketplace tied to major scam centres in Southeast Asia. Measures aim to disrupt the sale of stolen personal data and limit the financial infrastructure enabling online fraud targeting British victims.

Authorities also targeted operators behind ‘#8 Park’, Cambodia’s largest scam compound, believed to house up to 20,000 trafficked workers. Many individuals forced to run scams were lured with false job offers before being coerced into fraudulent activity under severe threats.

Sanctions extend to key entities and individuals connected to the wider network, including those facilitating crypto laundering and cross-border financial flows. Earlier UK action froze over £1 billion in assets and helped shut down platforms used for laundering illicit funds.

Officials said the measures will isolate these operations from the crypto ecosystem and freeze UK-based assets. The action comes ahead of an international summit in June aimed at strengthening global coordination against illicit finance and digital fraud.


Microsoft launches nonprofit AI training and fellowship initiative

Microsoft has announced a new programme called Microsoft Elevate for Changemakers, aimed at helping nonprofit leaders build AI skills, credentials, and organisational capacity. In a post published on 25 March, Microsoft said the initiative was introduced alongside the company’s Global Nonprofit Leaders Summit, which it says brought together more than 1,500 nonprofit leaders from around the world.

The company says the programme is designed to help nonprofit organisations adopt AI in ways that reflect their missions and the communities they serve. According to the company, the new initiative includes an AI for Nonprofits credential developed with LinkedIn and NetHope, live and on-demand training on topics such as Copilot, change management, and responsible AI governance, and a Changemaker Fellowship for nonprofit professionals working on AI-related projects.

The AI for Nonprofits credential builds on work across the nonprofit sector, with participants receiving a LinkedIn professional certificate. Microsoft also says the fellowship will provide resources, investment, and expert guidance, while connecting participants to a global cohort and a wider network of nonprofit AI leaders. According to the post, the fellowship is supported by Microsoft and launch partners EY and Caribou.

Microsoft places the announcement within a broader argument about how AI is affecting labour, communities, and service delivery. The company says nonprofits are often closely connected to people seeking new skills, employment pathways, and community support, and that such organisations are well-positioned to help shape AI adoption at the local level. Microsoft also says the programme forms part of its wider Microsoft Elevate commitment and refers to plans to deliver more than $5 billion in discounts, donations, and grants over the next year to support nonprofit organisations and education systems.

Several examples in the post illustrate how Microsoft says AI is already being applied in nonprofit work. Microsoft says ARcare has used AI to reduce administrative work and estimates it has eliminated six to eight hours of manual tasks per day. Opportunity International is cited as using AI to scale a local-language chatbot for farmers, while Head Start Homes is described as using AI to increase organisational bandwidth and attract new funding. The company also points to de Alliantie, saying AI has helped the organisation improve efficiency in housing support operations while maintaining a human-centred approach.


New Mexico wins major case against Meta

A jury has found Meta Platforms liable for misleading consumers and endangering children in a landmark case brought by the New Mexico Department of Justice. The verdict marks the first successful trial by a US state against a major tech firm over child safety concerns.

Jurors awarded civil penalties totalling $375 million after finding violations of consumer protection law. The case focused on claims that platform design choices exposed young users to harmful and exploitative content.

Evidence presented in court included internal company documents and testimony suggesting awareness of risks to children. Allegations centred on failures to prevent exploitation, as well as features linked to addictive behaviour and exposure to harmful material.

Further proceedings in the US are scheduled, with authorities seeking additional penalties and mandated changes to platform safety measures. Proposed actions include stronger age verification and improved protections for minors online.


Microsoft and NVIDIA unveil AI tools for nuclear energy permitting and operations

Microsoft has announced an AI collaboration with NVIDIA to support nuclear energy projects across permitting, design, construction, and operations. In a post published on 24 March, the company said the initiative aims to provide end-to-end tools for the nuclear sector, focusing on streamlining permitting, accelerating design, and optimising operations.

Microsoft frames the effort within a broader energy challenge, arguing that rising power demand and long project timelines are increasing pressure to accelerate the delivery of firm, carbon-free power. The company says customised engineering, fragmented data, and manual regulatory review slow nuclear projects. It presents AI as a way to make project development more repeatable, traceable, secure, and predictable.

The post says the collaboration spans the full lifecycle of a nuclear plant. Microsoft describes a model in which digital twins, high-fidelity simulations, and AI-assisted workflows support design and engineering, licensing and permitting, construction and delivery, and operations and maintenance.

According to the company, engineers would be able to reuse design patterns, model the impact of changes before construction begins, and link project decisions to supporting evidence and applicable rules. Microsoft also says generative AI can assist with drafting and gap analysis in permit documentation, while predictive modelling and operational digital twins can support anomaly detection and maintenance planning.

Microsoft says traceability and auditability are central to the approach. The company lists four intended qualities of the system: traceable records linking engineering decisions to evidence and regulations, audit-ready documentation, secure use within a governed environment, and predictable outcomes through simulations intended to identify delays before they occur in the real world.

Several case examples are included in the post. Microsoft says Aalo Atomics shortened the permitting process by 92% using its Generative AI for Permitting solution and estimates annual savings of $80 million.

Aalo Atomics Chief Technology Officer Yasir Arafat is quoted as saying: ‘Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.’

Microsoft also says Southern Nuclear has deployed Copilot agents across engineering and licensing workstreams to improve consistency, reuse knowledge faster, and support decision-making. Idaho National Laboratory is described as an early adopter in the US federal context, with Microsoft saying the lab is using AI capabilities to automate the assembly of engineering and safety analysis reports and to create standard methodologies for regulators to adopt the tools safely.

The post also expands beyond those three examples. Microsoft says Everstar, described as an NVIDIA Inception startup, is bringing domain-specific AI for nuclear to Azure to support project workflows and governed data pipelines.

Everstar Chief Executive Officer Kevin Kong is quoted as saying: ‘The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.’

Microsoft also says Atomic Canyon’s Neutron platform is available on the Microsoft Marketplace for nuclear developers via established procurement channels.

At the technical level, Microsoft says the collaboration brings together NVIDIA Omniverse, NVIDIA Earth-2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer. The company presents the stack as a digital ecosystem for nuclear energy on Azure.

The official post is a corporate announcement rather than an independent assessment of the approach’s effectiveness. The published note outlines the company’s intended use cases, named partners, and customer examples, but it does not provide a third-party evaluation of the broader claims regarding delivery speed, regulatory confidence, or sector-wide impact.


Zimbabwe advances AI national strategy with UNESCO support

Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step in shaping the country’s digital future rather than relying solely on traditional development pathways.

Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.

The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.

Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities.

The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.

Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.

The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.

Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.

Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.

International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.

By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.


New AI safety policies target teen protection in apps

OpenAI has released a set of prompt-based safety policies to help developers build safer AI experiences for teenagers. The tools work with the open-weight model gpt-oss-safeguard, turning safety requirements into practical classifiers for real-world use.

The policies address teen risks, including graphic violence, sexual content, harmful body image behaviour, dangerous challenges, roleplay, and age-restricted goods and services. Developers can use them for both real-time filtering and offline content analysis.

The framework was developed with input from organisations such as Common Sense Media and everyone.ai to improve clarity and consistency in teen safety rules. The initiative also responds to long-standing challenges in translating high-level safety goals into precise operational systems.

Open-source availability through the ROOST Model Community allows developers to adapt and expand the policies for different use cases and languages. The framework is a foundational step, not a complete solution, encouraging layered safeguards and ongoing refinement.


Google sets 2029 deadline for post-quantum cryptography migration

Google is leading a transition to post-quantum cryptography by 2029, aiming to secure digital systems against future quantum computing threats rather than continuing to rely on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through ‘harvest now, decrypt later’ attacks, in which encrypted data is collected today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US National Institute of Standards and Technology, while support has already been introduced in Google Chrome and the company’s cloud platforms.

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.


OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Submissions will be assessed by dedicated safety and security teams and may be reassigned between them depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.
