Oracle expands Oracle AI Database with new agentic AI tools

Oracle has announced new agentic AI capabilities for Oracle AI Database, presenting them as tools for building, deploying, and scaling production-grade AI applications that work with business data across operational databases and analytic lakehouses. The company says the new features are available across multicloud and on-premises environments.

According to Oracle, the announcement concerning Oracle AI Database centres on bringing AI and data together within the database so that agents can securely access real-time enterprise data where it resides. Oracle also says customers can choose AI models, agentic frameworks, open data formats, and deployment platforms, while Oracle Exadata users can use Exadata Powered AI Search for high-volume, multi-step agentic workloads.

Oracle’s new product set includes Oracle Autonomous AI Vector Database, which the company says is intended to simplify vector-based application development while preserving the broader database features of Oracle AI Database. Oracle says the service is available in limited capacity through the Oracle Cloud free tier or a low-cost developer tier, with one-click upgrade to full capabilities as requirements expand.

The company also introduced the Oracle AI Database Private Agent Factory, described as a no-code agent builder that can run in public clouds or on-premises without requiring customers to share data with third parties. Oracle says the service includes pre-built agents such as a Database Knowledge Agent, a Structured Data Analysis Agent, and a Deep Data Research Agent. Oracle Unified Memory Core was also announced as a way to store context for AI agents across vector, JSON, graph, relational, text, spatial, and columnar data, all in a single engine with consistent transactions and security.

A separate part of the announcement focuses on what Oracle describes as AI data risk reduction. Oracle says Deep Data Security applies end-user-specific access rules within the database, so that each user or AI agent acting on a user’s behalf can only see the data the user is allowed to access.
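As a rough illustration of the general idea of end-user-specific access rules, the hedged Python sketch below filters query results against a per-user entitlement list before anything reaches an agent. It is a conceptual stand-in, not Oracle's Deep Data Security implementation or API, and every name in it is hypothetical.

```python
# Illustrative only: a generic end-user access filter, not Oracle's
# Deep Data Security implementation or API. All names are hypothetical.

ROW_ENTITLEMENTS = {
    # hypothetical mapping of end users to the regions they may read
    "alice": {"EMEA"},
    "bob": {"EMEA", "APAC"},
}

ORDERS = [
    {"order_id": 1, "region": "EMEA", "amount": 1200},
    {"order_id": 2, "region": "APAC", "amount": 540},
    {"order_id": 3, "region": "AMER", "amount": 980},
]

def rows_visible_to(end_user: str):
    """Return only the rows the end user is entitled to see.

    An AI agent acting on the user's behalf is handed the same filtered
    view, so it cannot surface data the user could not query directly.
    """
    allowed = ROW_ENTITLEMENTS.get(end_user, set())
    return [row for row in ORDERS if row["region"] in allowed]

print(rows_visible_to("alice"))    # only the EMEA order
print(rows_visible_to("charlie"))  # no entitlements -> empty list
```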

Beyond the database itself, Oracle also announced Private AI Services Container for customers who want to run private model instances without sharing data with third-party AI providers, including in air-gapped environments. Trusted Answer Search was presented as a method for providing answers based on previously created reports rather than relying directly on large language model responses.

Open standards and interoperability form another part of Oracle’s pitch. Oracle says Vectors on Ice adds native support for vector data stored in Apache Iceberg tables, enabling unified search across database and data-lake content. Oracle also announced an Autonomous AI Database MCP Server to allow external AI agents and MCP clients to access Autonomous AI Database capabilities without custom integration code or manual security administration.
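The Model Context Protocol (MCP) is an open standard built on JSON-RPC 2.0, so an external agent typically discovers a server's tools with a tools/list request and invokes them with tools/call. The Python sketch below only shows the general shape of those messages; the tool name and arguments are hypothetical, as Oracle's announcement does not detail the server's tool set.

```python
# Conceptual sketch of the JSON-RPC 2.0 messages an MCP client exchanges
# with an MCP server. The tool name and arguments are hypothetical; they
# are not taken from Oracle's Autonomous AI Database MCP Server.
import json

list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "run_sql_query",  # hypothetical tool name
        "arguments": {"sql": "SELECT COUNT(*) FROM orders"},
    },
}

# In a real integration these messages travel over whatever transport the
# server exposes (for example stdio or HTTP); here we only show their shape.
print(json.dumps(call_tool_request, indent=2))
```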

Juan Loaiza, executive vice president of Oracle Database Technologies, said: ‘The next wave of enterprise AI will be defined by customers’ ability to use AI in business-critical production systems to safely deliver breakthrough innovations, insights, and productivity.’ He added: ‘With Oracle AI Database, customers don’t just store data, they activate it for AI. By architecting AI and data together, we help customers quickly build and manage agentic AI applications that can securely query and act on real enterprise data with stock exchange-level robustness in every leading cloud and on-premises.’

Steven Dickens, CEO and principal analyst at HyperFRAME Research, said: ‘In the era of agentic AI, a unified memory core is essential for agents to maintain context across diverse data types, such as vector, JSON, graph, columnar, spatial, text, and relational, without the latency or staleness of external syncing.’

Dickens added: ‘Only Oracle AI Database delivers this in a single, mission-critical engine with concurrent transactional and analytical processing, high availability, and ironclad security, enabling real-time reasoning over live business data. Organisations without this foundation will struggle with fragmented, unreliable agents, while those leveraging Oracle gain a decisive edge in scalable AI deployment.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft launches nonprofit AI training and fellowship initiative

Microsoft has announced a new programme called Microsoft Elevate for Changemakers, aimed at helping nonprofit leaders build AI skills, credentials, and organisational capacity. In a post published on 25 March, the company said the initiative was introduced alongside its Global Nonprofit Leaders Summit, which it says brought together more than 1,500 nonprofit leaders from around the world.

The company says the programme is designed to help nonprofit organisations adopt AI in ways that reflect their missions and the communities they serve. According to the company, the new initiative includes an AI for Nonprofits credential developed with LinkedIn and NetHope, live and on-demand training on topics such as Copilot, change management, and responsible AI governance, and a Changemaker Fellowship for nonprofit professionals working on AI-related projects.

The AI for Nonprofits credential is built on work across the nonprofit sector, with participants receiving a LinkedIn professional certificate. Microsoft also says the fellowship will provide resources, investment, and expert guidance, while connecting participants to a global cohort and a wider network of nonprofit AI leaders. According to the post, the fellowship is supported by Microsoft together with launch partners EY and Caribou.

Microsoft places the announcement within a broader argument about how AI is affecting labour, communities, and service delivery. The company says nonprofits are often closely connected to people seeking new skills, employment pathways, and community support, and that such organisations are well-positioned to help shape AI adoption at the local level. Microsoft also says the programme forms part of its wider Microsoft Elevate commitment and refers to plans to deliver more than $5 billion in discounts, donations, and grants over the next year to support nonprofit organisations and education systems.

Several examples in the post illustrate how Microsoft says AI is already being applied in nonprofit work. Microsoft says ARcare has used AI to reduce administrative work and estimates it has eliminated six to eight hours of manual tasks per day. Opportunity International is cited as using AI to scale a local-language chatbot for farmers, while Head Start Homes is described as using AI to increase organisational bandwidth and attract new funding. The company also points to de Alliantie, saying AI has helped the organisation improve efficiency in housing support operations while maintaining a human-centred approach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and Tecnológico de Monterrey partner on AI in education initiative

UNESCO and Tecnológico de Monterrey have signed an agreement to collaborate on advancing the use of AI in education, as digital transformation reshapes learning systems and workforce skills across Latin America and the Caribbean.

The agreement establishes a framework for joint work on generating evidence, developing standards and formulating public policy recommendations on AI in education, and supports the launch of a Regional Observatory on Artificial Intelligence in Education.

A financial contribution of $90,000 will support the Observatory’s implementation, following months of technical coordination and institutional validation between the two organisations.

After the signing, technical teams reviewed the operational plan for the first year, including methodological frameworks on teachers’ digital competencies and AI ethics, as well as pilot projects in Chile, El Salvador and Mexico.

According to Esther Kuisch Laroche, the initiative aims to ensure AI contributes to more inclusive, ethical and relevant education systems, while moving from principles to practical solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI details Sora 2 safeguards for likeness, audio, and harmful content

OpenAI has published a new overview of the safety measures built into Sora 2 and the Sora app, setting out how the company says it is approaching provenance, likeness protection, teen safeguards, harmful-content filtering, audio controls, and user reporting tools. The Sora team published the note on 23 March 2026.

OpenAI says every video generated with Sora includes visible and invisible provenance signals, and that all videos also embed C2PA metadata. The company adds that many outputs feature visible moving watermarks that include the creator’s name, while internal reverse-image and audio search tools are used to trace videos back to Sora.

A substantial part of the update focuses on likeness and consent. OpenAI says users can upload images of people to generate videos, but only after attesting that they have consent from the people featured and the right to upload the media. OpenAI also says image-to-video generations involving people are subject to stricter safeguards than Sora Characters, and that images including children and young-looking persons face stricter moderation. Shared videos generated from such images will always carry watermarks, according to the company.

OpenAI also sets out controls linked to its characters feature, which it says is intended to give users stronger control over their likeness, including both appearance and voice. According to the company, users can decide who can use their characters, revoke access at any time, and review, delete, or report videos featuring their characters. OpenAI says it also applies additional restrictions designed to limit major changes to a person’s appearance, avoid embarrassing uses, and maintain broadly consistent identity presentation.

Protections for younger users form another part of the update. OpenAI says teen accounts are subject to stronger limitations on mature output, that age-inappropriate or harmful content is filtered from teen feeds, and that adult users cannot initiate direct messages with teens. Parental controls in ChatGPT can also be used to manage teen messaging permissions and to select a non-personalised feed in the app, while default limits apply to continuous scrolling for teens.

OpenAI says harmful-content controls operate at both creation and distribution stages. Prompt and output checks are used across multiple video frames and audio transcripts to block content including sexual material, terrorist propaganda, and self-harm promotion. OpenAI also says it has tightened policies for video generation compared with image generation because of added realism, motion, and audio, while automated systems and human review are used to monitor feed content against its global usage policies.
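As a loose illustration of checks at both stages, the hedged Python sketch below refuses generation when the prompt itself violates policy and then re-screens sampled frame captions and the audio transcript before distribution. The classifier is a trivial stand-in; OpenAI has not published its actual moderation pipeline, and nothing here is drawn from it.

```python
# Minimal two-stage safety check, assuming a hypothetical keyword classifier.
# This is a conceptual sketch, not OpenAI's moderation system.

BLOCKED_TOPICS = {"sexual material", "terrorist propaganda", "self-harm promotion"}

def flags_in(text: str) -> set[str]:
    """Stand-in classifier: return any blocked topics mentioned in the text."""
    return {topic for topic in BLOCKED_TOPICS if topic in text.lower()}

def check_at_creation(prompt: str) -> bool:
    """Stage 1: refuse generation if the prompt itself violates policy."""
    return not flags_in(prompt)

def check_before_distribution(frame_captions: list[str], transcript: str) -> bool:
    """Stage 2: re-check sampled frame captions and the audio transcript."""
    if any(flags_in(caption) for caption in frame_captions):
        return False
    return not flags_in(transcript)

print(check_at_creation("a dog surfing at sunset"))                 # True
print(check_before_distribution(["city street at night"], "hello"))  # True
```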

Audio generation is treated separately in the note. OpenAI says generated speech transcripts are automatically scanned for possible policy violations, and that prompts intended to imitate living artists or existing works are blocked. The company also says it honours takedown requests from creators who believe an output infringes their work.

User controls and recourse are presented as the final layer. OpenAI says users can choose whether to share videos to the feed, remove published content, and report videos, profiles, direct messages, comments, and characters for abuse. Blocking tools are also available, according to the company, to stop other users from viewing a profile or posts, using a character, or contacting someone through direct message.

OpenAI’s post is framed as a product-safety explanation rather than an independent assessment of the effectiveness of the measures in practice. Much of the note describes controls that the company says it has built into Sora 2, but it does not provide external evaluation data in the published summary.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and NVIDIA unveil AI tools for nuclear energy permitting and operations

Microsoft has announced an AI collaboration with NVIDIA to support nuclear energy projects across permitting, design, construction, and operations. In a post published on 24 March, the company said the initiative aims to provide end-to-end tools for the nuclear sector, focusing on streamlining permitting, accelerating design, and optimising operations.

Microsoft frames the effort within a broader energy challenge, arguing that rising power demand and long project timelines are increasing pressure to accelerate the delivery of firm, carbon-free power. The company says customised engineering, fragmented data, and manual regulatory review slow nuclear projects. It presents AI as a way to make project development more repeatable, traceable, secure, and predictable.

The post says the collaboration spans the full lifecycle of a nuclear plant. Microsoft describes a model in which digital twins, high-fidelity simulations, and AI-assisted workflows support design and engineering, licensing and permitting, construction and delivery, and operations and maintenance.

According to the company, engineers would be able to reuse design patterns, model the impact of changes before construction begins, and link project decisions to supporting evidence and applicable rules. Microsoft also says generative AI can assist with drafting and gap analysis in permit documentation, while predictive modelling and operational digital twins can support anomaly detection and maintenance planning.

Microsoft says traceability and auditability are central to the approach. The company lists four intended qualities of the system: traceable records linking engineering decisions to evidence and regulations, audit-ready documentation, secure use within a governed environment, and predictable outcomes through simulations intended to identify delays before they occur in the real world.

Several case examples are included in the post. Microsoft says Aalo Atomics reduced the permitting process by 92% using its Generative AI for Permitting solution and estimates annual savings of $80 million.

Aalo Atomics Chief Technology Officer Yasir Arafat is quoted as saying: ‘Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.’

Microsoft also says Southern Nuclear has deployed Copilot agents across engineering and licensing workstreams to improve consistency, reuse knowledge faster, and support decision-making. Idaho National Laboratory is described as an early adopter in the US federal context, with Microsoft saying the lab is using AI capabilities to automate the assembly of engineering and safety analysis reports and to create standard methodologies for regulators to adopt the tools safely.

The post also expands beyond those three examples. Microsoft says Everstar, described as an NVIDIA Inception startup, is bringing domain-specific AI for nuclear to Azure to support project workflows and governed data pipelines.

Everstar Chief Executive Officer Kevin Kong is quoted as saying: ‘The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.’

Microsoft also says Atomic Canyon’s Neutron platform is available on the Microsoft Marketplace for nuclear developers via established procurement channels.

At the technical level, Microsoft says the collaboration brings together NVIDIA Omniverse, NVIDIA Earth-2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer. The company presents the stack as a digital ecosystem for nuclear energy on Azure.

The official post is a corporate announcement rather than an independent assessment of the approach’s effectiveness. The published note outlines the company’s intended use cases, named partners, and customer examples, but it does not provide a third-party evaluation of the broader claims regarding delivery speed, regulatory confidence, or sector-wide impact.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Zimbabwe advances AI national strategy with UNESCO support

Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step towards shaping its digital future instead of relying solely on traditional development pathways.

Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.

The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.

Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities instead of advancing without oversight.

The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.

Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.

The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.

Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.

Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.

International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.

By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google sets 2029 deadline for post-quantum cryptography migration

Google is leading a transition to post-quantum cryptography by 2029, aiming to secure digital systems against future quantum computing threats rather than relying on existing encryption standards.

The move reflects growing concern that advances in quantum hardware and algorithms could eventually undermine current cryptographic protections, particularly through attacks that store encrypted data today for decryption in the future.

Quantum computers are expected to challenge widely used encryption and digital signature systems, prompting the need for early transition strategies.

Google has updated its threat model to prioritise authentication services, recognising that digital signatures pose a critical vulnerability if not addressed before the arrival of quantum machines capable of cryptanalysis.

The company is encouraging broader industry action to accelerate migration efforts and reduce long-term security risks.
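One common early step in such a migration is simply inventorying where quantum-vulnerable signature algorithms are still in use. The toy Python check below illustrates that idea; the system names are hypothetical, and the sketch is not a description of Google's tooling or recommendations beyond the general principle.

```python
# Illustrative only: a toy inventory check that flags signature algorithms
# considered vulnerable to a future cryptographically relevant quantum
# computer, versus NIST-standardised post-quantum alternatives.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "Ed25519", "DSA"}
POST_QUANTUM = {"ML-DSA", "SLH-DSA"}  # FIPS 204 and FIPS 205 signature schemes

deployed = {  # hypothetical systems and the signature algorithm each uses
    "firmware-signing": "RSA",
    "tls-server-auth": "ECDSA",
    "software-updates": "ML-DSA",
}

for system, algorithm in deployed.items():
    if algorithm in QUANTUM_VULNERABLE:
        print(f"{system}: {algorithm} needs a post-quantum migration plan")
    elif algorithm in POST_QUANTUM:
        print(f"{system}: {algorithm} is already quantum-resistant")
```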

As part of its strategy, Google is integrating post-quantum cryptography into its products and services.

Android 17 will include quantum-resistant digital signature protection aligned with standards developed by the US National Institute of Standards and Technology (NIST). At the same time, support has already been introduced in Google Chrome and cloud platforms.

These measures aim to bring advanced security technologies directly to users instead of limiting them to experimental environments.

By setting a clear timeline, Google aims to instil urgency and direction across the wider technology sector.

The transition to post-quantum cryptography is expected to become a critical step in maintaining online security, ensuring that digital infrastructure remains resilient as quantum computing capabilities continue to evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches a public Safety Bug Bounty programme

OpenAI has introduced a public Safety Bug Bounty programme to identify misuse and safety risks across its AI systems. The initiative expands the company’s existing vulnerability reporting framework by focusing on harms that fall outside traditional security definitions.

The programme covers AI threats such as agentic risks, prompt injection, data exfiltration, and bypassing platform integrity controls. Researchers are encouraged to submit reproducible cases where AI systems perform harmful actions or expose sensitive information.

Unlike standard security reports, the initiative accepts safety issues that pose real-world risk, even if they are not classified as technical vulnerabilities. Dedicated safety and security teams will assess submissions, which may be reassigned between teams depending on relevance.

The scheme is open to external researchers and ethical hackers to strengthen AI safety through broader collaboration. OpenAI says the approach is intended to improve resilience against evolving misuse as AI systems become more advanced.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU strengthens semiconductor strategy through Chips Act dialogue

Executive Vice-President Henna Virkkunen will host a high-level dialogue in Brussels to assess the implementation of the European Chips Act Regulation and gather industry feedback ahead of its planned revision.

Stakeholders from across the semiconductor ecosystem are expected to exchange views and present recommendations to shape future policy direction.

The initiative forms part of a broader European Commission strategy to reinforce technological sovereignty and competitiveness, rather than relying heavily on external suppliers.

The Chips Act seeks to strengthen Europe’s semiconductor ecosystem, improve supply chain resilience, and reduce strategic dependencies in critical technologies.

The dialogue follows a public consultation and call for evidence conducted in autumn 2025, with findings set to inform the upcoming legislative revision.

Industry representatives will provide direct input through a report outlining challenges, opportunities, and proposed policy adjustments, contributing to a more targeted and effective framework for semiconductor development.

Looking ahead, the revision of the Chips Act will be integrated into a wider Technological Sovereignty package designed to boost the capacity of Europe’s digital industries.

By combining stakeholder engagement with policy reform, the European Commission aims to ensure that semiconductor innovation and production can expand across the EU rather than remain constrained by reliance on external suppliers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia launches AI platform Rakan Tani to support farmers and stabilise incomes

The National AI Office (NAIO), through its NAIO Lab, is advancing Malaysia’s AI-driven development by building an ecosystem that supports innovation, collaboration, and startups. NAIO Lab aims to position the country as a hub for AI innovation where developers can experiment and create practical solutions.

Rakan Tani, the first project under NAIO Lab, is an AI-powered digital platform designed to transform the agricultural sector. It connects farmers with buyers early in the crop cycle and uses AI-driven order matching to help secure competitive prices and improve financial predictability.
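As a purely hypothetical illustration of what pre-harvest order matching can look like, the Python sketch below pairs each buyer order with the cheapest farmer offer that still covers the requested quantity. It conveys the general idea only and is not NAIO's published algorithm; all names and figures are invented.

```python
# Hypothetical sketch of pre-harvest order matching: pair each buyer order
# with the cheapest farmer offer that can still cover the quantity wanted.
# Not NAIO's algorithm; all names and numbers are illustrative.

farmer_offers = [  # (farmer, crop, tonnes available, asking price per tonne)
    ("Farm A", "chilli", 5, 900),
    ("Farm B", "chilli", 3, 850),
]

buyer_orders = [  # (buyer, crop, tonnes wanted, max price per tonne)
    ("Wholesaler X", "chilli", 4, 950),
    ("Retailer Y", "chilli", 2, 870),
]

def match(offers, orders):
    offers = sorted(offers, key=lambda o: o[3])  # cheapest offers first
    matches = []
    for buyer, crop, wanted, max_price in orders:
        for i, (farmer, f_crop, available, price) in enumerate(offers):
            if f_crop == crop and price <= max_price and available >= wanted:
                matches.append((buyer, farmer, wanted, price))
                offers[i] = (farmer, f_crop, available - wanted, price)
                break
    return matches

print(match(farmer_offers, buyer_orders))
# [('Wholesaler X', 'Farm A', 4, 900), ('Retailer Y', 'Farm B', 2, 850)]
```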

The platform integrates multiple AI-driven features, including pre-harvest commerce, subsidy access via national ID systems, agriculture financing using pre-harvest orders as collateral, real-time cash payouts through digital banking, and logistics coordination with distributors and providers. It is delivered via WhatsApp and supports both Malay and English, with a pilot planned in Terengganu in May 2025.

NAIO Lab also provides AI startups with resources, mentorship, and funding, enabling collaboration between experts, researchers, and entrepreneurs. The initiative is supported by partnerships across government, academia, and industry, including the Ministry of Digital, Ministry of Agriculture and Food Security, GAIV, UPM, and Segi Fresh, with the goal of accelerating AI adoption and supporting sustainable economic growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!