Joule Agent workshops help organisations build practical AI agent solutions

Artificial intelligence agents (autonomous systems that perform tasks or assist with decision-making) are increasingly part of digital transformation discussions, but their value depends on solving actual business problems rather than adopting technology for its own sake.

SAP’s AppHaus Joule Agent Discovery and Design workshops provide a structured, human-centred approach to help organisations discover where agentic AI can deliver real impact and design agents that collaborate effectively with humans.

The Discovery workshop focuses on identifying challenges and inefficiencies where automation can add value, guiding participants to select high-priority use cases that suit agentic solutions.

The Design workshop then brings users and business experts together to define each AI agent’s role, responsibilities and required skills. By the end of these sessions, participants have detailed plans defining tasks, workflows and instructions that can be translated into actual AI agent implementations.
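
As a purely hypothetical illustration of what such a plan can become, the output of a Design workshop could be captured as a small structured specification before any implementation work starts. The field names and the example agent below are invented for illustration and are not SAP’s Joule artefacts.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured agent definition that a Design workshop
# might produce -- field names and example values are illustrative only.

@dataclass
class AgentSpec:
    name: str
    role: str                                           # what the agent is responsible for
    skills: list[str] = field(default_factory=list)     # capabilities it needs
    workflow: list[str] = field(default_factory=list)   # ordered tasks it performs
    handoffs: list[str] = field(default_factory=list)   # points where a human decides

invoice_agent = AgentSpec(
    name="InvoiceMatcher",
    role="Match incoming invoices to purchase orders and flag mismatches",
    skills=["read PDF invoices", "query purchase-order records", "draft emails"],
    workflow=["extract invoice fields", "look up matching PO", "compare totals"],
    handoffs=["approve payment", "escalate disputed invoices"],
)
print(invoice_agent.role)
```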

SAP also supports these formats with self-paced learning courses and toolkits to help anyone run the workshops confidently, emphasising practical human–AI partnerships rather than technology hype.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini users can now build custom AI mini-apps with Opal

Google has expanded the availability of Opal, a no-code experimental tool from Google Labs, by integrating it directly into the Gemini web application.

This integration allows users to build AI-powered mini-apps, known as Gems, without writing any code, using natural language descriptions and a visual workflow editor inside Gemini’s interface.

Previously available only via separate Google Labs experiments, Opal now appears in the Gems manager section of the Gemini web app, where users can describe the functionality they want and have Gemini generate a customised mini-app.

These mini-apps can be reused for specific tasks and workflows and saved as part of a user’s Gem collection.

The no-code ‘vibe-coding’ approach aims to democratise AI development by enabling creators, developers and non-technical users alike to build applications that automate or augment tasks, all through intuitive language prompts and visual building blocks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instacart faces FTC scrutiny over AI pricing tool

US regulators are examining Instacart’s use of AI in grocery pricing, after reports that shoppers were shown different prices for identical items. Sources told Reuters the Federal Trade Commission has opened a probe into the company’s AI-driven pricing practices.

The FTC has issued a civil investigative demand seeking information about Instacart’s Eversight tool, which allows retailers to test different prices using AI. The agency said it does not comment on ongoing investigations but has expressed concern over the pricing behaviour alleged in the reports.

Scrutiny follows a study of 437 shoppers across four US cities, which found average price differences of 7 percent for the same grocery lists at the same stores. Some shoppers reportedly paid up to 23 percent more than others for identical items, according to the researchers.

Instacart said the pricing experiments were randomised and not based on personal data or individual behaviour. The company maintains that retailers, not Instacart, set prices on the platform, with the exception of Target, where prices are sourced externally and adjusted to cover costs.

The investigation comes amid wider regulatory focus on technology-driven pricing as living costs remain politically sensitive in the United States. Lawmakers have urged greater transparency, while the FTC continues broader inquiries into AI tools used to analyse consumer data and set prices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT expands with a new app directory from OpenAI

OpenAI has opened submissions for third-party apps inside ChatGPT, allowing developers to publish tools that extend conversations with real-world actions. Approved apps will appear in a new in-product directory, enabling users to move directly from discussion to execution.

The initiative builds on OpenAI’s earlier DevDay announcement, where it outlined how apps could add specialised context to conversations. Developers can now submit apps for review, provided they meet the company’s requirements on safety, privacy, and user experience.

ChatGPT apps are designed to support practical workflows such as ordering groceries, creating slide decks, or searching for apartments. Apps can be activated during conversations via the tools menu, by mentioning them directly, or through automated recommendations based on context and usage signals.

To support adoption, OpenAI has released developer resources including best-practice guides, open-source example apps, and a chat-native UI library. An Apps SDK, currently in beta, allows developers to build experiences that integrate directly into conversational flows.
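
OpenAI has described the Apps SDK as building on the Model Context Protocol (MCP), which makes a ChatGPT app’s backend essentially a tool server. Under that assumption, a minimal sketch using the open-source `mcp` Python package might look like the following; the grocery-list tool and its logic are hypothetical, not one of OpenAI’s example apps.

```python
# Minimal MCP-style tool server sketch using the open-source `mcp` Python
# package. The grocery-list tool here is hypothetical, not an OpenAI example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("grocery-helper")

@mcp.tool()
def build_grocery_list(meals: list[str]) -> list[str]:
    """Return a flat shopping list for the requested meals (toy logic)."""
    pantry = {
        "pasta": ["spaghetti", "tomatoes", "basil"],
        "salad": ["lettuce", "cucumber", "olive oil"],
    }
    items: list[str] = []
    for meal in meals:
        items.extend(pantry.get(meal.lower(), [meal]))
    return items

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```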

During the initial rollout, monetisation is limited to external links that direct users to developers’ own platforms. OpenAI said it plans to explore additional revenue models over time as the app ecosystem matures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Competing visions of AGI emerge at Google DeepMind and Microsoft

Two former DeepMind co-founders now leading rival AI labs have outlined sharply different visions for how artificial general intelligence (AGI) should be developed, highlighting a growing strategic divide at the top of the industry.

Google DeepMind chief executive Demis Hassabis has framed AGI as a scientific tool for tackling foundational challenges. These include fusion energy, advanced materials, and fundamental physics. He says current models still lack consistent reasoning across tasks.

Hassabis has pointed to weaknesses such as so-called ‘jagged intelligence’, where systems perform well on complex benchmarks yet fail at simple tasks. DeepMind is investing in physics-based evaluations and AlphaZero-inspired research to enable genuine knowledge discovery rather than data replication.

Microsoft AI chief executive Mustafa Suleyman has taken a more product-led stance, framing AGI as an economic force rather than a scientific milestone. He has rejected the idea of a race, instead prioritising controllable and reliable AI agents that operate under human oversight.

Suleyman has argued that governance, not raw capability, is the central challenge. He has emphasised containment, liability frameworks, and certified agents, reflecting wider tensions between rapid deployment and long-term scientific ambition as AI systems grow more influential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands AI training for newsrooms worldwide

US tech company OpenAI has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI framework simplifies complex scientific problems into basic equations

A team of scientists has created a new AI method that addresses complex problems across science and engineering by reducing them to simpler mathematical equations.

Unlike typical black-box AI models, this approach focuses on interpretable representations that can be expressed in basic symbolic forms, aiding understanding and trust in AI-generated solutions.

The research demonstrates that this symbolic reasoning capability allows AI to uncover underlying structure in tasks such as physics simulations, optimisation challenges and system modelling, potentially boosting both accuracy and generalisation.
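
The article does not detail the team’s method, but a toy example helps show what ‘reducing a problem to a basic symbolic equation’ means in practice. The sketch below performs naive symbolic regression: it fits a handful of candidate closed-form expressions to noisy data and keeps the one with the lowest error. It is purely illustrative and not the researchers’ framework.

```python
import numpy as np

# Toy symbolic regression: recover a simple closed-form equation from data.
# Purely illustrative -- not the method from the research described above.

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = 3 * x**2 + 2 + rng.normal(scale=0.1, size=x.shape)  # hidden "ground truth"

# Candidate symbolic forms: each maps x to a design matrix of basis terms.
candidates = {
    "a*x + b":      lambda x: np.column_stack([x, np.ones_like(x)]),
    "a*x**2 + b":   lambda x: np.column_stack([x**2, np.ones_like(x)]),
    "a*sin(x) + b": lambda x: np.column_stack([np.sin(x), np.ones_like(x)]),
    "a*exp(x) + b": lambda x: np.column_stack([np.exp(x), np.ones_like(x)]),
}

best = None
for form, basis in candidates.items():
    A = basis(x)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit a, b by least squares
    err = np.mean((A @ coeffs - y) ** 2)            # mean squared error
    if best is None or err < best[2]:
        best = (form, coeffs, err)

form, (a, b), err = best
print(f"best form: {form} with a={a:.2f}, b={b:.2f} (mse={err:.4f})")
# -> typically recovers 'a*x**2 + b' with a ~ 3 and b ~ 2
```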

Researchers argue that breaking problems down into fundamental components not only enhances performance but also makes AI outputs more understandable to human experts.

By combining machine learning with classical mathematical reasoning, the work points toward a hybrid paradigm in which AI augments human insight rather than merely approximating outcomes. Such methods could accelerate scientific discovery in fields where complexity has traditionally limited the effectiveness of computational approaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Gemini 3 Flash for scalable frontier AI

US tech giant Google has unveiled Gemini 3 Flash, a new frontier AI model designed for developers who need high reasoning performance combined with speed and low cost.

Built on the multimodal and agentic foundations of Gemini 3 Pro, Gemini 3 Flash delivers faster responses at less than a quarter of the price, while surpassing Gemini 2.5 Pro across several major benchmarks.

The model is rolling out through the Gemini API, Google AI Studio, Vertex AI, Android Studio and other developer platforms, offering higher rate limits, batch processing and context caching that significantly reduce operational costs.
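
For scale, a basic request through the Gemini API with the `google-genai` Python SDK might look like the sketch below. The model identifier is an assumption for illustration and should be checked against Google’s current model list; batch processing and context caching are offered as separate API features.

```python
# Illustrative request via the google-genai SDK; the model string below is an
# assumption -- check Google's current model list for the exact identifier.
from google import genai

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-3-flash",  # hypothetical model name for Gemini 3 Flash
    contents="Summarise the key obligations in this contract clause: ...",
)
print(response.text)
```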

Gemini 3 Flash achieves frontier-level results on advanced reasoning benchmarks while remaining optimised for large-scale production workloads, reinforcing Google’s focus on efficiency alongside intelligence.

Early adopters are already deploying Gemini 3 Flash across coding, gaming, deepfake detection and legal document analysis, benefiting from improved agentic capabilities and near real-time multimodal reasoning.

By lowering cost barriers while expanding performance, Gemini 3 Flash strengthens Google’s competitive position in the rapidly evolving AI model market and broadens access to advanced AI systems for developers and enterprises.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNDP and UNESCO support AI training for judiciary

UNESCO and UNDP have partnered to enhance judicial capacity on the ethical use of AI. A three-day Bangkok training, supported by the Thailand Institute of Justice, brought together 27 judges from 13 Asia-Pacific countries to discuss the impact of AI on justice and safeguards for fairness.

Expert sessions highlighted the global use of AI in court administration, research, and case management, emphasising opportunities and risks. Participants explored ways to use AI ethically while protecting human rights and judicial integrity, warning that unsupervised tools could increase bias and undermine public trust.

Trainers emphasised that AI must be implemented with careful attention to bias, transparency, and structural inequalities.

Judges reflected on the growing complexity of verifying evidence in the age of generative AI and deepfakes, and acknowledged that responsible AI can improve access to justice, support case reviews, and free time for substantive decision-making.

The initiative concluded with a consensus that AI adoption in courts should be guided by governance, transparency, and ongoing dialogue. The UNDP will continue to collaborate in advancing ethical, human rights-focused AI in regional judiciaries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated ads face new disclosure rules in South Korea

South Korea will require advertisers to label AI-generated or AI-assisted advertising from early 2026, marking a shift in how the country governs AI in online commerce and consumer protection.

The measure responds to a sharp rise in deceptive ads using synthetic imagery and deepfakes, particularly in healthcare and financial promotions. Regulators say transparency at the point of content delivery is intended to reduce manipulation and restore consumer trust.

Authorities in South Korea acknowledge that mandatory labelling alone may not deter malicious actors, who can bypass rules through offshore hosting or rapidly changing content. Detection challenges and uneven enforcement capacity across platforms remain open concerns.

South Korea’s industry groups warn that the policy could have uneven economic effects within the country’s advertising ecosystem. Large platforms and agencies are expected to adapt quickly, while smaller firms may face higher compliance costs that slow experimentation with generative tools.

Policymakers argue the framework aligns with South Korea’s broader AI governance strategy, positioning the country between innovation-led and precautionary regulatory models as synthetic media becomes more widespread.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!