Instacart faces FTC scrutiny over AI pricing tool

US regulators are examining Instacart’s use of AI in grocery pricing, after reports that shoppers were shown different prices for identical items. Sources told Reuters the Federal Trade Commission has opened a probe into the company’s AI-driven pricing practices.

The FTC has issued a civil investigative demand seeking information about Instacart’s Eversight tool, which allows retailers to test different prices using AI. The agency said it does not comment on ongoing investigations, but expressed concern over reports of alleged pricing behaviour.

Scrutiny follows a study of 437 shoppers across four US cities, which found average price differences of 7 percent for the same grocery lists at the same stores. Some shoppers reportedly paid up to 23 percent more than others for identical items, according to the researchers.

Instacart said the pricing experiments were randomised and not based on personal data or individual behaviour. The company maintains that retailers, not Instacart, set prices on the platform, with the exception of Target, where prices are sourced externally and adjusted to cover costs.

The investigation comes amid wider regulatory focus on technology-driven pricing as living costs remain politically sensitive in the United States. Lawmakers have urged greater transparency, while the FTC continues broader inquiries into AI tools used to analyse consumer data and set prices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT expands with a new app directory from OpenAI

OpenAI has opened submissions for third-party apps inside ChatGPT, allowing developers to publish tools that extend conversations with real-world actions. Approved apps will appear in a new in-product directory, enabling users to move directly from discussion to execution.

The initiative builds on OpenAI’s earlier DevDay announcement, where it outlined how apps could add specialised context to conversations. Developers can now submit apps for review, provided they meet the company’s requirements on safety, privacy, and user experience.

ChatGPT apps are designed to support practical workflows such as ordering groceries, creating slide decks, or searching for apartments. Apps can be activated during conversations via the tools menu, by mentioning them directly, or through automated recommendations based on context and usage signals.

To support adoption, OpenAI has released developer resources including best-practice guides, open-source example apps, and a chat-native UI library. An Apps SDK, currently in beta, allows developers to build experiences that integrate directly into conversational flows.

During the initial rollout, OpenAI’s monetisation is limited to external links directing users to developers’ own platforms. The company said it plans to explore additional revenue models over time as the app ecosystem matures.

Competing visions of AGI emerge at Google DeepMind and Microsoft

Two former DeepMind co-founders now leading rival AI labs have outlined sharply different visions for how artificial general intelligence (AGI) should be developed, highlighting a growing strategic divide at the top of the industry.

Google DeepMind chief executive Demis Hassabis has framed AGI as a scientific tool for tackling foundational challenges. These include fusion energy, advanced materials, and fundamental physics. He says current models still lack consistent reasoning across tasks.

Hassabis has pointed to weaknesses such as so-called ‘jagged intelligence’, where systems perform well on complex benchmarks yet fail simple tasks. DeepMind is investing in physics-based evaluations and AlphaZero-inspired research to enable genuine knowledge discovery rather than data replication.

Microsoft AI chief executive Mustafa Suleyman has taken a more product-led stance, framing AGI as an economic force rather than a scientific milestone. He has rejected the idea of a race to AGI, instead prioritising controllable and reliable AI agents that operate under human oversight.

Suleyman has argued that governance, not raw capability, is the central challenge. He has emphasised containment, liability frameworks, and certified agents, reflecting wider tensions between rapid deployment and long-term scientific ambition as AI systems grow more influential.

Russia considers restoring Roblox access after compliance talks

Roblox has signalled willingness to comply with Russian law, opening the possibility of the platform being unblocked in Russia following earlier access restrictions.

Roskomnadzor stated that cooperation could resume if Roblox demonstrates concrete steps, rather than mere declarations, towards meeting domestic legal requirements.

The regulator said Roblox acknowledged shortcomings in moderating game content and ensuring the safety of user chats, particularly involving minors.

Russian authorities stressed that compliance would require systematic measures to remove harmful material and prevent criminal communication rather than partial adjustments.

Access to Roblox was restricted in early December after officials cited the spread of content linked to extremist and terrorist activity.

Roskomnadzor indicated that continued engagement and demonstrable compliance could allow the platform to resume operations under Russian regulatory oversight.

OpenAI expands AI training for newsrooms worldwide

US tech company OpenAI has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.

Google launches Gemini 3 Flash for scalable frontier AI

US tech giant Google has unveiled Gemini 3 Flash, a new frontier AI model designed for developers who need high reasoning performance combined with speed and low cost.

Built on the multimodal and agentic foundations of Gemini 3 Pro, Gemini 3 Flash delivers faster responses at less than a quarter of the price, while surpassing Gemini 2.5 Pro across several major benchmarks.

The model is rolling out through the Gemini API, Google AI Studio, Vertex AI, Android Studio and other developer platforms, offering higher rate limits, batch processing and context caching that significantly reduce operational costs.

Gemini 3 Flash achieves frontier-level results on advanced reasoning benchmarks while remaining optimised for large-scale production workloads, reinforcing Google’s focus on efficiency alongside intelligence.

Early adopters are already deploying Gemini 3 Flash across coding, gaming, deepfake detection and legal document analysis, benefiting from improved agentic capabilities and near real-time multimodal reasoning.

By lowering cost barriers while expanding performance, Gemini 3 Flash enhances Google’s competitive position in the rapidly evolving AI model market. It broadens access to advanced AI systems for developers and enterprises.

OpenAI brings in former UK chancellor George Osborne

Former UK chancellor George Osborne has joined OpenAI in a London-based role. He will lead the OpenAI for Countries programme focused on government partnerships.

The initiative aims to help governments build AI capacity and ensure systems reflect democratic values. OpenAI says more than 50 countries are already involved.

Osborne will work on developing AI infrastructure, boosting AI literacy and improving public services. The role follows discussions with OpenAI chief executive Sam Altman.

His appointment comes as UK-US tech talks face setbacks and investment in AI accelerates. Against this backdrop, financial authorities have warned of risks linked to the sector’s rapid growth.

AI-generated ads face new disclosure rules in South Korea

South Korea will require advertisers to label AI-generated or AI-assisted advertising from early 2026, marking a shift in how the country governs AI in online commerce and consumer protection.

The measure responds to a sharp rise in deceptive ads using synthetic imagery and deepfakes, particularly in healthcare and financial promotions. Regulators say transparency at the point of content delivery is intended to reduce manipulation and restore consumer trust.

Authorities in South Korea acknowledge that mandatory labelling alone may not deter malicious actors, who can bypass rules through offshore hosting or rapidly changing content. Detection challenges and uneven enforcement capacity across platforms remain open concerns.

South Korea’s industry groups warn that the policy could have uneven economic effects within the country’s advertising ecosystem. Large platforms and agencies are expected to adapt quickly, while smaller firms may face higher compliance costs that slow experimentation with generative tools.

Policymakers argue the framework aligns with South Korea’s broader AI governance strategy, positioning the country between innovation-led and precautionary regulatory models as synthetic media becomes more widespread.

Segment Anything adds audio as Meta unveils SAM Audio

Meta has introduced SAM Audio, a new AI model that uses intuitive prompts to isolate and segment sounds from complex audio recordings. The release extends the company’s Segment Anything collection beyond visuals into audio and video workflows.

SAM Audio allows users to separate sounds through text prompts, visual cues, or time-based selections. Creators can extract vocals or instruments, remove background noise, or isolate specific sound sources in recordings without specialised audio engineering tools.

Meta describes SAM Audio as a unified model designed around how people naturally think about sound. It supports combined text, visual, and time-based prompts, enabling flexible audio separation across music, podcasting, film, accessibility, and research.

Meta says the model achieves strong performance across diverse audio environments and is already being used internally to develop next-generation creative tools. The approach lowers technical barriers while expanding the range of possible audio editing applications.

SAM Audio is available through the Segment Anything Playground, where users can test the model with sample assets or upload their own files. Meta has also made the model available for download, signalling broader ambitions to make audio segmentation a core capability of its AI ecosystem.

BioTechEU aims to close Europe’s biotech funding gap

The European Commission and the European Investment Bank Group have launched BioTechEU, a new initiative to mobilise €10 billion in investment for biotechnology and life sciences between 2026 and 2027.

The programme targets Europe’s biotech funding gap, seeking to strengthen global competitiveness by channelling public and private capital into health innovation, including gene therapies, mRNA treatments, personalised medicine and AI-enabled medical technologies.

BioTechEU will operate under the EIB Group’s TechEU framework and draw on instruments such as the InvestEU guarantee. The initiative aligns with broader EU efforts to retain strategic health innovation within Europe and reduce reliance on external markets.

European Health Commissioner Olivér Várhelyi said under-investment continues to constrain biotech startups, adding that the European Commission sees BioTechEU as a way to help promising treatments scale and reach patients more efficiently across the EU.

EIB President Nadia Calviño said Europe has strong scientific talent and ideas, but deeper capital markets are needed. She described BioTechEU as a catalyst for enabling EU-based biotech companies to grow and compete globally.